Could you clarify why you're arguing an agent needs to be a single model (really a single instance) to be "autonomous"?
Or rather, for example, why would the model that makes the kill decision need to be the same one that flies the drone? I imagine that's the use case the Pentagon is most interested in.