wakawaka28 2 days ago:
I don't see why two LLMs together (or one, alternating between tasks) could not separately develop a spec and an implementation. The human input could be a set of abstract requirements; the two systems would interact and cross-check each other to meet those requirements, perhaps with some code/spec reviews by humans. I really don't see it ever working without one or more humans in the loop, if only to confirm that what is being done is actually what the human(s) intended. The humans would ideally be able to say as little as possible to get what they want. Unless/until we get powerful AGI, we will need technical human reviewers.
danaris 2 days ago (in reply):
> I really don't see it ever working without one or more humans in the loop, if only to confirm that what is being done is actually what the human(s) intended.

That is precisely my point.