satellite2 4 hours ago
Aren't you just moving the problem a little further along? If you can't trust it to implement carefully specified features, why would you believe it would properly review them?
frde_me 2 hours ago | parent
It's hard to explain, but I've found LLMs to be significantly better in the "review" stage than in the implementation stage. The LLM will do something and not notice at all that it did it badly, but the same LLM, asked to review the result against the same starting requirement, will catch the problem almost every time. The missing piece in these tools is an automatic feedback loop between the two roles: one LLM in review mode, one in implementation mode.
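A minimal sketch of what that loop could look like, assuming the OpenAI Python SDK (the model name, the APPROVED convention, and the max_rounds cutoff are all just illustrative choices, not part of any existing tool):

    from openai import OpenAI

    client = OpenAI()

    def llm(prompt: str) -> str:
        # Single chat-completion call; any chat model would do here.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def implement_with_review(requirement: str, max_rounds: int = 3) -> str:
        # Implementation mode: produce a first attempt from the requirement.
        code = llm(f"Implement this requirement:\n{requirement}")
        for _ in range(max_rounds):
            # Review mode: check the attempt against the *same* requirement.
            review = llm(
                "Review the following code strictly against the requirement. "
                "Reply APPROVED if it fully satisfies it; otherwise list the problems.\n"
                f"Requirement:\n{requirement}\n\nCode:\n{code}"
            )
            if review.strip().startswith("APPROVED"):
                break
            # Feed the review back into implementation mode and try again.
            code = llm(
                "Revise the code to address the review feedback.\n"
                f"Requirement:\n{requirement}\n\nCode:\n{code}\n\nReview:\n{review}"
            )
        return code

The point is that the reviewer and the implementer can be the exact same model; only the framing of the prompt changes, and the loop terminates either on approval or after a fixed number of rounds.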