atonse 8 hours ago
My answer to this is often to get the LLMs to do multiple rounds of code review (depending on the criticality of the code, doing a review on every commit; this, though, was clearly a zero-impact hobby project). They are remarkably good at catching things, especially if you do it on every commit.
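For concreteness, here is a minimal sketch of the kind of loop I mean, something you could call from a post-commit hook. The OpenAI Python client is just one way to do it; the model name, prompt, and round count are illustrative assumptions, not a recipe:

```python
#!/usr/bin/env python3
# Hypothetical review loop: ask an LLM to review the latest commit's diff
# over several rounds, feeding each round's notes into the next one.
import subprocess

from openai import OpenAI

ROUNDS = 3  # bump this up for more critical code

def latest_diff() -> str:
    # Diff of the most recent commit only.
    return subprocess.run(
        ["git", "diff", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> None:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    diff = latest_diff()
    notes = ""
    for i in range(1, ROUNDS + 1):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model works here
            messages=[
                {"role": "system",
                 "content": "You are a strict code reviewer. Point out bugs, "
                            "edge cases, and risky changes in this diff."},
                {"role": "user",
                 "content": f"Diff:\n{diff}\n\nPrior review notes:\n{notes}"},
            ],
        )
        notes = resp.choices[0].message.content
        print(f"--- review round {i} ---\n{notes}\n")

if __name__ == "__main__":
    main()
```

Feeding the previous round's notes back in is the point of doing multiple rounds: later passes can build on (or contradict) what the earlier ones flagged.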
usrbinbash 8 hours ago
> My answer to this is often to get the LLMs to do multiple rounds of code review

So I am supposed to trust the machine, which I know I cannot trust to write the initial code correctly, to somehow do the review correctly? Possibly multiple times? Without making NEW mistakes in the review process? Sorry, not sorry, but that sounds like trying to clean a dirty floor by rubbing more dirt over it.
| ||||||||||||||||||||||||||||||||||||||||||||