john01dav 8 hours ago
> Wherever LLM-generated code is used, it becomes the responsibility of the engineer. As part of this process of taking responsibility, self-review becomes essential: LLM-generated code should not be reviewed by others if the responsible engineer has not themselves reviewed it. Moreover, once in the loop of peer review, generation should more or less be removed: if code review comments are addressed by wholesale re-generation, iterative review becomes impossible.

My general procedure for using an LLM to write code, which is in the spirit of what is advocated here, is:

1) First, feed the existing relevant code into the LLM. This is usually just a few source files in a larger project.

2) Describe what I want to do, either giving an architecture or letting the LLM generate one. I tell it not to write code at this point.

3) Let it talk through the plan, and make sure that I like it. I converse to address any deficiencies that I see, and I almost always find some.

4) Tell it to generate the code.

5) Skim and test the code to see if it's generally correct, and have it make corrections as needed.

6) Closely read the entire generated artifact at this point, and make manual corrections (occasionally automatic corrections like "replace all C style casts with the appropriate C++ style casts", followed by a review of the diff; a small sketch of that kind of change follows below).

The hardest part for me is #6, where I feel a strong emotional bias towards not doing it, since I am not yet aware of any errors compelling such action.

This lets me operate at a higher level of abstraction (architecture) and removes the drudgery of turning an architectural idea into written, precise code. But when doing so, you are abandoning those details to a non-deterministic system. This is different from, for example, using a compiler or a higher-level VM language. With those tools, you can understand how they work, rapidly form a good idea of what you're going to get, and rely on robust assurances. Understanding LLMs helps, but not to the same degree.
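To make the automatic-correction example in #6 concrete, here is a minimal, hypothetical C++ sketch (the types and names are invented); the transformation itself is mechanical, but the resulting diff is still worth reading:

    struct Base { virtual ~Base() = default; };
    struct Derived : Base { int hits = 0; int total = 1; };

    // Before: C-style casts, the way generated code often arrives
    double hit_ratio_before(Base* b) {
        Derived* d = (Derived*) b;
        return (double) d->hits / (double) d->total;
    }

    // After: the same logic with C++-style casts (behavior-preserving here;
    // dynamic_cast would instead add a runtime check on the downcast)
    double hit_ratio_after(Base* b) {
        Derived* d = static_cast<Derived*>(b);
        return static_cast<double>(d->hits) / static_cast<double>(d->total);
    }

Even for a change this small, the diff review in #6 is where you decide, for example, whether a checked dynamic_cast is what you actually want.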
ryandrake 6 hours ago
I've found that your step 6 takes the vast majority of the time I spend programming with LLMs, like 10x or more the combined time steps 1-5 take. And that's if the code the LLM produced actually works. If it doesn't work (which happens quite often), then even more handholding and corrections are needed. It's really a grind. I'm still not sure whether I am net saving time using these tools. I always wonder about the people who say LLMs save them so much time: do you just accept the edits they make without reviewing each and every line?
ec109685 6 hours ago
Don't make manual corrections. If you keep all edits driven by the LLM, you can reuse that knowledge later in the session, or ask your model to commit the guidelines to long-term memory.
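For example, the kind of guideline this produces could be committed to a project-level memory file; the file name and wording below are purely hypothetical, since the comment doesn't name a specific tool:

    # CONVENTIONS.md (hypothetical long-term memory / guidelines file)
    - Never use C-style casts; use static_cast, const_cast, or reinterpret_cast as appropriate.
    - When a review comment is addressed, record the general rule here so future generations follow it.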