bob1029 16 hours ago
I have my best successes by keeping things constrained to method-level generation; most of what I dump into ChatGPT is a prompt for a single method.

I think generating more than one method at a time is playing with fire. Individual methods can be generated by the LLM and tested in isolation. By going a little slower, you can incrementally build up and trust your understanding of the problem space. If the LLM operates over a whole set of methods at once, every iteration feels like starting over.
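A minimal sketch of the workflow described above, using a hypothetical `slugify` helper as the kind of small, self-contained method one might ask for: generate one function, then verify it in isolation before moving on.

```python
# Hypothetical example of a single LLM-generated method. The function name
# and behavior are illustrative assumptions, not from the comment itself.

def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug."""
    # Lowercase, replace every non-alphanumeric character with a space,
    # then join the remaining words with hyphens.
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

# Test the generated method in isolation before trusting it.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  LLMs & Methods  ") == "llms-methods"
```

Because the method has no dependencies on the rest of the codebase, a couple of quick assertions are enough to accept or reject the generation before asking for the next one.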
theshrike79 22 minutes ago
"Dumping into ChatGPT" is by far the worst way to work with LLMs, then it lacks the greater context of the project and will just give you the statistical average output. Using an agentic system that can at least read the other bits of code is more efficient than copypasting snippets to a web page. | ||||||||
samdoesnothing 15 hours ago
I do this, but with Copilot. I write a comment and then spam opt-tab; about 50% of the time it does what I want, and I can read each suggestion line by line before tabbing to accept the next one. It's a genuine productivity boost, and it doesn't feel like AI slop; sometimes it feels like it's actually reading my mind and just saving me the typing...