munksbeer 2 hours ago
I guide the AI. If I see it produce something I think can be done better, I either do it myself or point it in the right direction. It definitely doesn't do a good job of spotting areas ripe for building abstractions, but that's our job. This thing does the boring parts, and I get to use my creativity thinking about how to make the code more elegant, which is the part I love. What's not to love about that?
Nextgrid 2 hours ago | parent
If you're repeatedly prompting, I'll defer to my usual retort about LLM coding: programming is about translating unclear requirements in a verbose language (English) into a terse one (a programming language). It's generally much faster for me to write the terse language directly than to play a game of telephone with an intermediary in the verbose language and hope it translates my intentions into the terse one.

You mention that you prompt the AI and, if it outputs sub-par results, rewrite the code yourself. That's my point: over time you learn what an LLM is good at and what it isn't, and you stop bothering with it for the stuff it's not good at. Thing is, as a senior engineer, most of what you do shouldn't be stuff an LLM is good at to begin with. That's not the LLM replacing you; that's the LLM augmenting you.

Enjoy your sensible use of LLMs! But they are not the silver bullet that billions of dollars of investment desperately want us to believe they are.