sanitycheck | 18 hours ago
1 is easy enough for trivial tasks, but in a complex (typically horrible) production codebase nearly all the work is investigation and debugging. However good the initial prompt is, the context soon becomes flooded with log output and code, and the LLM goes off the rails quite quickly.

Doing 2 well is the AI babysitting mentioned in the article. Of course you can stop it every minute and tell it to do something else, then watch it like a hawk to make sure it does it right, then clear context when it ignores you and makes the mistake you told it not to make. But that is often slower than just doing the work yourself to begin with, which probably explains the findings we've all seen that LLM use is actually reducing productivity.

I think living with crappy AI code is the price we currently have to pay for getting development done quicker. Maybe in a year it'll have improved enough that we can ask it to clean up all the mess it made. (Possibly I just have higher standards than most; other humans can be quite bad too.)
fcpguru | 18 hours ago | parent
"all the work is investigation and debugging" - Yes! Exactly you can ask the AI a bunch of questions first and really dig into what the codebase currently does. Then spend the time crafting that prompt that explains how to surgically do what needs to be done. If you are watching it every min like a hawk you are doing it wrong. You need to watch it more like a VERY smart junior dev and trust but verify. I'm not saying it's easy to get good at these new skill sets. But simply throwing your hands up and saying "I'll just walk everywhere vs using a bicycle" isn't a strategy that's going to work well. |