majormajor 3 days ago
> It is extremely important to identify the most important task the LLM needs to perform and write out the algorithm for it. Try to role-play as the LLM and work through examples, identify all the decision points and write them explicitly. It helps if this is in the form of a flow-chart.

I get a bit lost at passages like this one from the link. The lessons in the article match my experience with LLMs and the tools around them (see also: RAG is a pain in the ass, and vector embedding similarity is very far from a magic bullet), but the takeaway - write really good prompts instead of writing code - doesn't ring true.

If I need to write out all the decision points and steps of the change I'm going to make, why am I not just doing it myself? Especially when I have an editor that can do a lot of automated changes faster and more safely than grep-based, text-first tooling. If I know the language, syntax isn't the bottleneck; if I don't know the language, it's harder to trust the model's output. (And if I 90% know the language but have some questions, I use an LLM to plow through the lines I used to have to go to Google for - which is a speedup, but a single-digit-percentage one.)

My experience is that the tools fall down pretty quickly because I keep trying to get them to let me skip the details of every single task. That's how I work with real human coworkers. And then something goes sideways.

When I try to pseudocode the full flow instead of just writing the code, I lose the speed advantage, and I often end up in a nasty 80%-there-but-I-don't-really-know-how-to-fix-the-other-20%-without-breaking-the-80% situation, because I notice a case I never explicitly talked about and the model guessed wrong on it. So then it's either slow and tedious, or `git reset` and try again.

(99% of these issues go away when doing greenfield tooling, scripts for operations, or prototyping - which is what the vast majority of the compelling "wow" examples I've seen have been - but that only describes my day job some of the time.)
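To make the "why am I not just doing it myself" point concrete, here's a toy sketch - the function, field names, and routing rules are all invented for illustration. Once the decision points are spelled out explicitly enough for the LLM, the prompt is already pseudocode:

    # The spec, written out the way the article suggests:
    #   1. If the record has no "email" field, skip it.
    #   2. If the email's domain is on the internal list, route it "internal".
    #   3. Otherwise route it "external", unless the record is flagged
    #      "vip", in which case route it "priority".

    INTERNAL_DOMAINS = {"example.com"}  # invented for the example

    def route(record):
        email = record.get("email")
        if email is None:                  # decision point 1
            return None
        domain = email.rsplit("@", 1)[-1]
        if domain in INTERNAL_DOMAINS:     # decision point 2
            return "internal"
        if record.get("vip"):              # decision point 3
            return "priority"
        return "external"

    assert route({"email": "a@example.com"}) == "internal"
    assert route({"email": "b@other.com", "vip": True}) == "priority"
    assert route({}) is None

The spec and the code are the same artifact, line for line; all the LLM saves me here is typing out the syntax.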