minimaxir 5 hours ago
> Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.

This is reductive to the point of being incorrect. One of the misconceptions about working with agents is that the prompts are typically simple: it's more romantic to imagine that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just shipped the one-shot output.

As counterexamples, here are two sets of prompts from my own projects, each of which very much articulates an idea in the first prompt, with intentional constraints/specs, and then iterates on the results:

https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)

https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)

It's the iteration that is the true engineering work, because it requires enough knowledge to a) know what's wrong and b) know whether the solution actually fixes it. These projects are what I call super-Pareto: the first prompt got 95% of the work done, but 95% of the effort was spent afterwards improving it, with manual human testing, rather than watching the agent generate code, making up the bulk of that work.