interstice 2 days ago

The issue here is that LLMs are not human, so having a human mental model of how to communicate doesn't really work. If I ask my engineer to do X, I know all kinds of things about them: their coding style, their strengths and weaknesses, that they have some familiarity with the code they're working on, and that they won't drag the entirety of Stack Overflow's answers into the context we're working in. LLMs are nothing like this, even when given large amounts of context; they fail in extremely unpredictable ways from one prompt to the next. If you disagree, I'd be interested in what stack or prompting you are using that avoids this.