jpc0 6 days ago:
> Or the simpler version of the above: LLM writes code. You try and compile it. You get an error message and you paste that back into the LLM and let it have another go. That's been the main way I've worked with LLMs for almost three years now.

I'm commenting on this here, but it's a follow-on to my other comment: this is exactly the workflow I was following. I gave it the compiler error and it blamed an environment issue; I confirmed the environment is exactly as it claimed it should be; it then linked to documentation that doesn't say what it claimed was stated. In a coding agent this would have been an endless feedback loop eating millions of tokens.

That is why I don't use coding agents: working interactively, I can catch hallucinations and stop the feedback loop before it ever starts, without having to watch an agent try to convince itself that it is correct and the compiler must be wrong.
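To make the loop under discussion concrete, here is a minimal sketch of the compile-and-paste workflow, with the hard iteration cap that keeps it from becoming the runaway loop described above. Everything here is hypothetical scaffolding: `ask_llm` is a stand-in for whatever model call you actually use, and `MAX_ITERATIONS` is an arbitrary budget, not part of any real agent framework.

```python
import subprocess
import tempfile
from pathlib import Path

MAX_ITERATIONS = 5  # hard cap so a hallucinating model cannot loop forever


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for your model call (API, CLI, whatever)."""
    raise NotImplementedError("wire this up to your LLM of choice")


def compile_c(source: str) -> str | None:
    """Compile with cc; return compiler stderr on failure, None on success."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "main.c"
        src.write_text(source)
        result = subprocess.run(
            ["cc", str(src), "-o", str(Path(tmp) / "main")],
            capture_output=True,
            text=True,
        )
    return None if result.returncode == 0 else result.stderr


def fix_loop(task: str) -> str:
    source = ask_llm(task)
    for _ in range(MAX_ITERATIONS):
        error = compile_c(source)
        if error is None:
            return source  # it compiles; a human still reviews it
        # Feed the compiler error back verbatim and let the model retry.
        source = ask_llm(
            f"{task}\n\nYour code failed to compile with:\n{error}\nFix it."
        )
    raise RuntimeError("gave up: model kept producing code that doesn't compile")
```

The cap is the whole point: when the model starts blaming the environment instead of the code, the loop terminates after a fixed budget rather than burning tokens arguing with the compiler.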
elliotto 6 days ago (parent):
You're responding to simonw, arguably the single leading voice on practical applications of LLMs, with an anecdote about one time the bot mishandled a compiler error, and concluding that LLM coding is therefore useless.