mikkupikku 7 hours ago
I think that's true, but something even more subtle is going on. The quality of the LLM's output depends on how it was prompted, more profoundly than most people realize. If you prompt the LLM using jargon and lingo that indicate you are already experienced in the domain, the LLM will role-play an experienced developer. If you prompt it like a clueless PHB who has never coded, the LLM will output shitty code to match the style of your prompt. This extends to architecture: if your prompts are written with a mature understanding of the architecture that should be used, the LLM will follow suit; if not, the LLM will just slap together something that looks like it might work but isn't well thought out.
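For illustration, a minimal sketch of that contrast, assuming the OpenAI Python client; the model name and both prompts are placeholders, not from the comment. The two requests ask for the same artifact, one in practitioner vocabulary with explicit constraints, the other without:

    # Illustrative sketch only: two prompts for the same task, differing in
    # domain fluency. Assumes the OpenAI Python client (pip install openai)
    # with an API key in the environment; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    # Prompt in practitioner vocabulary: constraints and jargon signal experience.
    expert_prompt = (
        "Implement a bounded, thread-safe work queue in Python: "
        "producer/consumer semantics, backpressure via maxsize, and "
        "sentinel-based worker shutdown."
    )

    # Prompt with no vocabulary or constraints, as a non-coder might phrase it.
    novice_prompt = "make me a python thing that does tasks one after another"

    for prompt in (expert_prompt, novice_prompt):
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content[:300])

The claim in the comment is that the first prompt reliably yields more production-shaped code than the second, even though both describe the same underlying task.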
simonask 4 hours ago
This is magical thinking. LLMs are physically incapable of generating something “well thought out”, because they are physically incapable of thinking.