mikkupikku 7 hours ago

I think that's true, but something even more subtle is going on. The quality of the LLM output depends on how it was prompted in a more profound way than I think most people realize. If you prompt the LLM using jargon and lingo that indicate you are already well experienced with the domain, the LLM will role-play an experienced developer. If you prompt it like you're a clueless PHB who's never coded, the LLM will output shitty code to match the style of your prompt. This extends to architecture: if your prompts are written with a mature understanding of the architecture that should be used, the LLM will follow suit, but if not, the LLM will just slap together something that looks like it might work but isn't well thought out.
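For example, compare how the same ask lands in two different registers. This is a rough sketch, not a benchmark: the prompts are made up, and the OpenAI Python client and model name are just placeholders for whatever you actually use.

    # Hypothetical side-by-side: the same task, asked for in two registers.
    # The claim above is that the first framing tends to elicit more
    # carefully structured output than the second.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    expert_prompt = (
        "Add an idempotent consumer for the order-events topic: dedupe on the "
        "event key, upsert into Postgres inside one transaction, and surface "
        "retry/DLQ settings through config."
    )
    naive_prompt = (
        "Make something that reads the orders and saves them so nothing gets lost."
    )

    for label, prompt in [("expert framing", expert_prompt), ("naive framing", naive_prompt)]:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # example model, swap in your own
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---\n{resp.choices[0].message.content}\n")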

simonask 4 hours ago

This is magical thinking.

LLMs are physically incapable of generating something “well thought out”, because they are physically incapable of thinking.

mikkupikku 30 minutes ago

I don't care if the machine has a soul, I only care what the machine can produce. With good prompting, the machine produces more "thoughtful" results. As an engineer, that's all I care about.

Tossrock 2 hours ago

Tell Donald Knuth that: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...

Marha01 an hour ago

It is magical thinking to claim that LLMs are definitely physically incapable of thinking. You don't know that. No one knows that, since such large neural networks are opaque black boxes that resist interpretation, and we don't really know how they function internally.

You are just repeating that because you read it somewhere else. Like a stochastic parrot. Quite ironic. ;)