viraptor 11 hours ago
It's not something that suddenly changed. "I'll generate some code" is as nondeterministic as "I'll look for a library that does it", "I'll assign John to code this feature", or "I'll outsource this code to a consulting company". Even if you write it yourself, you're pretty nondeterministic in your results: you're not going to write exactly the same code to solve a problem twice, even if you explicitly try.
Night_Thastus 4 hours ago
No? If I use a library, I know it will do the same thing with the same inputs, every time. If I don't understand something about its behavior, I can look at the documentation. Some libraries are better about this, some are crap. But a good library will continue doing what I want years or decades later. An LLM can't decide from one sentence to the next what it's going to do.
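To make the contrast concrete, here is a minimal Python sketch (the function names are made up for illustration) of "same inputs, same outputs" for a library-style call versus sampled output standing in for an LLM decoding with temperature > 0:

    import random

    # A library-style call: same input, same output, every run.
    def slugify(title: str) -> str:
        return title.lower().replace(" ", "-")

    assert slugify("Hello World") == slugify("Hello World")  # always holds

    # A sampling step (stand-in for stochastic generation):
    # same prompt, potentially different output on every call.
    def fake_llm(prompt: str) -> str:
        return random.choice(["hello-world", "hello_world", "helloWorld"])

    print(fake_llm("slugify 'Hello World'"))  # may differ from run to run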
skydhash 11 hours ago
Unlike code generation, all the other examples share one thing, and it's the main advantage: the alignment between your objective and their actions. With a good enough incentive, they may as well be deterministic. When you order home delivery, you don't care about who does it or how; only the end result matters. And we've ensured that reliability is good enough that failures are accidents, not common occurrences. Code generation is not reliable enough to earn the same quasi-deterministic label.
leshow 5 hours ago
It's not the same; LLMs are qualitatively different due to the stochastic and non-reproducible nature of their output. From the LLM's point of view, non-functional or incorrect code is exactly the same as correct code, because it doesn't understand anything that it's generating. When a human does it, you can say they did a bad or good job, but there is a thought process and actual "intelligence" and reasoning behind the decisions. This insight was really the thing that made me understand the limitations of LLMs a lot better. Some people say that when it produces things that are incorrect or fabricated it is "hallucinating", but the truth is that everything it produces is a hallucination, and the fact that it's sometimes correct is incidental.