| ▲ | leshow 5 hours ago |
It's not the same; LLMs are qualitatively different due to the stochastic and non-reproducible nature of their output. From the LLM's point of view, non-functional or incorrect code is exactly the same as correct code, because it doesn't understand anything it's generating. When a human does it, you can say they did a good or bad job, but there is a thought process and actual "intelligence" and reasoning behind the decisions. This insight is really what made me understand the limitations of LLMs much better. Some people say that when it produces something incorrect or fabricated it is "hallucinating", but the truth is that everything it produces is a hallucination, and the fact that it's sometimes correct is incidental.
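Concretely, the "stochastic" part just means the model samples each token from a probability distribution, so two runs over the same prompt can legitimately disagree. A minimal sketch of that idea, using a toy next-token distribution whose tokens and probabilities are invented purely for illustration:

    import random

    # Toy next-token distribution for a prompt like "the function returns".
    # The tokens and probabilities here are made up for illustration only.
    NEXT_TOKEN_PROBS = {
        "True": 0.40,
        "False": 0.35,
        "None": 0.15,
        "0": 0.10,
    }

    def sample_next_token(temperature: float = 1.0) -> str:
        """Sample one token; higher temperature flattens the distribution."""
        tokens = list(NEXT_TOKEN_PROBS)
        # p ** (1/T) is equivalent to softmax(log p / T), i.e. temperature scaling.
        weights = [p ** (1.0 / temperature) for p in NEXT_TOKEN_PROBS.values()]
        return random.choices(tokens, weights=weights, k=1)[0]

    # Two runs over the same "prompt" can produce different continuations,
    # and "True" vs "False" are both plausible samples to the model.
    print([sample_next_token() for _ in range(5)])
    print([sample_next_token() for _ in range(5)])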
| ▲ | viraptor 3 hours ago |
I'm not sure who generates random code without a goal or without checking whether it works afterwards. That smells like a straw man. Normally you set the rules, you know how to validate whether the result works, and you may even generate tests that lock that behavior in. If I got completely random results rather than what I expect, I wouldn't be using that system, but it's correct and helpful almost every time. What you describe is just not how people work with LLMs in practice.
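The workflow is roughly: ask for code, then only accept it if it passes tests you wrote against your own spec. A minimal sketch of that loop; `ask_llm_for_function` is a hypothetical stand-in for whatever model API or tool is actually used:

    from types import FunctionType

    def ask_llm_for_function(prompt: str) -> str:
        # Placeholder: pretend the model returned this source text.
        return "def slugify(s):\n    return s.strip().lower().replace(' ', '-')\n"

    def load_generated_function(source: str, name: str) -> FunctionType:
        namespace: dict = {}
        exec(source, namespace)  # never exec untrusted code outside a sandbox
        return namespace[name]

    def test_slugify_matches_spec() -> None:
        slugify = load_generated_function(
            ask_llm_for_function("Write slugify(s): lowercase, dash the spaces"),
            "slugify",
        )
        # Human-written assertions are the gate, not the model's own confidence.
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Already-Slugged ") == "already-slugged"

    if __name__ == "__main__":
        test_slugify_matches_spec()
        print("generated code passed the human-written tests")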
| ▲ | sod22 4 hours ago |
Correct. The thing has no concept of true or false, 0 or 1. Therefore it cannot necessarily discern between two statements that appear practically identical to humans. This doesn't make the technology useless, but it's clearly not the AGI some people make it out to be.