| ▲ | krapp 2 days ago | |
Because LLMs are stochastic text-generation machines. They are designed to generate plausible natural human language via next-token prediction, and the result may or may not happen to be true depending on the correctness and quality of the training data. But that correctness (or lack thereof) comes from the human effort that produced the training data, not from some innate ability of the LLM to comprehend real-world context and tell truth from falsehood, because LLMs have nothing of the sort. Not because they're people. https://medium.com/@nirdiamant21/llm-hallucinations-explaine... | ||
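To make the "plausible, not true" point concrete, here is a toy sketch (a bigram model, nothing remotely like a real transformer; the tiny corpus and all names are made up for illustration). It only ever emits continuations in proportion to how often they appeared in its training data, with no notion of whether the output is true:

```python
import random
from collections import Counter, defaultdict

# Made-up "training data" for the toy example.
corpus = "the sky is blue the sea is blue the sky is wide".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, max_len, seed=0):
    """Sample tokens one at a time, each in proportion to how often it
    followed the previous token in the corpus. Stops at a dead end."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_len):
        options = counts[out[-1]]
        if not options:  # token never had a successor in the corpus
            break
        tokens, weights = zip(*options.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the", 8))
```

Every sentence it produces is built from bigrams that really occurred in the corpus, so it always *sounds* like the training data; whether "the sea is wide" is true of the world is simply not a quantity the model tracks.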
| ▲ | sminchev 2 days ago | parent | next [-] | |
True, true, true. I don't argue with that. But we can make a good comparison and analogy to explain the behavior easily, with less technical terms. An AI can start hallucinating if it has to deal with a lot of data, and/or complex data ;) If I had to deal with that much, I'd start hallucinating myself :D That was the point :) | ||
| ▲ | twoelf 2 days ago | parent | prev [-] | |
Yes, exactly. That’s why it feels so strange in practice. It can mimic understanding well enough to get you moving, but when the project gets deep enough, you find out it was generating plausibility, not actually holding the system in its context. | ||