sminchev 2 days ago:
Why do they hallucinate? :) | ||||||||||||||
forinti 17 hours ago:
They don't. They work as intended, and "hallucination" is really a marketing term that makes them seem like more than what they actually are: text-prediction software.
krapp 2 days ago:
Because LLMs are stochastic text-generation machines. They are designed to generate plausible natural human language by next-token prediction, and the result may or may not happen to be true depending on the correctness and quality of the training data. But that correctness (or lack thereof) comes from the human effort that produced the training data, not from some innate ability of the LLM to comprehend real-world context and deduce truth from falsehood, because LLMs don't have anything of the sort. Not because they're people. https://medium.com/@nirdiamant21/llm-hallucinations-explaine...
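To make the "stochastic next-token prediction" point concrete, here is a minimal sketch in plain Python, using a toy vocabulary and made-up scores rather than any real model's API: the model turns raw scores into a probability distribution and samples one token, so a plausible-but-false continuation can be drawn whenever it carries nonzero probability.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    # Softmax: convert raw scores into a probability distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample an index by probability: the model picks what is
    # *likely*, with no notion of what is *true*.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-token candidates for "The capital of France is"
vocab = ["Paris", "Lyon", "London", "purple"]
logits = [4.0, 1.5, 1.0, -2.0]  # made-up scores, not real model output

for _ in range(5):
    print(vocab[sample_next_token(logits)])
```

Most draws come out "Paris", but "Lyon" or "London" also appear with nonzero probability, which is the mechanism people label a hallucination: a fluent continuation sampled from the same process, just not a factually correct one.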