sminchev 2 days ago

Well, this is how it is with real humans as well. The moment the human gets tired, or the information they need to process is too much, they produce errors.

Here it is the same: the moment things get to be too much, it starts hallucinating and missing important things. It also depends on which model you are using. I read that Gemini 3 Pro, which has a 1-million-token context limit, can see its effectiveness drop to 25% as it gets close to that limit. Not BY 25%, but TO 25%. It becomes extremely dumb.

Other models are just asking too many questions...

There are some tips and tricks you can follow, and they are similar to how people work: keep the tasks small, save what the model learned during the session somewhere, and re-use that knowledge in the next session by explicitly instructing the model to read that information before it starts.
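To make the tip concrete, here is a minimal sketch of that workflow in Python. The file name, note contents, and helper names are all hypothetical, just to illustrate "persist session learnings, then prepend them to the next prompt":

```python
import json
from pathlib import Path

NOTES_FILE = Path("session_notes.json")  # hypothetical storage location

def save_notes(notes: list[str]) -> None:
    """Persist what the model learned during this session."""
    NOTES_FILE.write_text(json.dumps(notes))

def build_prompt(task: str) -> str:
    """Start the next session by telling the model to read prior notes."""
    notes = json.loads(NOTES_FILE.read_text()) if NOTES_FILE.exists() else []
    if not notes:
        return f"Task: {task}"
    note_lines = "\n".join(f"- {n}" for n in notes)
    return (
        "Read these notes from earlier sessions before starting:\n"
        f"{note_lines}\n\nTask: {task}"
    )

# Example: facts the model had to rediscover every session otherwise.
save_notes(["The build uses pnpm, not npm", "API routes live in src/server"])
print(build_prompt("Add a health-check endpoint"))
```

Each task stays small, and the accumulated notes substitute for the context the model would otherwise lose between sessions.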

krapp 2 days ago | parent [-]

>Well, this is how it is with real humans as well. The moment the human gets tired, or the information they need to process is too much, they produce errors.

LLMs don't hallucinate because they get overwhelmed and tired JFC.

sminchev 2 days ago | parent [-]

Why do they hallucinate? :)

forinti 17 hours ago | parent | next [-]

They don't. They work as intended; "hallucination" is actually a marketing term to make them seem more than what they really are: text prediction software.

krapp 2 days ago | parent | prev [-]

Because LLMs are stochastic text-generation machines. They are designed to generate plausible natural human language based on next-token prediction, the result of which may or may not happen to be true, depending on the correctness and quality of their data set. But that correctness (or lack thereof) comes from the human effort that produced the training data, not some innate ability of the LLM to comprehend real-world context and deduce truth from falsehood, because LLMs don't have anything of the sort.

Not because they're people.

https://medium.com/@nirdiamant21/llm-hallucinations-explaine...
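The point about stochastic next-token prediction can be sketched in a few lines of Python. The "model" here is a made-up table of conditional token probabilities standing in for what a real LLM learns from its training data; the numbers are invented for illustration:

```python
import random

# Toy stand-in for a trained model: next-token probabilities
# conditioned on the context. The values are made up.
NEXT_TOKEN = {
    "the capital of France is": [("Paris", 0.90), ("Lyon", 0.07), ("Mars", 0.03)],
}

def sample_next(context: str, rng: random.Random) -> str:
    """Pick the next token by weighted chance; no fact-checking happens."""
    tokens, weights = zip(*NEXT_TOKEN[context])
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_next("the capital of France is", rng) for _ in range(1000)]
# "Paris" dominates only because the training data made it likely;
# nothing stops the sampler from occasionally emitting "Mars".
print(samples.count("Paris"), samples.count("Lyon"), samples.count("Mars"))
```

The sampler never checks whether a continuation is true, only how likely it is, which is why the outputs are plausible rather than verified.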

sminchev 2 days ago | parent | next [-]

True, true, true. I don't argue with that. But we can make a good comparison and analogy to explain the behavior easily, with fewer technical terms.

AI can start hallucinating if it deals with a lot of data, and/or complex data ;) If I had to deal with that much, I would start hallucinating myself :D

That was the point :)

twoelf 2 days ago | parent | prev [-]

Yes, exactly. That’s why it feels so strange in practice. It can mimic understanding well enough to get you moving, but when the project gets deep enough, you find out it was generating plausibility, not actually holding the system in its context.