sminchev 2 days ago
Well, this is how it is with real humans as well. The moment a human gets tired, or the information they need to process is too much, they produce errors. It's the same here: the moment things get to be too much, it starts hallucinating and missing important things. It also depends on what model you are using. I read that Gemini 3 Pro, which has a limit of 1 million tokens, can see its productivity drop to 25% as it gets close to that limit. Not BY 25%, but TO 25%. It becomes extremely dumb. Other models just ask too many questions... There are some tips and tricks you can follow, and they're similar to how people work: keep the tasks small, save what the model learned during the session somewhere, and re-use that knowledge in the next session by explicitly telling it to read that information before it starts.
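The "save what the model learned, re-use it next session" tip can be sketched roughly like this (the file name, note format, and prompt wording are my own assumptions, not anything the comment specifies):

```python
# Hypothetical sketch: persist session notes to a file, then prepend them
# to the next session's prompt so the model starts with prior context.
from pathlib import Path

NOTES_FILE = Path("session_notes.md")  # assumed location for saved notes

def save_notes(notes: str) -> None:
    """Append what the model learned this session to a persistent file."""
    with NOTES_FILE.open("a", encoding="utf-8") as f:
        f.write(notes.rstrip() + "\n")

def build_prompt(task: str) -> str:
    """Build the next session's prompt, telling the model to read old notes first."""
    notes = NOTES_FILE.read_text(encoding="utf-8") if NOTES_FILE.exists() else ""
    preamble = f"Read these notes from earlier sessions before starting:\n{notes}\n" if notes else ""
    return preamble + f"Task: {task}"

# Session 1 ends: record a fact the model worked out.
save_notes("The build uses the Makefile target `deploy`, not `release`.")
# Session 2 starts: the saved note is injected ahead of the new, small task.
prompt = build_prompt("Fix the deploy script.")
```

Keeping each task small (the `Task:` line) while carrying the notes forward is what keeps any single session well under the context limit.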
krapp 2 days ago | parent
> Well, this is how it is with real humans as well. The moment the human gets tired, or the information they need to process is too much, they produce errors.

LLMs don't hallucinate because they get overwhelmed and tired, JFC.