Rohansi 2 days ago
> pretty great at linking you to primary sources

Do you check all of the sources though? Those can be hallucinated and you may not notice unless you're always checking them. Or it could have misunderstood the source. It's easy to assume it's always accurate when it generally is. But it's not always.
matwood 2 days ago | parent | next
> It's easy to assume it's always accurate when it generally is. But it's not always.

So like a lot of the internet? I don't really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard, and society uses it every day.
sothatsit 2 days ago | parent | prev
I have noticed it hallucinating links when it can't find any relevant documentation at all, but otherwise it is pretty good. And yes, I do check them.

The type of search you are doing probably matters a lot here as well. I use it to find documentation for software I am already moderately familiar with, so noticing the hallucinations is not that difficult. Hallucinations are pretty rare for this type of "find documentation for XYZ thing in ABC software" query anyway, and it usually doesn't take very long to verify the information. I did get caught once by it claiming something was possible that wasn't, but out of probably thousands of queries I've run at this point, that's not so bad.

That said, I definitely don't trust LLMs in cases where the information is subjective. When you're just talking about fact search, though, hallucination rates are pretty low, at least for GPT-5 Thinking (although still non-zero). I have also run into a number of problems where the documentation itself is out of date, but there's not much an LLM could do about that.