sothatsit 2 days ago

Tools like GPT-5 Thinking are actually pretty great at linking you to primary sources. GPT-5 Thinking has become my go-to search tool because, even though it is slower, the results are better, especially for things like finding documentation.

I basically only use Google for "take me to this web page I already know exists" queries now, and maps.

Rohansi 2 days ago

> pretty great at linking you to primary sources

Do you check all of the sources though? Those can be hallucinated and you may not notice unless you're always checking them. Or it could have misunderstood the source.

It's easy to assume it's always accurate when it generally is. But it's not always.

matwood 2 days ago

> It's easy to assume it's always accurate when it generally is. But it's not always.

So like a lot of the internet? I don’t really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard and society uses it every day.

johannes1234321 2 days ago

It's a question of judgement in the individual case.

Documentation for a specific product I expect to be mostly right, but it may miss the required detail.

Some blog by an author I haven't heard of, I trust less.

Some third-party sites I give some trust, others less.

AI is a mixed bag, while always implying authority on the subject (and then becoming submissive when corrected).

Rohansi 2 days ago

It's a marketing issue. LLMs are being marketed similarly to Tesla's FSD: claims of PhD-level intelligence, AGI, artificial superintelligence, etc. set the expectation that LLMs should be smarter than (most of) us. Why would we have any reason to doubt the claims of something that is smarter than us? Especially when it states them so confidently.

matwood 2 days ago

That's fair. The LLM hype has been next level, but it's only rivaled by the 'it never works for anything and will make you stupid' crowd.

Both are wrong in my experience.

sothatsit 2 days ago

I have noticed it hallucinating links when it can't find any relevant documentation at all, but otherwise it is pretty good. And yes, I do check them.

The type of search you are doing probably matters a lot here as well. I use it to find documentation for software I am already moderately familiar with, so noticing the hallucinations is not that difficult. Hallucinations are also pretty rare for this type of "find documentation for XYZ thing in ABC software" query, and it usually doesn't take very long to verify the information.

I did get caught once by it mentioning something was possible that wasn't, but out of probably thousands of queries I've done at this point, that's not so bad. That said, I definitely don't trust LLMs in cases where the information is subjective. When you're just talking about fact search, though, hallucination rates are pretty low, at least for GPT-5 Thinking (although still non-zero). I have also run into a number of problems where the documentation is out of date, but there's not much an LLM could do about that.