bediger4000 2 days ago

I do not. I prefer to read the primary sources; LLM summaries are, after all, probabilistic and based on syntax. I'm often looking for semantics, and an LLM really is not going to give me that.

crazygringo 2 days ago | parent | next [-]

Funny, I use LLMs for so much search now because they understand my query semantically, not just its syntax. Keyword matching fails completely for certain types of searching.

balder1991 2 days ago | parent [-]

Also, weirdly, LLMs like ChatGPT can give good sources that usually wouldn’t be at the top of a Google search.

matwood 2 days ago | parent [-]

There’s a particular Italian government website, and the only way I can find it is through ChatGPT. It’s a subsite under another site, and I assume it’s the context of my question that surfaces it when Google won’t.

sothatsit 2 days ago | parent | prev | next [-]

Tools like GPT-5 Thinking are actually pretty great at linking you to primary sources. GPT-5 Thinking has become my go-to search tool because, even though it is slower, the results are better, especially for things like finding documentation.

I basically only use Google for "take me to this web page I already know exists" queries now, and maps.

Rohansi 2 days ago | parent [-]

> pretty great at linking you to primary sources

Do you check all of the sources though? Those can be hallucinated and you may not notice unless you're always checking them. Or it could have misunderstood the source.

It's easy to assume it's always accurate when it generally is. But it's not always.

matwood 2 days ago | parent | next [-]

> It's easy to assume it's always accurate when it generally is. But it's not always.

So like a lot of the internet? I don’t really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard and society uses it every day.

johannes1234321 2 days ago | parent | next [-]

It's a question of judgement in each individual case.

Documentation for a specific product I expect to be mostly right, though it may miss the required detail.

A blog by an author I haven't heard of, I trust less.

Some third-party sites I give some trust, others less.

AI is a mixed bag, while always implying authority on the subject (and becoming submissive when corrected).

Rohansi 2 days ago | parent | prev [-]

It's a marketing issue. LLMs are being marketed similarly to Tesla's FSD: claims of PhD-level intelligence, AGI, artificial superintelligence, etc. set the expectation that LLMs should be smarter than (most of) us. Why would we have any reason to doubt the claims of something that is smarter than us? Especially when it is so confident in the way it says it.

matwood 2 days ago | parent [-]

That's fair. The LLM hype has been next level, but it's only rivaled by the 'it never works for anything and will make you stupid' crowd.

Both are wrong in my experience.

sothatsit 2 days ago | parent | prev [-]

I have noticed it hallucinating links when it can't find any relevant documentation at all, but otherwise it is pretty good. And yes, I do check them.

The type of search you are doing probably matters a lot here as well. I use it to find documentation for software I am already moderately familiar with, so noticing the hallucinations is not that difficult. Hallucinations are pretty rare for this type of "find documentation for XYZ thing in ABC software" query anyway. Plus, it usually doesn't take very long to verify the information.

I did get caught once by it mentioning something was possible that wasn't, but out of probably thousands of queries I've done at this point, that's not so bad. Even so, I definitely don't trust LLMs in any case where the information is subjective. But when you're just talking about fact search, hallucination rates are pretty low, at least for GPT-5 Thinking (although still non-zero). That said, I have also run into a number of problems where the documentation is out of date, but there's not much an LLM could do about that.

the_duke 2 days ago | parent | prev | next [-]

Gemini 2.5 always provides a lot of references, without being prompted to do so.

ChatGPT 5 also does, especially with deep research.

pas 2 days ago | parent | prev | next [-]

It's not syntax, it's data driven (yes, of course, syntax contributes to that).

https://freedium.cfd/https://vinithavn.medium.com/from-multi...

At its core, attention operates through three fundamental components — queries, keys, and values — that work together with attention scores to create a flexible, context-aware vector representation.

    Query (Q): The query is a vector that represents the current token for which the model wants to compute attention.

    Key (K): Keys are vectors that represent the elements in the context against which the query is compared, to determine the relevance.

    Attention Scores: These are computed using Query and Key vectors to determine the amount of attention to be paid to each context token.

    Value (V): Values are the vectors that represent the actual contextual information. After calculating the attention scores using Query and Key vectors, these scores are applied against Value vectors to get the final context vector.
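
For concreteness, those four components reduce to a few lines of arithmetic. Here is a minimal sketch of single-head scaled dot-product attention in NumPy (shapes and names are illustrative, not taken from the article):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q: (n_queries, d_k), K: (n_context, d_k), V: (n_context, d_v)
        d_k = Q.shape[-1]
        # Attention scores: compare each query vector against every key vector.
        scores = Q @ K.T / np.sqrt(d_k)
        # Softmax turns the scores into weights over the context tokens.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Weighted sum of the value vectors gives the final context vector per query.
        return weights @ V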

throwaway314155 2 days ago | parent | prev | next [-]

ChatGPT provides sources for a lot of queries, particularly if you ask. I'm not defending it, but you can get what you claim to want in an easier interface than Google.

hackinthebochs 2 days ago | parent | prev | next [-]

That Searlesque syntax/semantics dichotomy isn't as clear cut as it once was. Yes, programs operate syntactically. But when semantics is assigned to particular syntactic structures, as it is with word embeddings, the computer is then able to operate on semantics through its facility with syntax. These old standard thought patterns need to be reconsidered in the age of LLMs.
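
A toy illustration of that point (the vectors below are made up for the example, not taken from any real model): the arithmetic is purely syntactic, yet because the vectors stand in for meanings, comparing them compares meanings.

    import numpy as np

    # Made-up "embeddings"; real models learn these from data.
    emb = {
        "car":    np.array([0.90, 0.10, 0.05]),
        "auto":   np.array([0.85, 0.15, 0.10]),
        "banana": np.array([0.05, 0.90, 0.20]),
    }

    def cosine(a, b):
        # Pure vector arithmetic, yet the result tracks semantic similarity.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine(emb["car"], emb["auto"]))    # high: near-synonyms
    print(cosine(emb["car"], emb["banana"]))  # low: unrelated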

whycome 2 days ago | parent | prev | next [-]

Since when does Google give you primary sources for simple queries? You have to wade through all the garbage. At least an LLM will give you the general path and provide sources.

blinding-streak a day ago | parent [-]

Google's AI responses cite primary sources.

scarface_74 2 days ago | parent | prev [-]

ChatGPT gives you web citations from real time web searches.