hamdingers 4 days ago

I feel the opposite. Before I can use information from a model's "internal" knowledge, I have to do independent research to verify that it's not a hallucination.

Having an LLM generate search strings and then summarize the results does that research up front and automatically; I need only click the sources to verify. Kagi Assistant does this really well.
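
In code, the pattern being described (the model writes the queries, a search engine fetches results, the model summarizes with citations) might look roughly like the Python sketch below. This is a guess at the general shape, not Kagi's actual pipeline; llm() and web_search() are hypothetical stand-ins for whatever model and search API you plug in.

    def research(question, llm, web_search, n_queries=5):
        # 1. Ask the model for search strings instead of a direct answer.
        prompt = (f"Write {n_queries} short web search queries, one per line, "
                  f"that would help answer: {question}")
        queries = [q for q in llm(prompt).splitlines() if q.strip()]

        # 2. Run each query and pool the top hits.
        results = []
        for q in queries:
            results.extend(web_search(q, limit=5))  # e.g. [{"url": ..., "snippet": ...}]

        # 3. Summarize only from the fetched snippets, citing URLs, so each
        #    claim can be verified by clicking through to the source.
        context = "\n".join(f"[{r['url']}] {r['snippet']}" for r in results)
        return llm(f"Using only the sources below, answer: {question}\n"
                   f"Cite the source URL after each claim.\n\n{context}")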

beefnugs 4 days ago | parent [-]

So does anyone have good examples of it effectively avoiding blogspam and SEO bait? Or of it being fooled? How often, either way?

coffeefirst 4 days ago | parent | next [-]

Bulk search is the only thing where I’ve been consistently impressed with LLMs.

But, like the parent, I’m using the Kagi assistant.

So the answer here might be that "search for 5 things and pull the relevant results" works incredibly well, but first you have to build an extremely good search engine that lets the user filter out spam sites.

That said, this isn't magic; it just automates an hour of googling. If the content doesn't exist, you won't find it.

15123123aa 4 days ago | parent | prev | next [-]

One thing I find it doesn't do very well is avoid marketing articles pushed by a brand itself. E.g. if I search "is X better than Y", it very likely lands on articles by the makers of brand X or Y rather than a third-party reviewer. When I search Google manually, I can spot marketing articles just from the URL.
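
The manual URL check described here is simple enough to automate in principle. A toy Python sketch, assuming you already have the result URLs and a list of the brands' own domains (both assumptions; this is not how any particular engine actually does it):

    from urllib.parse import urlparse

    def drop_first_party(urls, brand_domains):
        # Treat "acme.com" and any subdomain like "blog.acme.com"
        # as brand Acme's own marketing, and filter it out.
        kept = []
        for url in urls:
            host = urlparse(url).netloc.lower()
            if any(host == d or host.endswith("." + d) for d in brand_domains):
                continue
            kept.append(url)
        return kept

    print(drop_first_party(
        ["https://blog.acme.com/acme-vs-globex",
         "https://reviews.example.org/acme-vs-globex"],
        brand_domains=["acme.com", "globex.com"]))
    # -> ['https://reviews.example.org/acme-vs-globex']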

simonw 4 days ago | parent [-]

Have you tried that with GPT-5 Thinking, or is this based on your experience with older versions of ChatGPT + search?

simonw 4 days ago | parent | prev [-]

Here's a good article about Google's AI Mode usually managing to spot and avoid social media misinformation, but occasionally falling for it: https://open.substack.com/pub/mikecaulfield/p/is-the-llm-res...