dns_snek 4 days ago
Such is the nature of probabilistic systems. Generally speaking, LLMs read the top N search results on the topic in question and uncritically summarize them in their answer. Emphasis on uncritically, therefore the quality of LLM answers is strongly correlated with the quality of top search results. Relevant blog post: https://housefresh.com/beware-of-the-google-ai-salesman/ | ||||||||
simonw 4 days ago | parent
This is why I am so excited about the way GPT-5 uses its search tool. GPT-4o and most other AI-assisted search systems in the past worked how you describe: they took the top 10 search results and answered uncritically based on those. If the results were junk, the answer was too.

GPT-5 Thinking doesn't do that. Take a look at the thinking trace examples I linked to - in many of them it runs a few searches, evaluates the results, finds that they're not credible enough to generate an answer, and so continues browsing and searching. That's why many of the answers take 1-2 minutes to return! I frequently see it dismiss information from social media and prefer to go to a source with a good reputation for fact-checking (like a credible newspaper) instead.
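The search-evaluate-continue loop described above can be sketched roughly like this. This is a minimal illustration, not how GPT-5 is actually implemented: `search`, `is_credible`, and the domain allowlist are all made-up stand-ins for whatever tools and credibility judgments the model actually uses.

```python
# Hypothetical sketch of an iterative "agentic" search loop:
# keep searching until credible sources turn up, rather than
# uncritically summarizing whatever the first query returns.

def search(query, round_no):
    # Stand-in for a real web search tool; returns canned results
    # purely for illustration.
    fake_index = {
        0: [{"url": "https://social.example/post/1", "domain": "social.example"}],
        1: [{"url": "https://newspaper.example/story", "domain": "newspaper.example"}],
    }
    return fake_index.get(round_no, [])

# Assumed allowlist of reputable domains, for the sketch only.
CREDIBLE_DOMAINS = {"newspaper.example"}

def is_credible(result):
    return result["domain"] in CREDIBLE_DOMAINS

def agentic_search(query, max_rounds=5):
    """Run repeated searches, discarding low-credibility results,
    until at least one trustworthy source is found or we give up."""
    for round_no in range(max_rounds):
        results = search(query, round_no)
        credible = [r for r in results if is_credible(r)]
        if credible:
            return credible  # answer only from vetted sources
    return []  # refuse to answer from junk rather than summarize it

sources = agentic_search("example claim")
```

In this toy run the first round returns only a social-media post, which gets discarded; the loop continues and settles on the newspaper source instead - the extra rounds are where the 1-2 minute latency would come from.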