ants_everywhere | 4 days ago
Yeah, this is what people are doing with LLMs every day. I don't quite get what is supposed to be different in the blog post. HN is a bit weird because it's got 99 articles about how evil LLMs are, and then one article that's like "oh hey, I asked an LLM questions and got some answers" and people are like "wow, amazing". Not that I mind. I assume Simon just wanted to share some cool nerdy stuff, and there's nothing wrong with the blog post. It's just surprising that it's been posted not once but twice on HN and is on the front page when there's so much anti-AI sentiment otherwise.
simonw | 4 days ago | parent
What's different is that LLMs with search tools used to be terrible: they would run a single search, get back 10 results, and summarize those. Often the results were bad, so the answer was bad. GPT-5 Thinking (and o3 before it, though very few people tried o3) does a whole lot better than that. It runs multiple searches, then evaluates the results and runs follow-up searches to try to get to a credible answer. This is new and worth writing about. LLM search doesn't suck any more.
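The search-evaluate-refine loop described above can be sketched roughly like this. To be clear, this is a toy illustration, not OpenAI's actual implementation: `run_search`, `looks_credible`, and `refine` are made-up stand-ins for the real search tool call and the model's own judgment.

```python
# Hedged sketch of the multi-round "agentic" search loop: search, evaluate,
# refine the query, repeat until the results look credible. All three helper
# functions below are hypothetical stubs, not a real API.

def run_search(query):
    # Stub search tool: a vague first query returns weak results, while a
    # refined follow-up query returns something more credible.
    corpus = {
        "llm search": ["old blog post", "random forum thread"],
        "llm search 2025 benchmark": ["credible report with sources"],
    }
    return corpus.get(query, [])

def looks_credible(results):
    # Stub for the "evaluate the results" step; in the real system the
    # model itself judges whether the sources are good enough.
    return any("credible" in r for r in results)

def refine(query, results):
    # Stub for the follow-up-query step: narrow or rephrase the search.
    return query + " 2025 benchmark"

def agentic_search(query, max_rounds=3):
    """Run searches, evaluate them, and issue follow-ups until satisfied."""
    results = []
    for _ in range(max_rounds):
        results = run_search(query)
        if looks_credible(results):
            return results
        query = refine(query, results)
    return results  # best effort after max_rounds
```

The point of the loop is the early `single search → summarize` pattern is replaced by iteration: the model keeps searching until its own evaluation says the results are good enough, or it hits a round limit.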