| ▲ | jkubicek 5 days ago |
| > I could essentially replace it with Google for basic to slightly complex fact checking. I know you probably meant "augment fact checking" here, but using LLMs for answering factual questions is the single worst use-case for LLMs. |
|
| ▲ | rich_sasha 4 days ago | parent | next [-] |
| I disagree. Some things are hard to Google, because you can't frame the question right. For example, you know the context but can only give a poor explanation of what you are after. Googling will take you nowhere; LLMs will give you the right answer 95% of the time. Once you get an answer, it is easy enough to verify it. |
| |
| ▲ | mrandish 4 days ago | parent | next [-] | | I agree. Since I'm recently retired and no longer code much, I don't have much need for LLMs, but refining a complex, niche web search is the one thing where they're uniquely useful to me. It's usually when targeting the specific topic involves several keywords which have multiple plain-English meanings that return a flood of erroneous results. Because LLMs abstract keywords to tokens based on underlying meaning, you can specify the domain in the prompt and it'll usually select the relevant meanings of multi-meaning terms - which isn't possible in general-purpose web search engines. So it helps narrow down closer to the specific needle I want in the haystack. As other posters said, relying on LLMs for factual answers to challenging questions is error prone. I just want the LLM to give me the links and I'll then assess veracity like a normal web search. I think a web search interface that allowed disambiguating multi-meaning keywords might be even better. | | |
| ▲ | yojo 4 days ago | parent [-] | | I'll give you another use: LLMs are really good at unearthing the "unknown unknowns." If I'm learning a new topic (coding or not), summarizing my own knowledge to an LLM and then asking "what important things am I missing" almost always turns up something I hadn't considered. You'll still want to fact check it, and there's no guarantee it's comprehensive, but I can't think of another tool that provides anything close without hours of research. | | |
| ▲ | elictronic 4 days ago | parent [-] | | Coworkers and experts in a field. I can trust them much more, but the better they are, the less access you have. |
|
| |
| ▲ | LoganDark 4 days ago | parent | prev | next [-] | | > Some things are hard to Google, because you can't frame the question right. I will say LLMs are great for taking an ambiguous query and figuring out how to word it so you can fact check with secondary sources. Also tip-of-my-tongue style queries. | |
| ▲ | bloudermilk 4 days ago | parent | prev | next [-] | | If you’re looking for a possibly correct answer to an obscure question, that’s more like fact finding. Verifying it afterward is the “fact checking” step of that process. | |
| ▲ | crote 4 days ago | parent | prev | next [-] | | A good part of that can probably be attributed to how terrible Google has gotten over the years, though. 15 years ago it was fairly common for me to know something exists, be able to type the right combination of very specific keywords into Google, and get the exact result I was looking for. In 2025 Google is trying very hard to serve the most profitable results instead, so it'll latch onto a random keyword, completely disregard the rest, and serve me whatever ad-infested garbage it thinks is close enough to look relevant for the query. It isn't exactly hard to beat that - just bring back the 2010 Google algorithm. It's only a matter of time before LLMs go down the same deliberate enshittification path. | |
| ▲ | KronisLV 4 days ago | parent | prev | next [-] | | > For example, you know the context but can only give a poor explanation of what you are after. Googling will take you nowhere; LLMs will give you the right answer 95% of the time. This works nicely when the LLM has a large knowledge base to draw upon (formal terms for what you're trying to find, which you might not know) or the ability to generate good search queries and summarize results quickly - with an actual search engine in the loop. Most large LLM providers have this, and even something like OpenWebUI can have search engines integrated (though I will admit that smaller models kinda struggle; I couldn't get much useful stuff out of DuckDuckGo-backed searches, nor Brave AI searches, but it might have been an obscure topic). | |
| ▲ | littlestymaar 4 days ago | parent | prev [-] | | It's not the LLM alone though, it's “LLM with web search”, and as such 4o isn't really a leap at all there (IIRC Perplexity was using an early Llama version and was already very good, long before OpenAI added web search to ChatGPT). |
|
|
| ▲ | mkozlows 4 days ago | parent | prev | next [-] |
| Modern ChatGPT will (typically on its own; always if you instruct it to) provide inline links to back up its answers. You can click on those if it seems dubious or if it's important, or trust it if it seems reasonably true and/or doesn't matter much. The fact that it provides those relevant links is what allows it to replace Google for a lot of purposes. |
| |
| ▲ | pram 4 days ago | parent | next [-] | | It does citations (Grok and Claude etc. do too), but I've found that when I read the source on some stuff (GitHub discussions and so on) it sometimes has nothing to do with what the LLM said. I've wasted a lot of time trying to find the actual spot in a threaded conversation where the example was supposedly stated. | | |
| ▲ | sarchertech 4 days ago | parent [-] | | Same experience with Google search AI. The links frequently don’t support the assertions, they’ll just say something that might show up in a google search for the assertion. For example if I’m asking about whether a feature exists in some library, the AI says yes it does and links to a forum where someone is asking the same question I did, but no one answered (this has happened multiple times). | | |
| ▲ | Nemi 4 days ago | parent [-] | | It is funny, Perplexity seems to work much better in this use case for me. When I want some sort of "conclusive answer", I use Gemini Pro (just what I have available). It is good with coding, formulating thoughts, rewriting text, and so on. But when I want to actually search for content on the web for, say, product research or opinions on a topic, Perplexity is so much better than either Gemini or Google search AI. It lists reference links for each block of assertions that are EASILY clicked on (unlike Gemini or search AI, where the references are just harder to click on for some reason, not the least of which is that they OPEN IN THE SAME TAB, whereas Perplexity always opens them in a new tab). This is often a Reddit-specific search, as I want people's opinions on something. Perplexity's search UI is the one thing it does so much better than Google's offering, and it's the main thing going for it. I think there is some irony there. Full disclosure, I don't use Anthropic or OpenAI, so this may not be the case for those products. |
|
| |
| ▲ | platevoltage 4 days ago | parent | prev [-] | | In my experience, 80% of the links it provides are either 404 or go to a thread on a forum that is completely unrelated to the subject. I'm also someone who refuses to pay for it, so maybe the paid versions do better. Who knows. | |
| ▲ | cout 4 days ago | parent | next [-] | | The 404 links are truly bizarre. Nearly every link to github.com seems to be 404. That seems like something that should be trivial for a tool to verify. | | |
| ▲ | weatherlite 4 days ago | parent | next [-] | | > The 404 links are truly bizarre. Nearly every link to github.com seems to be 404. That seems like something that should be trivial for a tool to verify.
Same issue with Gemini. Intuitively I'd also assume it's trivial to fix, but perhaps there's more going on than we think. Perhaps validating every part of a response is a big overhead financially, and it might even throw off the model and make it less accurate in other ways. | |
| ▲ | platevoltage 4 days ago | parent | prev [-] | | Yeah. The fact that I can't get a working source out of ChatGPT makes the tool way less useful. It will straight up say "I verified all of these links" too. | |
| ▲ | mh- 4 days ago | parent [-] | | As you identified, not paying for it is a big part of the issue. Running these things is expensive, and they're just not serving the same experience to non-paying users. One could argue this is a bad idea on their part, letting people get a bad taste of an inferior product. And I wouldn't disagree, but I don't know what a sustainable alternative approach is. | | |
| ▲ | xigoi 4 days ago | parent | next [-] | | Surely the cost of sending a few HTTP requests and seeing if they 404 is negligible compared to AI inference. | |
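Something like the sketch below would cover it (a minimal sketch, assuming Python and the requests library; the function name is just illustrative):

```python
import requests

def dead_links(urls, timeout=5):
    """Return the subset of urls that don't resolve (4xx/5xx or no response)."""
    dead = []
    for url in urls:
        try:
            # HEAD is cheap; fall back to GET for servers that reject HEAD.
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                resp = requests.get(url, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                dead.append(url)
        except requests.RequestException:
            dead.append(url)
    return dead
```

One caveat: GitHub also serves 404 for private repos, so a link can look dead to a crawler while still being real.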
| ▲ | platevoltage 3 days ago | parent | prev [-] | | I would have no issue if the free version of ChatGPT told me straight up “You gotta pay for links and sources”. It doesn’t do that. | | |
| ▲ | mh- 14 hours ago | parent [-] | | 100% agree with that, as I alluded to in my last sentence. And that honestly seems like it might be a good product strategy in the short term. |
|
|
|
| |
| ▲ | mkozlows 4 days ago | parent | prev [-] | | That's a thing I've experienced, but not remotely at 80% levels. | | |
| ▲ | platevoltage 3 days ago | parent [-] | | It might have been the subject I was researching being insanely niche. I was using it to help me fix an arcade CRT monitor from the 80’s that wasn’t found in many cabinets that made it to the USA. It would spit out numbers that weren’t on the schematic, so I asked for context. |
|
|
|
|
| ▲ | password54321 5 days ago | parent | prev | next [-] |
| This was true before it could use search. Now the worst use-case is for life advice, because it will contradict itself a hundred times over while sounding confident each time on life-altering decisions. |
|
| ▲ | SirHumphrey 4 days ago | parent | prev | next [-] |
| Most of the value I got from Google was just becoming aware that something exists. LLMs do far better in this regard. Once I know something exists, it's usually easy enough to use traditional search to find official documentation or a more reputable source. |
|
| ▲ | oldsecondhand 4 days ago | parent | prev | next [-] |
| The most useful feature of LLMs is giving sources (preferably with a URL). It can cut through a lot of SEO crap, and you still get to fact-check just like with a Google search. |
| |
| ▲ | sefrost 4 days ago | parent | next [-] | | I like using LLMs and I have found they are incredibly useful for writing and reviewing code at work. However, when I want sources for things, I often find they link to pages that don't fully (or at all) back up the claims made. Sometimes other websites do, but the sources given to me by the LLM often don't. They might be about the same topic that I'm discussing, but they don't always seem to validate the claims. If they could crack that problem it would be a major, major win for me. | |
| ▲ | joegibbs 4 days ago | parent [-] | | It would be difficult to do with a raw model, but a two-step method in a chat interface would work - first the model suggests the URLs, then a tool call fetches them and returns the actual text of the pages, and the response can be based on that. | |
| ▲ | mh- 4 days ago | parent [-] | | I prototyped this a couple months ago using OpenAI APIs with structured output. I had it consume a "deep thought" style output (where it provides inline citations with claims), and then convert that to a series of assertions and a pointer to a link that supposedly supports the assertion. I also split out a global "context" (the original meaning) paragraph to provide anything that would help the next agents understand what they're verifying. Then I fanned this out to separate (LLM) contexts and each agent verified only one assertion::source pair, with only those things + the global context and some instructions I tuned via testing. It returned a yes/no/it's complicated for each one. Then I collated all these back in and enriched the original report with challenges from the non-yes agent responses. That's as far as I took it. It only took a couple hours to build and it seemed to work pretty well. |
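For anyone curious what that looks like in practice, here's a minimal sketch of the same fan-out shape (assuming the OpenAI Python SDK; the model name, prompts, and helper names are my own guesses rather than mh-'s actual code, and the plain json.loads stands in for proper structured output):

```python
import json
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI  # pip install openai

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative choice

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def extract_assertions(report: str) -> dict:
    # Step 1: turn a cited report into {"context": ..., "assertions": [{"assertion", "source_url"}]}.
    # A real version would use structured output instead of trusting json.loads on raw text.
    prompt = (
        "Read the report below. Return JSON with two keys: 'context' (one paragraph of "
        "background a fact-checker would need) and 'assertions' (a list of objects with "
        "'assertion' and 'source_url' for every cited claim).\n\n" + report
    )
    return json.loads(ask(prompt))

def verify_one(context: str, assertion: str, source_url: str) -> str:
    # Step 2: each verifier sees only one assertion::source pair plus the shared context.
    # (In practice it would also need the fetched page text or a web-search tool.)
    prompt = (
        f"Background: {context}\n\n"
        f"Claim: {assertion}\n"
        f"Cited source: {source_url}\n\n"
        "Does the cited source actually support the claim? Answer with exactly one of: "
        "yes, no, it's complicated - then one sentence of justification."
    )
    return ask(prompt)

def challenge_report(report: str) -> list[dict]:
    extracted = extract_assertions(report)
    pairs = extracted["assertions"]
    with ThreadPoolExecutor(max_workers=8) as pool:  # the fan-out
        verdicts = list(pool.map(
            lambda p: verify_one(extracted["context"], p["assertion"], p["source_url"]),
            pairs,
        ))
    # Step 3: collate, keeping only the non-"yes" verdicts to challenge the original report with.
    return [
        {**pair, "verdict": verdict}
        for pair, verdict in zip(pairs, verdicts)
        if not (verdict or "").lower().startswith("yes")
    ]
```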
|
| |
| ▲ | IgorPartola 4 days ago | parent | prev [-] | | From what I have seen, a lot of what it does is read articles also written by AI or forum posts with all the good and bad that comes with that. |
|
|
| ▲ | cm2012 4 days ago | parent | prev | next [-] |
| On average, they outperform asking humans, unless you are asking an expert. |
| |
| ▲ | lottin 4 days ago | parent [-] | | When I have a question, I don't usually "ask" that question and expect an answer. I figure out the answer. I certainly don't ask the question to a random human. | | |
| ▲ | gnerd00 4 days ago | parent [-] | | You ask yourself... and for most people, that means a closer-to-average reply, from yourself, when you try to figure it out. There is a working paper from McKinnon Consulting in Canada that states directly that their definition of "General AI" is when the machine can match or exceed fifty percent of humans who are likely to be employed for a certain kind of job. It implies that low-education humans are the test for doing many routine jobs, and if the machine can beat 50% (or more) of them with some consistency, that is it. | |
| ▲ | lottin 3 days ago | parent [-] | | By definition the average answer will be average; that's kind of a tautology. The point is that figuring things out is an essential intellectual skill. Figuring things out will make you smarter. Having a machine figure things out for you will make you dumber. By the way, doing a better job than the average human is NOT a sign of intelligence. Throughout history we have invented plenty of machines that are better at certain tasks than us. None of them are intelligent. |
|
|
|
|
| ▲ | yieldcrv 4 days ago | parent | prev | next [-] |
| It covers 99% of my use cases. And it is googling behind the scenes in ways I would never think to query, and far faster. When I need to cite a court case, well, the truth is I'll still use GPT or a similar LLM, but I'll scrutinize it more and at the bare minimum make sure the case exists and is about the topic presented, before trying to corroborate the legal strategy with a new context window, a different LLM, Google, Reddit, and a different lawyer. At least I'm no longer relying on my own understanding, and what one lawyer procedurally generates for me. |
|
| ▲ | Spivak 5 days ago | parent | prev | next [-] |
| It doesn't replace legitimate source finding, but LLM vs. the top Google results is no contest, which is more about Google or the current state of the web than about the LLMs at this point. |
|
| ▲ | lightbendover 4 days ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | marsven_422 4 days ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | simianwords 5 days ago | parent | prev [-] |
| Disagree. You have to try really hard and go very niche and deep for it to get some fact wrong. In fact, I'll ask you to provide examples: use GPT-5 with thinking enabled and search disabled, and get it to give you inaccurate facts for non-niche, non-deep topics. Non-niche meaning: something that is taught at undergraduate level and relatively popular. Non-deep meaning you aren't going so deep as to confuse even humans, like solving an extremely hard integral. Edit: probably a bad idea, because this sort of "challenge" works only statistically, not anecdotally. Still interesting to find out. |
| |
| ▲ | malfist 5 days ago | parent | next [-] | | Maybe you should fact check your AI outputs more if you think it only hallucinates in niche topics | | |
| ▲ | simianwords 5 days ago | parent [-] | | The accuracy is high enough that I don't have to fact check too often. | | |
| ▲ | platevoltage 4 days ago | parent | next [-] | | I totally get that you meant this in a nuanced way, but at face value it sort of reads like... Joe Rogan has high enough accuracy that I don't have to fact check too often.
Newsmax has high enough accuracy that I don't have to fact check too often, etc. If you accept the output as accurate, why would fact checking even cross your mind? | | |
| ▲ | gspetr 4 days ago | parent | next [-] | | Not a fan of that analogy. There is no expectation (from a reasonable observer's POV) of a podcast host to be an expert at a very broad range of topics from science to business to art. But there is one from LLMs, even just from the fact that AI companies diligently post various benchmarks including trivia on those topics. | |
| ▲ | simianwords 4 days ago | parent | prev [-] | | Do you question everything your dad says? | | |
| |
| ▲ | collingreen 5 days ago | parent | prev | next [-] | | Without some exploratory fact checking how do you estimate how high the accuracy is and how often you should be fact checking to maintain a good understanding? | | |
| ▲ | simianwords 4 days ago | parent [-] | | I did initial tests so that I don't have to do it anymore. | | |
| ▲ | jibal 4 days ago | parent | next [-] | | Everyone else has done tests that indicate that you do. | | |
| ▲ | glenstein 4 days ago | parent [-] | | And this is why you can't use personal anecdotes to settle questions of software performance. Comment sections are never good at being accountable for how vibes-driven they are when selecting which anecdotes to prefer. |
| |
| ▲ | malfist 4 days ago | parent | prev [-] | | If there's one thing that's constant it's that these systems change. |
|
| |
| ▲ | mvdtnz 4 days ago | parent | prev [-] | | If you're not fact checking it how could you possibly know that? |
|
| |
| ▲ | JustExAWS 5 days ago | parent | prev [-] | | I literally just had ChatGPT create a Python program and it used .ends_with instead of .endswith. This was with ChatGPT 5. I mean, it got a basic built-in string method of one of the most popular languages in the world wrong. | |
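For reference, the real method versus what the model produced:

```python
"report.pdf".endswith(".pdf")   # True: str.endswith is the actual method
"report.pdf".ends_with(".pdf")  # AttributeError: 'str' object has no attribute 'ends_with'
```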
| ▲ | simianwords 5 days ago | parent [-] | | "but using LLMs for answering factual questions" - this was about fact checking. Of course I know LLMs are going to hallucinate in coding sometimes. | |
| ▲ | JustExAWS 5 days ago | parent [-] | | So it isn't a "fact" that the built-in Python string method that tests whether a string ends with a substring is "endswith"? See https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect If you know that a source isn't to be believed in an area you know about, why would you trust that source in an area you don't know about? Another funny anecdote, ChatGPT just got the Gell-Mann effect wrong. https://chatgpt.com/share/68a0b7af-5e40-8010-b1e3-ee9ff3c8cb... | |
| ▲ | simianwords 5 days ago | parent | next [-] | | It got it right with thinking, which was the challenge I posed:
https://chatgpt.com/share/68a0b897-f8dc-800b-8799-9be2a8ad54... | | |
| ▲ | OnlineGladiator 4 days ago | parent [-] | | The point you're missing is it's not always right. Cherry-picking examples doesn't really bolster your point. Obviously it works for you (or at least you think it does), but I can confidently say it's fucking god-awful for me. | | |
| ▲ | glenstein 4 days ago | parent | next [-] | | >The point you're missing is it's not always right. That was never their argument. And it's not cherry picking to make an argument that there's a definable set of examples where it returns broadly consistent and accurate information that they invite anyone to test. They're making a legitimate point and you're strawmanning it and randomly pointing to your own personal anecdotes, and I don't think you're paying attention to the qualifications they're making about what it's useful for. | |
| ▲ | simianwords 4 days ago | parent | prev [-] | | Am I really the one cherry picking? Please read the thread. | | |
| ▲ | OnlineGladiator 4 days ago | parent [-] | | Yes. If someone gives an example of it not working, and you reply "but that example worked for me" then you're cherry picking when it works. Just because it worked for you does not mean it works for other people. If I ask ChatGPT a question and it gives me a wrong answer, ChatGPT is the fucking problem. | | |
| ▲ | simianwords 4 days ago | parent [-] | | The poster didn't use "thinking" model. That was my original challenge!! Why don't you try the original prompt using thinking model and see if I'm cherry picking? | | |
| ▲ | OnlineGladiator 4 days ago | parent [-] | | Every time I use ChatGPT I become incredibly frustrated with how fucking awful it is. I've used it more than enough, time and time again (just try the new model, bro!), to know that I fucking hate it. If it works for you, cool. I think it's dogshit. | | |
| ▲ | simianwords 4 days ago | parent | next [-] | | Share your examples so that it can be useful to everyone | |
| ▲ | glenstein 4 days ago | parent | prev | next [-] | | They just spent like six comments imploring you to understand that they were making a specific point: generally reliable on non-niche topics using thinking mode. And that nuance bounced off of you every single time as you kept repeating it's not perfect, dismissing those qualifications as cherry picking, and repeating personal anecdotes. I'm sorry, but this is a lazy and unresponsive string of comments that's degrading the discussion. | |
| ▲ | OnlineGladiator 4 days ago | parent [-] | | The neat thing about HN is we can all talk about stupid shit and disagree about what matters. People keep upvoting me, so I guess my thoughts aren't unpopular and people think it's adding to the discussion. I agree this is a stupid comment thread, we just disagree about why. | | |
| ▲ | glenstein 3 days ago | parent [-] | | Again, they were making a specific argument with specific qualifications and you weren't addressing their point as stated. And your objections such as they are would be accounted for if you were reading carefully. You seem more to be completely missing the point than expressing a disagreement so I don't agree with your premise. |
|
| |
| ▲ | ninetyninenine 4 days ago | parent | prev | next [-] | | Objectively he didn't cherry pick. He responded to the person and it got it right when he used the "thinking" model WHICH he did specify in his original comment. Why don't you stick to the topic rather than just declaring it's utter dog shit. Nobody cares about your "opinion" and everyone is trying to converge on a general ground truth no matter how fuzzy it is. | | |
| ▲ | OnlineGladiator 4 days ago | parent [-] | | All anybody is doing here is sharing their opinion unless you're quoting benchmarks. My opinion is just as useless as yours; it's just that some find mine more interesting and some find yours more interesting. How do you expect to find a ground truth from a non-deterministic system using anecdata? | |
| ▲ | glenstein 3 days ago | parent [-] | | This isn't a people having different opinions thing, this is you overlooking specific caveats and talking past comments that you're not understanding. They weren't cherry picking, and they made specific qualifications about the circumstances where it behaves as expected, and your replies keep losing track of those details. | | |
| ▲ | OnlineGladiator 2 days ago | parent [-] | | And I think you're completely missing the point. And you say this comment thread is a waste and yet you keep replying. What exactly are you trying to accomplish here? Do you think repeating yourself for a fifth time is going to achieve something? | | |
| ▲ | glenstein 2 days ago | parent [-] | | The difference is I can name specific things that you are in fact demonstrably ignoring, and already did name them. You're saying you just have a different opinion, in an attempt to mirror the form of my criticism, but you can't articulate a comparable distinction and you're not engaging with the distinction I'm putting forward. | | |
| ▲ | OnlineGladiator 2 days ago | parent [-] | | So your goal here is to say the same thing over and over again and hope I eventually give the affirmation you so desperately need? You've already declared that you're right multiple times. Nobody cares but you. https://xkcd.com/386/ You might want to develop a sense of humor. You'll enjoy life more. | | |
| ▲ | glenstein a day ago | parent [-] | | My goal is to invite you to think critically about the specific caveats in the comment you are replying to instead of ignoring those caveats. They said that generally speaking using thinking mode on non niche topics they can get reliable answers, and invited anyone who disagreed with it to offer examples where it fails to perform as expected, a constructive structure for counter examples in case anyone disagreed. You basically ignored all of those specifics, and spuriously accused them of cherry picking when they weren't, and now you don't want to take responsibility for your own words and are using this conversation as a workshopping session for character attacks in hopes that you can make the conversation about something else. | | |
| ▲ | OnlineGladiator a day ago | parent [-] | | As I've said many times before, I am aware of everything you have said. I just don't care. You seem to be really upset that someone on the internet disagrees with you. And from my perspective, you are the one that has no self-awareness and is completely missing the point. You don't even understand the conversation we're having and yet you're constantly condescending. I'm sure if you keep repeating yourself though I'll change my mind. | | |
| ▲ | glenstein 9 hours ago | parent [-] | | Simianwords said: "use GPT-5 with thinking enabled and search disabled, and get it to give you inaccurate facts for non-niche, non-deep topics" and noted that mistakes were possible, but rare. JustExAWS replied with an example of getting Python code wrong and suggested it was a counterexample. Simianwords correctly noted that their comment originally said thinking mode for factual answers on non-niche topics and posted a link that got the Python answer right with thinking enabled. That's when you entered, suggesting that Simian was "missing" the point that GPT (not distinguishing thinking or regular mode) was "not always right". But they had already acknowledged multiple times that it was not always right. They said the accuracy was "high enough", noted that LLMs get coding wrong, and reiterated that their challenge was specifically about thinking mode. You, again without acknowledging the criteria they had noted previously, insisted this was cherry picking, missing the point that they were actually being consistent from the beginning, inviting anyone to give an example showing otherwise. At no point between then and here have you demonstrated an awareness of these criteria, despite your protestations to the contrary. Instead of paying attention to any of the details, you're insulting me and retreating into irritated resentment. | |
| ▲ | OnlineGladiator 8 hours ago | parent [-] | | Thank you for repeating yourself again. It's really hammering home the point. Please, continue. |
|
|
|
|
|
|
|
|
| |
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
|
|
|
|
|
| |
| ▲ | cdrini 4 days ago | parent | prev [-] | | I sometimes feel like we throw around the word fact too often. If I misspell a wrd, does that mean I have committed a factual inaccuracy? Since the wrd is explicitly spelled a certain way in the dictionary? |
|
|
|
|