deadbabe 2 days ago:
> The overviews are also wrong and difficult to get fixed.

Let's not pretend that some websites aren't straight-up bullshit. There are blogs spreading bullshit, wrong info, biased info, content marketing for some product, and so on. And lord knows comments are frequently wrong; just look around Hacker News. I'd bet that LLMs are actually wrong less often than typical search results, because they pull from far greater training data. "Wisdom of the crowds".
Miraste 2 days ago:
I've found that AI Overview is wrong significantly more often than other LLMs, partly because it is not retrieving answers from its training data, and partly because it's a cheap garbage LLM. There is no "wisdom of the crowds" here. Instead, it tries to parse the Google search results so it can answer with a source, and it's much worse at pulling the right information from a webpage than a human, or even a high-end LLM.
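To make that mechanism concrete, here's a minimal sketch of the difference between a parametric answer (drawn from training data) and a retrieval-grounded one (extracted from whatever the search results happen to say). This is a hypothetical illustration, not Google's actual pipeline; call_llm and web_search are assumed placeholders for a completion API and a search backend.

  def call_llm(prompt: str) -> str:
      # Placeholder for any LLM completion API (assumption).
      raise NotImplementedError

  def web_search(query: str, k: int = 5) -> list[str]:
      # Placeholder returning the text of the top-k result pages (assumption).
      raise NotImplementedError

  def parametric_answer(question: str) -> str:
      # "Wisdom of the crowds": the model answers from whatever it
      # absorbed across its entire training corpus.
      return call_llm(question)

  def grounded_answer(question: str) -> str:
      # AI-Overview-style: the model must extract its answer from a
      # handful of retrieved pages so it can cite a source; one bad
      # or misparsed page can dominate the output regardless of what
      # the training data says.
      pages = web_search(question)
      context = "\n---\n".join(pages)
      return call_llm(
          "Answer using only these sources:\n" + context
          + "\n\nQuestion: " + question
      )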
washadjeffmad 2 days ago:
> I'd bet that LLMs are actually wrong less often than typical search results, because they pull from far greater training data. "Wisdom of the crowds".

Is that relevant when we already have official truth sources: our websites? That information is ours and subject to change at our sole discretion. Google doesn't get to decide who our extensions are assigned to, what our hours of operation are, or what our business services do.

Our initial impression of AI Overview was positive as well, until this happened to us. And bear in mind the timeline: we didn't know this was happening, and even after we realized there was a trend, we didn't know why. We're in the middle of a softphone transition, so we initially blamed ourselves (and panicked a little when what we saw didn't match what we assumed was happening; why would people suddenly start calling wrong numbers?).

After we began collecting responses from misdirected callers and got a nearly unanimous answer of "Google" (don't be proud of that), I called a meeting with our communications and marketing departments and web team to figure out how we'd log and investigate incidents so we could fix the sources. What they turned up was that the numbers had never been publicly published or associated with anything Google AI was telling callers. This wasn't our fault.

So now we're concerned that bad info is being amplified elsewhere on the web. We've even considered pulling back the Google-advertised phone extensions so they forward either to a message telling callers that Google AI was wrong and to visit our website, or, admitting defeat, to wherever Google says they should go (subject to change at Google's pleasure, obviously). We can't do this for established public-facing numbers, though, without disrupting business services.

What a stupid saga, but that's how it works when Google treats the world like its personal QA team.

(OT, but since we're all working for them by generating training data for their models and fixing their global-scale products, anyone for Google-sponsored UBI?)
Scarblac 2 days ago:
But when my site is wrong about me, it's my fault and I can fix it if I care. If Google shows bullshit about me at the top of its search results, I'm helpless. (For "me", read any company, person, etc.)
mvdtnz 2 days ago:
When asking a question, do you not see a difference between the following?

1. Here's the answer (but it's misinformation).

2. Here are some websites that look like they might have the answer.