| ▲ | sothatsit 2 days ago |
| The problem is that SEO has made it hard to find trustworthy sites in the first place. The places I trust the most now for getting random information are Reddit and Wikipedia, which is absolutely ridiculous as they are terrible options. But SEO slop machines have made it so hard to find the good websites without putting in more legwork than makes sense a lot of the time. Funnily enough, this makes AI look like a good option to cut through all the noise despite its hallucinations. That's obviously not acceptable when it comes to food safety concerns though. |
|
| ▲ | omnicognate 2 days ago | parent | next [-] |
| If I do that search on Google right now, the top result is the National Pork Board (pork.org): ad-free, pop-up-free, waffle-free, and with the correct answer in large font at the top of the page. It's in F, but I always stick " C" at the end of temperature queries. In this case that makes the top result foodsafety.gov, which is equally if not more authoritative, also ad-, waffle-, and pop-up-free, and with the answer immediately visible. Meanwhile the AI overview routinely gives me completely wrong information. There's zero chance I'm going to trust it when a wrong answer can mean I give my family food poisoning. I agree that there is a gigaton of crap out there, but the quality information sources are still there too. Google's job is to list those at the top, and it actually has done so this time, although I'll acknowledge it doesn't always, and I've taken to using Kagi in preference for this reason. A crappy AI preview that can't be relied on for anything isn't an acceptable substitute. |
| |
| ▲ | pasc1878 2 days ago | parent [-] |
| Kagi sort of gets this correct. Kagi search gives the pork board first as well, but note that site fails mtkd's requirements, giving the temperature in degrees Fahrenheit and not Celsius. The second hit does give a correct temperature, but has a cookie banner (which at least can be rejected with one click). The optional Kagi Assistant quotes the pork board, then the USDA (which is also only in Fahrenheit), and third a blog on the site of a thermometer maker that quotes the UK Food Standards Agency and gives its temperature. However, there is a problem: the UK FSA does not agree with the USDA on the temperature; it puts it higher, at 70 degrees C rather than 63. So if you go by the USDA figure you are taking a risk. The Kagi Assistant gives both temperatures, but it is not clear which one is correct, although both figures are correctly linked to the actual sites. |
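(Aside, not from the thread: a quick sanity check on these figures using the standard Fahrenheit-to-Celsius conversion C = (F − 32) × 5/9. The USDA figure of 145 F for whole cuts of pork works out to roughly 63 C, while the UK FSA's 70 C corresponds to about 158 F, a noticeably higher target.)

```python
# Minimal sketch, not from the thread: converting the figures discussed above
# with the standard formula C = (F - 32) * 5/9.

def f_to_c(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

def c_to_f(c: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

print(f"USDA 145 F  = {f_to_c(145):.1f} C")  # ~62.8 C, i.e. the 63 C cited above
print(f"UK FSA 70 C = {c_to_f(70):.0f} F")   # 158 F, a noticeably higher target
```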
| ▲ | omnicognate 2 days ago | parent [-] |
| I don't really see the problem with F and C. As I mentioned, I always stick " C" on the end of temperature queries. It's 2 characters and the results always have the centigrade temps, on both Kagi and Google. |
| ▲ | pasc1878 2 days ago | parent [-] |
| The OP's main complaint was the lack of C when that is the temperature scale used in their country. |
| ▲ | omnicognate 2 days ago | parent [-] |
| Of course. What else would I think they were complaining about? I also live in a country that uses C. That's why I always stick " C" on the end of temperature queries. It would be nice if they automatically prioritised those results, but that's a search engine improvement and nobody's working on those any more [1]. A half-arsed AI summary that can't be trusted to get the actual temperature right certainly doesn't solve it. |
| [1] Except Kagi, and even they're distracted by the AI squirrel. |
| ▲ | AlecSchueler a day ago | parent [-] |
| The point is that the AI just gives you the answer without you having to concern yourself with what measurement system they use in the US. |
| ▲ | omnicognate a day ago | parent [-] |
| As I said, it routinely gives incorrect data, so it can't be relied on for something that matters, like a safe cooking temperature. Note that we're talking about the Google AI Summary here, not AI in general. Whatever magical capabilities you think your favoured model has or will soon have, the Google AI Summary is currently utter garbage and routinely spouts nonsense. (Don't try and persuade me otherwise. I have to use Google at work so I see its lies first hand every day.) |
| ▲ | AlecSchueler 15 hours ago | parent [-] |
| I think the point is that the convenience outweighs the accuracy for now! I just look it up with AI, then overcook it to be safe. |
| ▲ | omnicognate 4 hours ago | parent [-] |
| What was the point in looking it up then? You know, at "I'd rather overcook my food than click the top result on my search", I think I'm done. |
|
| ▲ | jordanb 2 days ago | parent | prev | next [-] |
| Google could have cut down on this if they wanted. And in general they did until they fired Matt Cutts. The reality is, every time someone's search is satisfied by an organic result, that's lost revenue for Google. |
| |
| ▲ | taurath 2 days ago | parent [-] |
| Which is the stupidest position ever if Google wants to exist long term. Unfortunately there are no workable alternatives. DDG is somehow not better, though I use it to avoid trackers. |
| ▲ | Miraste 2 days ago | parent | next [-] |
| It's a bit like the Easter Islanders cutting down all of their trees for wood. Where does Google management think they'll get search results if they kill the entire internet? Has anyone at Google thought that far ahead? |
| ▲ | 9dev 2 days ago | parent [-] |
| The internet they dream of is like a large mall. It consists of service providers selling you something, and Google directing you to them in exchange for some of the profit. The role of users in this model is that of a piñata that everyone hits to make money drop out. |
| |
| ▲ | what 2 days ago | parent | prev [-] |
| DDG is just serving you remixed Bing and Yandex results. There are basically no alternatives to GBY that do their own crawling and maintain their own index. |
| ▲ | Zardoz84 2 days ago | parent [-] |
| Qwant? |
| ▲ | touisteur 2 days ago | parent [-] |
| Qwant also has an AI overview. Pretty bad too. |
| ▲ | frm88 2 days ago | parent [-] |
| I've been using noai.duckduckgo.com for a few weeks now and it's pretty reliable. Still Yandex etc., but at least no AI overview any more. (Yes, I know about the settings, but they get deleted on every restart.) |
|
| ▲ | tonyedgecombe 2 days ago | parent | prev | next [-] |
| > The problem is that SEO has made it hard to find trustworthy sites in the first place. |
| We should remember that's partly Google's fault as well. They decided SEO sites were OK. |
| |
| ▲ | FredPret 2 days ago | parent [-] |
| Well, they decided which sites were OK, and then people SEO'd a bunch of crap into Google's idea of a good website. I'm no fan of Google, but it's not so simple to figure out what is relevant, good content at the scale of the internet while confronted by an army of adversarial actors who can make money by working out what you value in a site. |
| ▲ | sothatsit 2 days ago | parent [-] |
| It is a game of whack-a-mole in some sense, but Google also isn't swinging the mallet very fast. |
|
| ▲ | al_borland 2 days ago | parent | prev | next [-] |
| AI is being influenced by all that noise. It isn't necessarily going to an authoritative source; it's looking at Reddit and some SEO slop and using that to come up with the answer. We need AI that's trained exclusively on verified data, not random websites and internet comments. |
| |
| ▲ | jval43 2 days ago | parent | next [-] |
| I asked Gemini about some Ikea furniture dimensions and it gave seemingly correct answers, until one suddenly didn't make sense. It turns out all the information it gave me came from old Reddit posts, and lots of it was factually wrong. Gemini, however, still linked some official Ikea pages as the "sources". It'll straight up lie to you and then hide where it actually got its info from. Usually Reddit. |
| ▲ | sothatsit 2 days ago | parent | prev | next [-] |
| Creating better datasets would also help to improve the performance of the models, I would assume. Unfortunately, the costs to produce high-quality datasets of a sufficient size seem prohibitive today. I'm hopeful this will be possible in the future though, maybe using a mix of 1) using existing LLMs to help humans filter the existing internet-scale datasets, and/or 2) finding some new breakthroughs to make model training more data-efficient. |
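(Aside, not from the thread: a minimal sketch of what idea 1) could look like, with an existing LLM used as a cheap first-pass quality filter and humans reviewing only the borderline documents. The llm_quality_score helper is hypothetical, a stand-in for whatever model or API you would actually call.)

```python
# Minimal sketch, assuming a hypothetical LLM scoring helper; not an actual pipeline
# from the thread. Auto-keep high scorers, auto-drop low scorers, route the rest to humans.

from typing import Iterable, Iterator, Tuple

def llm_quality_score(text: str) -> float:
    """Hypothetical: ask an LLM to rate a document from 0.0 (SEO slop) to 1.0 (high quality)."""
    raise NotImplementedError("plug in the model/API of your choice")

def triage(docs: Iterable[str], keep: float = 0.8, drop: float = 0.3) -> Iterator[Tuple[str, str]]:
    """Yield (decision, document) pairs for a corpus of candidate training documents."""
    for doc in docs:
        score = llm_quality_score(doc)
        if score >= keep:
            yield ("keep", doc)
        elif score <= drop:
            yield ("drop", doc)
        else:
            yield ("human_review", doc)
```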
| ▲ | heavyset_go 2 days ago | parent | prev [-] |
| It'll still hallucinate. |