| ▲ | nicole_express 10 hours ago |
| It's an odd thing here, because I don't really understand why this is LLM-specific at all. If someone came up to me and asked "who's the 6 Nimmt world champion?" I'd google it and probably find the same result, and have no reason not to believe it. I mean, for all I know the game is being made up too, though it has more sources at least. |
|
| ▲ | pmontra 4 hours ago | parent | next [-] |
| It is not LLM-specific. The conclusion of the post states: > The web was already being poisoned for search and link ranking long before LLMs existed. But it continues: > We are now plugging generative models directly into that poisoned pipeline and asking them to reason confidently about “truth” on our behalf. So it's a shift from trusting Google to trusting the AI, which may or may not be more insidious, depending on the attitude of each of us. |
| |
| ▲ | bambax an hour ago | parent [-] |
| It's a shift, but a little worse. Checking and auditing search results is easier and more ingrained; even if many people don't do it, everyone has been hit by spam at some point, so everyone knows it exists. LLMs are the same thing, but they have an air of authority about them that a web search lacks, at least for now. |
|
|
| ▲ | SchemaLoad 8 hours ago | parent | prev | next [-] |
| The difference imo is removing the information from its source. Previously you'd use the source of the information to gauge how much you trust it: if it's a reddit post or a no-name website, you'd likely be skeptical if it doesn't seem backed up by better sources. But now the info is coming from an LLM that you generally trust to be knowledgeable, and the language it uses backs up this feeling. The OP is highlighting how incredibly easy it is for a very small amount of information on the web to completely dictate the output of the LLM into saying whatever you want. |
|
| ▲ | yen223 9 hours ago | parent | prev | next [-] |
| A lot of people seem to think this is an LLM problem, but you're right: this is a general epistemological problem with relying on the Internet (or really, any piece of literature) as a source of truth. |
| |
| ▲ | chneu 2 hours ago | parent [-] |
| The LLM part of the "new" problem is the speed at which it can proliferate and the trust people seem to have in AI answers. Idk |
|
|
| ▲ | freakynit 5 hours ago | parent | prev | next [-] |
| Because outside of the tech community (and in fact for many inside it), nearly 100% of people take what these ChatGPT-like tools answer as the truth, without questioning it or cross-verifying it even once. |
| |
| ▲ | hobofan 2 hours ago | parent [-] |
| In that case, though, most of the mitigations listed by the author (e.g. surfacing the source) don't help. It's also no different from traditional works with citations (be they Youtube videos or peer-reviewed academic papers), where hardly anybody verifies what's written in the cited sources. The only real alternatives would be:
| - Kicking off a deep-research-like investigation for each simple query
| - Introducing a trusted middleman for sources, significantly cutting down the available information (e.g. restricting Wikipedia to locked-down/moderated pages)
| - Not having any information at all, since at some point you can rarely ever verify anything, depending on how strict your definition of "verify" is |
|
|
| ▲ | locallost 2 hours ago | parent | prev | next [-] |
| You would also find other results (assuming what you're searching for is not a random made-up thing). The issue with LLMs is IMHO bigger, because they will give you answers as a matter of fact, without any other consideration. |
|
| ▲ | refulgentis 9 hours ago | parent | prev [-] |
| Closed it after “This house of cards only needs a $12 domain!”, right under “Sorry, Wikipedia.”, right under their Wikipedia edit. |
| |
| ▲ | sdthjbvuiiijbb 8 hours ago | parent [-] |
| It's also clearly AI-generated writing. That doesn't help its credibility or interest. I'm extremely suspicious of people who use AI to write an ostensibly personal blog, for all the usual obvious reasons. |
| ▲ | apublicfrog 7 hours ago | parent [-] |
| What are you basing that on? I'm usually pretty good at sniffing out AI writing, and it smells human to me. |
| ▲ | riffraff an hour ago | parent | next [-] |
| I had the impression it was AI writing too, because of the second half of the article. The first part looks genuine, but the part starting at "trust laundering" smells fake: the scary single sentence followed by a whole paragraph of single-clause sentences hints at AI. Perhaps we've all just become paranoid, but even if it's not an LLM writing this, it puts me off now. And the AI image at the top of the page does not help with the feeling. |
| ▲ | chneu 2 hours ago | parent | prev | next [-] |
| Agreed. Nothing about this post really stood out as AI; it didn't raise a single flag for me. I think calling something AI-generated is just a lazy way of dismissing stuff nowadays. |
| ▲ | malfist 7 hours ago | parent | prev [-] |
| Why "agents (where the money is)"? Fake profundity abounds in the post. |
| ▲ | esquivalience 3 hours ago | parent [-] |
| The author has been using parenthetical comments like that since at least 2017, judging by a review of old posts on that site. |
|
|
|
|