quantummagic 3 days ago

> It’s a lossy encyclopedia that can lie to and manipulate you.

So can a traditional encyclopedia.

mrweasel 3 days ago | parent | next [-]

We're at such a strange point where even schoolchildren know that something like Wikipedia isn't necessarily factually correct and that you need to double-check. They then go and ask ChatGPT, as if it wasn't trained on Wikipedia.

We haven't reached the stage yet where the majority of people are as sceptical of chatbots as they are of Wikipedia.

I get that even if people know not to trust a wiki, they might anyway, because, meh, good enough. But I'd still like us to move to a stage where the majority is at least somewhat aware that the chatbot might be wrong.

stego-tech 3 days ago | parent [-]

To be fair, most people aren’t even critical of Wikipedia. They read an article, consume its content, and believe themselves competent experts without digging into the sources, the papers, or the talk pages for discourse and dissent.

Giving LLMs credibility as "lossy encyclopedias" is tacit approval of further dumbing-down of humanity through answer engines instead of building critical thinking skills.

mrweasel 3 days ago | parent [-]

No, I agree that most people aren't critical (or critical enough) of Wikipedia. My point is that many of them know they should be.

stego-tech 3 days ago | parent | prev [-]

True, but in that case we call it "errors" or "propaganda", depending on the context and source. Plus the steep cost of traditional encyclopedias, the need to refresh collections periodically, and the role of librarians all acted as deterrents against lying (since they're reference material).

Wikipedia can also lie, obviously, but it at least requires sources to be cited, and I can dig deeper into topics at my leisure or need in order to improve my knowledge.

I cannot do either with an LLM. It is not obligated to cite sources, and even when it does, it can just make shit up that's impossible to follow or that leads back to AI-generated slop - self-referencing, in other words. It also doesn't teach you (by default; my opinions of its teaching skills are an entirely different topic), but instead gives you an answer that's authoritative in tone, but not in practice.

Normalizing LLMs as "lossy encyclopedias" is a dangerous trend in my opinion, because it effectively handwaves away the need for the critical thinking skills associated with research and complex task execution, something in sore supply in the modern Western world.

simonw 3 days ago | parent [-]

> Normalizing LLMs as “lossy encyclopedias” is a dangerous trend in my opinion, because it effectively handwaves the need for critical thinking skills associated with research and complex task execution

Calling them "lossy encyclopedias" isn't intended as a compliment! The whole point of the analogy is to emphasize that using them in place of an encyclopedia is a bad way to apply them.

stego-tech 3 days ago | parent [-]

That might've been the author's intent, but the comments in this thread (and the downvotes of my opinions) suggest that a non-zero number of people take that analogy as the best justification yet for LLMs-as-answer-engines, one that shouldn't be assailable by dissenters or critics.

So long as people are dumb enough to gleefully cede their expertise and sovereignty to a chatbot, I’ll keep desperately screaming into the void that they’re idiots for doing so.