latexr 3 days ago

A lossy encyclopaedia should be missing information and be obvious about it, not making it up without your knowledge and changing the answer every time.

When you have a lossy piece of media, such as a compressed sound or image file, you can always see the resemblance to the original and note the degradation as it happens. You never have a clear JPEG of a lamp, compress it, and get a clear image of the Milky Way, then reopen the image and get a clear image of a pile of dirt.

Furthermore, an encyclopaedia is something you can reference and learn from without a goal, it allows you to peruse information you have no concept of. Not so with LLMs, which you have to query to get an answer.

gjm11 3 days ago | parent | next [-]

Lossy compression does make things up. We call them compression artefacts.

In compressed audio these can be things like clicks and boings and echoes and pre-echoes. In compressed images they can be ripply effects near edges, banding in smoothly varying regions, but there are also things like https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres... where one digit is replaced with a nice clean version of a different digit, which is pretty on-the-nose for the LLM failure mode you're talking about.

Compression artefacts generally affect small parts of the image or audio or video rather than replacing the whole thing -- but in the analogy, "the whole thing" is an encyclopaedia and the artefacts are affecting little bits of that.

Of course the analogy isn't exact. That would be why S.W. opens his post by saying "Since I love collecting questionable analogies for LLMs,".

moregrist 3 days ago | parent | next [-]

> Lossy compression does make things up. We call them compression artefacts.

I don’t think this is a great analogy.

Lossy compression of images or signals tends to throw out information based on how humans perceive it, keeping the most important perceptual parts and discarding the less important ones. For example, JPEG essentially removes high-frequency components from an image because most of the perceptually relevant information lives in the low-frequency parts. Similarly, POTS phone encoding and MP3 both compress audio signals based on how humans perceive audio frequencies.
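To make that concrete, here is a rough Python sketch of the JPEG-style idea (the 8x8 block and the 'keep' cutoff are toy choices, not any real codec's settings): zero out the high-frequency DCT coefficients of a block and reconstruct it. Detail degrades gradually; nothing new gets invented.

    import numpy as np
    from scipy.fft import dctn, idctn

    def lossy_block(block, keep=3):
        # Keep only the lowest keep x keep DCT coefficients, JPEG-style.
        coeffs = dctn(block, norm="ortho")
        mask = np.zeros_like(coeffs)
        mask[:keep, :keep] = 1.0
        return idctn(coeffs * mask, norm="ortho")

    block = np.random.rand(8, 8) * 255     # stand-in for one 8x8 image block
    approx = lossy_block(block)            # high frequencies discarded
    print(np.abs(block - approx).mean())   # error grows as keep shrinks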

The perceived degradation of most lossy compression is gradual with the amount of compression and not typically what someone means when they say “make things up.”

LLM hallucinations aren’t gradual and the compression doesn’t seem to follow human perception.

Vetch 3 days ago | parent | next [-]

You are right and the idea of LLMs as lossy compression has lots of problems in general (LLMs are a statistical model, a function approximating the data generating process).

Compression artifacts (which are deterministic distortions in reconstruction) are not the same as hallucinations (plausible samples from a generative model; even when greedy, this is still sampling from the conditional distribution). A better identification is with super-resolution. If we use a generative model, the result will be clearer than a normal blotchy resize but a lot of details about the image will have changed as the model provides its best guesses at what the missing information could have been. LLMs aren't meant to reconstruct a source even though we can attempt to sample their distribution for snippets that are reasonable facsimiles from the original data.

An LLM provides a way to compute the probability of given strings. Paired with entropy coding and on-line learning on the target data, this gets us to the correct MDL-based lossless-compression view of LLMs.
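As a rough sketch of that view (assuming a Hugging Face causal LM; "gpt2" is just an illustrative stand-in), the negative log-probability the model assigns to a string is, up to the entropy coder's overhead, the number of bits needed to transmit it losslessly:

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def code_length_bits(text):
        # Bits an arithmetic coder driven by the model would need for the text
        # (ignoring the first token and the coder's own overhead).
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        logp = torch.log_softmax(logits[0, :-1], dim=-1)
        token_logp = logp[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
        return -token_logp.sum().item() / math.log(2)

    print(code_length_bits("The quick brown fox jumps over the lazy dog."))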

baq 3 days ago | parent | prev [-]

LLM confabulations might as well be gradual in the latent space. I don’t think lossy is synonymous with perceptual, and the high-frequency components translate rather easily to less popular data.

latexr 3 days ago | parent | prev | next [-]

I feel like my comment is pretty clear that a compression artefact is not the same thing as making the whole thing up.

> Of course the analogy isn't exact.

And I don’t expect it to be, which is something I’ve made clear several times before, including on this very thread.

https://news.ycombinator.com/item?id=45101679

tsunamifury 2 days ago | parent [-]

More disagreeing with no meaningful value to the conversation. This is you. Constantly.

jpcompartir 3 days ago | parent | prev [-]

Interesting, in the LLM case these compression artefacts then get fed into the generating process of the next token, hence the errors compound.

ACCount37 3 days ago | parent [-]

Not really. The whole "inference errors will always compound" idea was popular in GPT-3.5 days, and it seems like a lot of people just never updated their knowledge since.

It was quickly discovered that LLMs are capable of re-checking their own solutions if prompted - and, with the right prompts, are capable of spotting and correcting their own errors at a significantly-greater-than-chance rate. They just don't do it unprompted.

Eventually, it was found that reasoning RLVR consistently gets LLMs to check themselves and backtrack. It was also confirmed that this latent "error detection and correction" capability is present even at base model level, but is almost never exposed - not in base models and not in non-reasoning instruct-tuned LLMs.

The hypothesis I subscribe to is that any LLM has a strong "character self-consistency drive". This makes it reluctant to say "wait, no, maybe I was wrong just now", even if a latent awareness that "past reasoning looks sketchy as fuck" is already present within the LLM. Reasoning RLVR encourages going against that drive and utilizing those latent error-correction capabilities.

jpcompartir 3 days ago | parent | next [-]

You seem to be responding to a strawman, and assuming I think something I don't think.

As of today, 'bad' generations early in the sequence still do tend towards responses that are distant from the ideal response. This is testable/verifiable by pre-filling responses, which I'd advise you to experiment with for yourself.

'Bad' generations early in the output sequence are somewhat mitigatable by injecting self-reflection tokens like 'wait', or with more sophisticated test-time compute techniques. However, those remedies can simultaneously turn 'good' generations into bad ones; they are post-hoc heuristics that treat symptoms, not causes.

In general, as the models become larger they are able to compress more of their training data. So yes, using the terminology of the commenter I was responding to, larger models should tend to have fewer 'compression artefacts' than smaller models.

ACCount37 3 days ago | parent [-]

With better reasoning training, the models mitigate more and more of that entirely by themselves. They "diverge into a ditch" less, and "converge towards the right answer" more. They are able to use more and more test-time compute effectively. They bring their own supply of "wait".

OpenAI's in-house reasoning training is probably best in class, but even lesser naive implementations go a long way.

Mallowram 3 days ago | parent | prev [-]

The problem is that language doesn't produce itself. Re-checking, correcting error is not relevant. Error minimization is not the fount of survival, remaining variable for tasks is. The lossy encyclopedia is neither here nor there, it's a mistaken path:

"Language, Halliday argues, "cannot be equated with 'the set of all grammatical sentences', whether that set is conceived of as finite or infinite". He rejects the use of formal logic in linguistic theories as "irrelevant to the understanding of language" and the use of such approaches as "disastrous for linguistics"."

ACCount37 3 days ago | parent [-]

Sorry, what? This is borderline incoherent.

mallowdram 3 days ago | parent [-]

The units themselves are meaningless without context. The point of existence, action, tasks is to solve the arbitrariness in language. Tasks refute language, not the other way around. This may be incoherent as the explanation is scientific, based in the latest conceptualization of linguistics.

CS never solved the incoherence of language, the conduit-metaphor paradox. It's stuck behind language's bottleneck, and it does so willingly, blind-eyed.

ACCount37 3 days ago | parent [-]

What? This is even less coherent.

You weren't talking to GPT-4o about philosophy recently, were you?

mallowdram 3 days ago | parent [-]

You'd need to know cutting-edge linguistics and signaling theory well beyond Shannon to parse this, not NLP or engineering reduction. What I've stated is extremely coherent to Systemic Functional Linguists.

Beyond this point engineers actually have to know what signaling is, rather than 'information.'

https://www.sciencedirect.com/science/article/abs/pii/S00033...

Ultimately, engineering chose the wrong approach to automating language, and it sinks the field. It's irreversible.

morpheos137 3 days ago | parent | next [-]

If not language, what training substrate do you suggest? Also, strong ideas are expressible coherently. You have an ironic pattern in your comments of getting lost in the very language morass you propose to deprecate. If we don't train models on language, what do we train them on? I have some ideas of my own, but I am interested in whether you can clearly express yours.

mallowdram 3 days ago | parent [-]

Neural/spatial syntax. Analoga of differentials. The code to operate this gets built before the component.

If language doesn't really mean anything, then automating it in geometry is worse than problematic.

The solution is starting over at 1947: measurement not counting.

morpheos137 3 days ago | parent [-]

The semantic meaning of your words here is non-existent. It is unclear to me how else you can communicate in a text-based forum if not by using words. Since you can't, despite your best effort, I am left to conclude you are psychotic and should probably be banned and seek medical help.

mallowdram 3 days ago | parent [-]

Engineers are so close-minded, you can't see the freight train bearing down on the industry. All to science's advantage in replacing engineers. Interestingly, if you dissect that last entry, I've just made the case that measurement (analog computation) is superior to counting (binary computation) and laid out the strategy for how. All it takes is brains, or an LLM, to decipher what it states.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3005627/

"First, cell assemblies are best understood in light of their output product, as detected by ‘reader-actuator’ mechanisms. Second, I suggest that the hierarchical organization of cell assemblies may be regarded as a neural syntax. Third, constituents of the neural syntax are linked together by dynamically changing constellations of synaptic weights (‘synapsembles’). Existing support for this tripartite framework is reviewed and strategies for experimental testing of its predictions are discussed."

morpheos137 3 days ago | parent [-]

I 100% agree analog computing would be better at simulating intelligence than binary. Why don't you state that rather than burying it under a mountain of psychobabble?

mallowdram 2 days ago | parent [-]

Listing the conditions, dichotomizing the frameworks counting/measurement is the farthest from psycho-babble. Anyone with knowledge of analog knows these terms. And enough to know analog doesn't simulate anything. And intelligence isn't what's being targeted.

ACCount37 3 days ago | parent | prev [-]

One of the main takeaways from The Bitter Lesson was that you should fire your linguists. GPT-2 knows more about human language than any linguist could ever hope to be able to convey.

If you're hitching your wagon to human linguists, you'll always find yourself in a ditch in the end.

mallowdram 3 days ago | parent [-]

Sorry, 2 billion years of neurobiology beats 60 years of NLP/LLMs, which know little to nothing about language, since "arbitrary points can never be refined or defined to specifics". Check your corners and know your inputs.

The bill is due on NLP.

ACCount37 2 days ago | parent [-]

Incoherent drivel.

Mallowram 2 days ago | parent [-]

[dead]

TomasBM 2 days ago | parent | prev | next [-]

I'd rather say LLMs are a lossy encyclopedia + other things. The other things part obviously does a lot of work here, but if we strip it away, we can claim that the remaining subset of the underlying network encodes true information about the world.

Purely based on language use, you would expect to see "dog bit the man" more often than "man bit the dog", which is a lossy way to represent "dogs are more likely to bite people than vice versa." And there's also the second lossy part: information not occurring frequently enough in the training data will not survive training.
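As a toy illustration (the corpus below is made up for the example), raw co-occurrence counts already encode that directional fact without storing any individual sentence:

    from collections import Counter

    corpus = [
        "the dog bit the man",
        "a dog bit a jogger",
        "the dog bit the mailman",
        "the man bit the dog",   # rare, but it happens
    ]

    bigrams = Counter()
    for sentence in corpus:
        words = sentence.split()
        bigrams.update(zip(words, words[1:]))

    print(bigrams[("dog", "bit")], "vs", bigrams[("man", "bit")])   # 3 vs 1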

Of course, other things also include inaccurate information, frequent but otherwise useless sentences (any sentence with "Alice" and "Bob"), and the heavily pruned results of the post-training RL stage. So, you can't really separate the "encyclopedia" from the rest.

Also, I'm not sure lossy always means that the loss is distributed (e.g., lower resolution). Loss can also be localized/biased (e.g., lose only black pixels); it's just that useful lossy compression algorithms tend to minimize the noticeable loss. Tho I could be wrong.

gf000 3 days ago | parent | prev | next [-]

I don't think there is a singular "should" that fits every use case.

E.g. a Bloom filter also doesn't "know" what it knows.
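To show what I mean, a minimal Bloom filter sketch (the size and hashing scheme are arbitrary): it answers membership queries with "definitely not" or "probably yes", can return false positives, and has no way to list or introspect what it actually holds.

    import hashlib

    class BloomFilter:
        def __init__(self, size=1024, hashes=3):
            self.size, self.hashes = size, hashes
            self.bits = bytearray(size)

        def _positions(self, item):
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item):
            for p in self._positions(item):
                self.bits[p] = 1

        def __contains__(self, item):
            # "Probably yes" or "definitely no"; false positives are possible.
            return all(self.bits[p] for p in self._positions(item))

    bf = BloomFilter()
    bf.add("dog")
    print("dog" in bf)    # True
    print("lamp" in bf)   # usually False, occasionally a false positive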

latexr 3 days ago | parent [-]

I don’t understand the point you’re trying to make. The given example confused me further, since nothing in my argument is concerned with the tool “knowing” anything, that has no relation to the idea I’m expressing.

I do understand and agree with a different point you’re making somewhere else in this thread, but it doesn’t seem related to what you’re saying here.

https://news.ycombinator.com/item?id=45101946

mock-possum 3 days ago | parent | prev | next [-]

Yeah an LLM is an unreliable librarian, if anything.

latexr 2 days ago | parent [-]

That’s a much better analogy. You have to specifically ask them for information and they will happily retrieve it for you, but because they are unreliable they may get you the wrong thing. If you push back they’ll apologise and try again (librarians try to be helpful) but might again give you the wrong thing (you never know, because they are unreliable).

vrighter 2 days ago | parent [-]

There's a big difference between giving you correct information about the wrong thing, vs giving you incorrect information about the right thing.

A librarian might bring you the wrong book; that's the former. An LLM does the latter. They are not the same.

latexr 2 days ago | parent [-]

Fair. With the unreliable librarian you’d be at an advantage because you’d immediately see “this is not what I asked for”, which is not the case with LLMs (and hence what makes them so problematic).

petesergeant 2 days ago | parent | prev | next [-]

You are absolutely right, and exactly the same thing came into my head while reading this. Some of the replies to you here are very irritating and seem not to grasp the point you're making, so I thought I'd chime in for moral support.

Lerc 3 days ago | parent | prev | next [-]

The argument is that a banana is a squishy hammer.

You're saying hammers shouldn't be squishy.

Simon is saying don't use a banana as a hammer.

latexr 3 days ago | parent [-]

> You're saying hammers shouldn't be squishy.

No, that is not what I’m saying. My point is closer to “the words chosen to describe the made up concept do not translate to the idea being conveyed”. I tried to make that fit into your idea of the banana and squishy hammer, but now we’re several levels of abstraction deep using analogies to discuss analogies so it’s getting complicated to communicate clearly.

> Simon is saying don't use a banana as a hammer.

Which I agree with.

tsunamifury 3 days ago | parent [-]

This is the type of comment that has been killing HN lately. “I agree with you but I want to disagree because I’m generally just that type of person. Also I am unable to tell my disagreeing point adds nothing.”

latexr 3 days ago | parent [-]

Except that’s not what I’m saying at all. If anything, the “type of comment that has been killing HN” (and any community) is the kind that misunderstands and criticises what someone else says without providing any insight while engaging in ad hominem attacks (which are explicitly against the HN guidelines). It is profoundly ironic that you are actively attacking others for the exact behaviour you are engaging in. I will kindly ask you not to do that. You are the first person in this immediate thread being rude and not adding to the collective understanding of the argument.

We are all free to agree with one part of an argument while disagreeing with another. That’s what healthy discourse is; life is not black and white. By way of example, if one says “apples are tasty because they are red”, it is perfectly consistent to agree that apples are tasty but disagree that their colour is the reason. And by doing so we engage in a conversation to correct a misconception.

tsunamifury 2 days ago | parent [-]

More of the same

JustFinishedBSG 3 days ago | parent | prev | next [-]

I actually disagree. Modern encoding formats can, and do, hallucinate blocks.

It’s a lot less visible and, I guess, less dramatic than with LLMs, but it happens frequently enough that at every major event there seem to be false conspiracy theories based on video “proofs” that are just encoding artifacts.

simonw 3 days ago | parent | prev | next [-]

I think you are missing the point of the analogy: a lossy encyclopedia is obviously a bad idea, because encyclopedias are meant to be reliable places to look up facts.

latexr 3 days ago | parent | next [-]

And my point is that “lossy” does not mean “unreliable”. LLMs aren’t reliable sources of facts, no argument there, but a true lossy encyclopaedia might be. Lossy algorithms don’t just make up and change information; they remove it from places where it might not make a difference to the whole. A lossy encyclopaedia might be one where, for example, you remove the images plus grammatical and phonetic information. Eventually you might compress the information to the point where the entry for “dog” only reads “four-legged creature”, which is correct but not terribly helpful, but you wouldn’t get “space mollusk”.

simonw 3 days ago | parent [-]

I don't think a "true lossy encyclopedia" is a thing that has ever existed.

latexr 3 days ago | parent | next [-]

One could argue that’s what a pocket encyclopaedia (those exist) is. But even if we say they don’t, when you make up a term by mushing two existing words together it helps if the term makes sense. Otherwise, why even use the existing words? You called it a “lossy encyclopedia” and not a “spaghetti ice cream” for a reason, presumably so the term evokes an image or concept in the mind of the reader. If it’s bringing up a different image than what you intended, perhaps it’s not a good term.

I remember you being surprised when the term “vibe coding” deviated from its original intention (I know you didn’t come up with it). But frankly I was surprised at your surprise—it was entirely predictable and obvious how the term was going to be used. The concept I’m attempting to communicate to you is that when you make up a term you have to think not only of the thing in your head but also of the image it conjures up in other people’s minds. Communication is a two-way street.

nyeah 3 days ago | parent [-]

I think you're saying that "pocket encyclopedia" is one definition of "lossy encyclopedia" that may occur to people (or that may get marketed on purpose). But that's a very poor definition of LLMs. And so the danger is that people may lock onto a wildly misleading definition. Am I getting the point?

ianburrell 3 days ago | parent | prev | next [-]

All encyclopedias are lossy. They curate the info they include, only choosing important topics. Wikipedia is lossy. They delete whole articles for irrelevance. They edit changes to make them more concise. They require sources for facts. All good things, but Wikipedia is a subset of human knowledge.

prerok 2 days ago | parent | prev [-]

Since sibling comments all seem to have concentrated on idealistic good intent, I would also like to point out a different side of things.

I grew up in socialism. Since we've transitioned to democracy, I've learned that I have to unlearn some things. Our encyclopedias were not inaccurate, but they were not complete. It's like lying by omission. And as the old saying goes, half-truths are worse than lies.

Whether this would be deemed as a lossy encyclopedia, I don't know. What I am certain of, however, is that it was accurate but omitted important additional facts.

And that is what I see in LLMs as well. Overall, it's accurate, except in cases where an additional fact would alter the conclusion. So, it either could not find arguments with that fact, or it chose to ignore them to give an answer and could be prompted into taking them into account or whatever.

What I do know is that LLMs of today give me the same heebie-jeebies that rereading those encyclopedias of my youth gives me.

baq 3 days ago | parent | prev | next [-]

A lossy encyclopedia which you can talk to and it can look up facts in the lossless version while having a conversation OTOH is... not a bad idea at all, and hundreds of millions of people agree if traffic numbers are to be believed.

(but it isn't and won't ever be an oracle and apparently that's a challenge for human psychology.)

simonw 3 days ago | parent [-]

Completely agree with you - LLMs that have access to search tools and know how to use them (o3, GPT-5, and Claude 4 are particularly good at this) mostly paper over the problems caused by a lossy set of knowledge in the model weights themselves.

But... end users need to understand this in order to use it effectively. They need to know if the LLM system they are talking to has access to a credible search engine and is good at distinguishing reliable sources from junk.

That's advanced knowledge at the moment!

johnecheck 3 days ago | parent | next [-]

From earlier today:

Me: How do I change the language settings on YouTube?

Claude: Scroll to the bottom of the page and click the language button on the footer.

Me: YouTube pages scroll infinitely.

Claude: Sorry! Just click on the footer without scrolling, or navigate to a page where you can scroll to the bottom like a video.

(Video pages also scroll indefinitely through comments.)

Me: There is no footer, you're just making shit up

Claude: [finally uses a search engine to find the right answer]

pbhjpbhj 3 days ago | parent [-]

IME, eventually, after a long time, the scrolling stops and you can get to the footer. YMMV!

gf000 3 days ago | parent | prev [-]

Slightly off topic, but my experience is that they are pretty terrible at using search tools...

They can often reason themselves into some very stupid direction, burning all the tokens for no reason and failing to reply in the end.

checkyoursudo 3 days ago | parent | prev | next [-]

I am sympathetic to your analogy. I think it works well enough.

But it falls a bit short in that encyclopedias, lossy or not, shouldn't affirmatively contain false information. The way I would picture a lossy encyclopedia is that it can misdirect by omission, but it would not change A to ¬A.

Maybe a truthy-roulette encyclopedia?

tomrod 3 days ago | parent [-]

I guarantee every encyclopedia has mistakes.

Jensson 2 days ago | parent [-]

I remember a study where they checked whether Wikipedia had more errors than paper encyclopedias, and they found there were about as many errors in both.

That study ended the "you can't trust Wikipedia" argument: you can't trust anything completely, but Wikipedia is an as-good-as-it-gets secondhand reference.

butlike 3 days ago | parent | prev | next [-]

I don't like the confident hallucinations of LLMs either, but don't they rewrite and add entries in the encyclopedia every few years? Implicitly, that makes your old copy "lossy".

Again, never really want a confidently-wrong encyclopedia, though

rynn 3 days ago | parent | prev [-]

Aren't all encyclopedias 'lossy'? They are all partial collections of information; none have all of the facts.

prerok 2 days ago | parent [-]

There's an important difference as to what is omitted.

An encyclopedia could say "general relativity is how the universe works" or it could say "general relativity and quantum mechanics describe how we understand the universe today and scientists are still searching for universal theory".

Both are short but the first statement is omitting important facts. Lossy in the sense of not explaining details is ok, but omitting swathes of information would be wrong.

TacticalCoder 3 days ago | parent | prev | next [-]

> You never have a clear JPEG of a lamp, compress it, and get a clear image of the Milky Way, then reopen the image and get a clear image of a pile of dirt.

Oh but it's much worse than that: because most LLMs aren't deterministic in the way they operate [1], you can get a pristine image of a different pile of dirt every single time you ask.

[1] There are models where, if you have the "model + prompt + seed", you're at least guaranteed to get the same output every single time. FWIW I use LLMs, but I cannot integrate them into anything I produce when what they output ain't deterministic.
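A minimal sketch of what that footnote means in practice, assuming a local Hugging Face model ("gpt2" is just a placeholder): greedy decoding makes the output a pure function of model + prompt, and fixing the RNG seed reproduces the same sampled output on the same software/hardware stack.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    prompt = tok("The capital of France is", return_tensors="pt").input_ids

    # Greedy decoding: no sampling, so the output depends only on model + prompt
    # (modulo floating-point and hardware differences).
    greedy = model.generate(prompt, do_sample=False, max_new_tokens=10)

    # Seeded sampling: fixing the RNG reproduces the same draw on every run.
    torch.manual_seed(42)
    sampled = model.generate(prompt, do_sample=True, max_new_tokens=10)

    print(tok.decode(greedy[0]))
    print(tok.decode(sampled[0]))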

ACCount37 3 days ago | parent | next [-]

"Deterministic" is overrated.

Computers are deterministic. Most of the time. If you really don't think about all the times they aren't. But if you leave CPU-land and go out into the real world, you don't have the privilege of working with deterministic systems at all.

Engineering with LLMs is closer to "designing a robust industrial process that's going to be performed by unskilled minimum wage workers" than it is to "writing a software algorithm". It's still an engineering problem - but of the kind that requires an entirely different frame of mind to tackle.

latexr 3 days ago | parent [-]

And one major issue is that LLMs are largely being sold and understood more like reliable algorithms than what they really are.

If everyone understood the distinction and the limitations, LLMs wouldn’t be enjoying this level of hype, or leading to teen suicides and people giving themselves centuries-old psychiatric illnesses. If you “go out into the real world”, you learn that people do not understand that LLMs aren’t deterministic and that they shouldn’t blindly accept their outputs.

https://archive.ph/rdL9W

https://archive.ph/20241023235325/https://www.nytimes.com/20...

https://archive.ph/20250808145022/https://www.404media.co/gu...

ACCount37 3 days ago | parent [-]

It's nothing new. LLMs are unreliable, but in the same ways humans are.

latexr 3 days ago | parent | next [-]

But LLMs output is not being treated the same as human output, and that comparison is both tired and harmful. People are routinely acting like “this is true because ChatGPT said so” while they wouldn’t do the same for any random human.

LLMs aren’t being sold as unreliable. On the contrary, they are being sold as the tool that will replace everyone and do a better job at a fraction of the price.

ACCount37 3 days ago | parent [-]

That comparison is more useful than the alternatives. Anthropomorphic framing is one of the best framings we have for understanding what properties LLMs have.

"LLM is like an overconfident human" certainly beats both "LLM is like a computer program" and "LLM is like a machine god". It's not perfect, but it's the best fit at 2 words or less.

krupan 3 days ago | parent | prev [-]

Um, no. They are unreliable at a much faster pace and larger scale than any human. They are more confident while being unreliable than most humans (politicians and other bullshitters aside, most humans admit when they aren't sure about something).

latexr 3 days ago | parent | prev [-]

> you can get a pristine image of a different pile of dirt every single time you ask.

That’s what I was trying to convey with the “then reopen the image” bit. But I chose a different image of a different thing rather than a different image of a similar thing.

energy123 3 days ago | parent | prev [-]

An encyclopaedia also can't win gold medals at the IMO and IOI. So yeah, they're not the same thing, even though the analogy is pretty good.

latexr 3 days ago | parent [-]

Of course they’re not the same thing, the goal of an analogy is not to be perfect but to provide a point of comparison to explain an idea.

My point is that I find the chosen term inadequate. The author made it up from combining two existing words, where one of them is a poor fit for what they’re aiming to convey.