apwell23 8 hours ago

what's an example of loss?

aewens 5 hours ago | parent | next [-]

Lossy vs. lossless compression comes down to whether you can get a 1:1 copy of the original data back after compressing and then decompressing it.

A simple example: say you have 4 bits of data and a compression algorithm that turns it into 2 bits. If your dataset only ever contains 0000, 0011, 1100, and 1111, then this can technically be considered lossless compression, because we can always reconstruct the exact original data (e.g. 0011 compresses to 01 and decompresses back to 0011, 1100 compresses to 10 and decompresses back to 1100, etc.). However, if our dataset later included 1101, it would also get compressed to 10, and the scheme is now “lossy”: 10 decompresses to 1100, so that last bit was “lost”.
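
To make that concrete, here is a minimal sketch of that 4-bit-to-2-bit scheme (my own illustration of the scheme described above: keep the first bit of each pair):

    def compress(bits: str) -> str:
        # Keep only the first bit of each pair: "0011" -> "01", "1100" -> "10"
        return bits[0] + bits[2]

    def decompress(code: str) -> str:
        # Duplicate each bit back out: "01" -> "0011", "10" -> "1100"
        return code[0] * 2 + code[1] * 2

    # Lossless on the restricted dataset: every value round-trips exactly.
    for original in ["0000", "0011", "1100", "1111"]:
        assert decompress(compress(original)) == original

    # Lossy once 1101 shows up: it also compresses to "10", but comes back
    # as "1100" -- the last bit was lost.
    print(decompress(compress("1101")))  # prints "1100"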

An LLM is lossy compression because it lacks the capacity to replicate all of its input data 1:1, 100% of the time. It can get quite close in some cases, sure, but it is not perfect every time, so it is considered “lossy”.

fxj 5 hours ago | parent | prev [-]

How well can you recreate an image that is described by words? Obviously not bit by bit and pixel by pixel. You get something that resembles the original, but not an exact copy.

apwell23 4 hours ago | parent [-]

you can recreate the original exactly with the right prompt

sfink 2 hours ago | parent [-]

Yes. For example, you could always say "give me a jpeg image file that is encoded as the bytes 255, 216, 255, 224, 0, 16, 74, ...". But that's just pointing out that the input to your "LLM" function includes the prompt. It's f(model, prompt) = response.
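
A toy sketch of that f(model, prompt) = response point (my illustration, not the commenter's code): if the prompt already spells out every byte, a "model" that just echoes them back reproduces the file perfectly while contributing no information of its own.

    # The bytes quoted above (the start of a JPEG header).
    JPEG_PREFIX = bytes([255, 216, 255, 224, 0, 16, 74])

    def f(model, prompt: bytes) -> bytes:
        # The model is ignored entirely; everything comes from the prompt.
        return prompt

    response = f(model=None, prompt=JPEG_PREFIX)
    assert response == JPEG_PREFIX
    # The prompt is at least as long as the output, so nothing was compressed
    # or "recalled" -- the information all lived in the prompt.
    print(len(JPEG_PREFIX), "bytes of prompt ->", len(response), "bytes out")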

It's not straightforward to prove that models have to be lossy. Sure, the training data is much larger than the model, but there is a huge amount of redundancy in the training data. You would have to compare a hypothetically optimal lossless compression of the training data to the size of the model to prove that it must be lossy. And yet, it's intuitively obvious that even the best possible lossless compression (i.e., the Kolmogorov complexity of the training data) is going to be vastly larger than the biggest models we have today.
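
Stated as a counting bound (my paraphrase, not part of the original comment): a model plus its decoding procedure that fits in b bits is one of at most 2^b programs, so it can single out at most 2^b distinct datasets. In particular,

    \[
      b < K(D) \implies \text{no } b\text{-bit model-plus-decoder can reproduce } D \text{ exactly},
    \]

where K(D) is the Kolmogorov complexity of the training data D, i.e. the length of the shortest program that outputs D.
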

You can always construct toy examples where this isn't the case. For example, you could just store all of the training data in your model, and train another part of the model to read it out. But that's not an LLM anymore. Similarly, you could make an LLM out of synthetic redundant data and it could achieve perfect recall. (Unless you're clever with how you generate it, though, any off the shelf compression algorithm is likely to produce something much much smaller.)