| ▲ | karel-3d 9 days ago |
| I... don't understand how AI is related to video codecs. Maybe it's because I don't understand either video codecs or AI on a deeper level. |
|
| ▲ | tdullien 9 days ago | parent | next [-] |
| Every predictor is a compressor, every compressor is a predictor. If you're interested in this, it's a good idea to read about the Hutter Prize (https://en.wikipedia.org/wiki/Hutter_Prize) and go from there. In general, lossless compression works by predicting the next (letter/token/frame) and then succinctly encoding the difference from that prediction in the data stream. The better you predict, the less you need to encode, and the better you compress. The flip side of this is that all fields of compression have a lot to gain from progress in AI. |
| |
|
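A minimal sketch of that predict-then-encode-the-difference loop (a toy transform, not any real codec's scheme): predict each byte as "same as the previous one" and store only the residuals. On a slowly varying signal, a generic compressor then has far less to encode.

```python
import random
import zlib

def residuals(data: bytes) -> bytes:
    # Predict each byte as "same as the previous byte" and keep only
    # the (mod-256) difference between prediction and reality.
    prev, out = 0, bytearray()
    for b in data:
        out.append((b - prev) % 256)
        prev = b
    return bytes(out)

def reconstruct(res: bytes) -> bytes:
    # Invert the transform: replay the predictor and add residuals back.
    prev, out = 0, bytearray()
    for r in res:
        prev = (prev + r) % 256
        out.append(prev)
    return bytes(out)

# A slowly varying random walk, standing in for audio/video samples.
random.seed(0)
v, samples = 128, bytearray()
for _ in range(10_000):
    v = (v + random.randint(-2, 2)) % 256
    samples.append(v)
signal = bytes(samples)

raw_size = len(zlib.compress(signal))
res_size = len(zlib.compress(residuals(signal)))

assert reconstruct(residuals(signal)) == signal  # lossless round trip
assert res_size < raw_size  # better prediction => fewer bits to store
```

A better predictor (say, one that models the step distribution, or a neural net as in NNCP) would shrink the residual stream further; the round trip stays lossless either way.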
| ▲ | jl6 9 days ago | parent | prev | next [-] |
| It has long been recognised that the state of the art in data compression has much in common with the state of the art in AI, for example: http://prize.hutter1.net/ https://bellard.org/nncp/ |
| |
| ▲ | ddtaylor 9 days ago | parent [-] |
| Some view these as so interconnected that they will say LLMs are "just" compression. |
| ▲ | pjc50 8 days ago | parent [-] |
| Which is an interesting view when applied to intellectual property. I think it's relatively uncontroversial that an MP4 file which "predicts" a Disney movie it was "trained on" is a derived work. Suppose you had an LLM trained on a fairly small set of movies, from which you could produce any one of them on demand; would that be treated as a derived work? And if a predictor/compressor LLM were trained on all the movies in the world, would that not also be infringement? |
| ▲ | mr_toad 8 days ago | parent [-] |
| MP4s are compressed data, not a compression algorithm. An MP4 (or any compressed data) is not a "prediction"; it is the difference between what was predicted and what you're trying to compress. An LLM is (or can be used as) a compression algorithm, but it is not compressed data. It is possible for an overfit model to exactly predict (or reproduce) a particular output, but it's not possible for one to reproduce all the outputs, due to the pigeonhole principle. To reiterate: LLMs are not compressed data. |
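The pigeonhole argument here can be made concrete by counting: there are fewer short bit strings than long ones, so no lossless scheme can shrink every input.

```python
# There are 2**n distinct bit strings of length n, but only 2**n - 1
# bit strings of length strictly less than n (lengths 0 .. n-1). So a
# lossless compressor that shrinks some inputs must leave others the
# same size or larger -- it cannot "contain" all outputs at once.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))

assert shorter_outputs == inputs - 1
assert shorter_outputs < inputs
```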
|
|
|
|
| ▲ | bjoli 9 days ago | parent | prev | next [-] |
| It is like upscaling. If you could train an AI to "upscale" your audio or video, you could get away with sending a lot less data. This is already being done for audio, with quite amazing results. |
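A toy version of that idea, with linear interpolation standing in for a learned upscaler (a deliberate oversimplification): transmit only every 4th sample and reconstruct the rest at the receiver.

```python
# Send 1/4 of the samples, "upscale" on the other end. A trained model
# would replace the linear interpolation used here.
signal = [i * 0.1 for i in range(100)]  # stand-in for audio samples
sent = signal[::4]                      # 4x less data on the wire

upscaled = []
for a, b in zip(sent, sent[1:]):
    for t in range(4):
        upscaled.append(a + (b - a) * t / 4)
upscaled.append(sent[-1])

max_err = max(abs(x - y) for x, y in zip(signal, upscaled))
assert len(sent) == 25   # 100 samples -> 25 transmitted
assert max_err < 1e-9    # this toy linear signal reconstructs exactly
```

Real signals aren't linear, of course, which is exactly the gap a trained upscaler is meant to fill: the better the receiver's model, the less the sender has to transmit.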
|
| ▲ | Retr0id 9 days ago | parent | prev | next [-] |
| AI and data compression are the same problem, rephrased. |
| |
| ▲ | oblio 9 days ago | parent [-] |
| Which makes Silicon Valley, the TV show, even funnier. |
| ▲ | chisleu 8 days ago | parent [-] |
| Holy shit, it does. The scene where he invents the new compression algorithm basically foreshadowed the gooning that followed local LLM availability. |
|
|
|