martin-t 6 hours ago
This ruling is IMO/IANAL based on lawyers and judges not understanding how LLMs work internally, falling for the marketing campaign that calls them "AI" without grasping the full implications.

LLM creation ("training") involves detecting and compressing patterns in the input. Inference generates statistically probable output based on how closely the prompt's patterns match those found in the "training" input. Computers don't learn or have ideas; they always operate on representations. It's nothing more than any other mechanical transformation, and it should not erase copyright any more than synonym substitution does.
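The "statistically probable continuation" claim can be sketched with a toy bigram model (purely illustrative, not how a real LLM works; the corpus and function names here are made up for the example): "training" just counts which word follows which, and "inference" emits the most probable next word.

```python
from collections import defaultdict, Counter

def train(corpus):
    # "Training": count which word follows which in the input.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=4):
    # "Inference": repeatedly emit the statistically most probable
    # next word given the last one.
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = ["the cat sat on the mat", "the cat ate the fish"]
model = train(corpus)
print(generate(model, "the"))  # reproduces a pattern from the corpus
```

The output is entirely determined by frequency patterns in the input, which is the mechanical-transformation point the comment is making (real models replace the counting with learned, compressed representations).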
supern0va 5 hours ago
> LLM-creation ("training") involves detecting/compressing patterns of the input.

There's a pretty compelling argument that this is essentially what we do, and that what we think of as creativity is just copying, transforming, and combining ideas. LLMs are interesting because that compression forces the model to distill the world into its constituent parts and learn the relationships between ideas.

While it's absolutely possible (or even likely, for certain prompts) that models regurgitate text very similar to their inputs, that is not usually what seems to be happening. They appear to be little remix engines that fit the pieces together to solve whatever you're asking for, and we have some evidence that models can accomplish things not represented in their training sets.

Kirby Ferguson's video on this is pretty great: https://www.youtube.com/watch?v=X9RYuvPCQUA
timmmmmmay 6 hours ago
Fortunately, you aren't only operating on representations, right? Lemme check my Schopenhauer right quick...