| ▲ | halxc 3 hours ago |
| We all saw verbatim copies in the early LLMs. They "fixed" it by implementing filters that trigger rewrites on blatant copyright infringement. It is a research topic for heaven's sake: https://arxiv.org/abs/2504.16046 |
|
| ▲ | RyanCavanaugh 3 hours ago | parent | next [-] |
| The internet is hundreds of billions of terabytes; a frontier model is maybe half a terabyte. While they are certainly capable of doing some verbatim recitations, this isn't just a matter of teasing out the compressed C compiler written in Rust that's already on the internet (where?) and stored inside the model. |
| |
| ▲ | philipportner an hour ago | parent | next [-] | | This seems related; it may not be a codebase, but they were able to extract near-verbatim books out of Claude Sonnet. https://arxiv.org/pdf/2601.02671 > For Claude 3.7 Sonnet, we were able to extract four whole books near-verbatim, including two books under copyright in the U.S.: Harry Potter and the Sorcerer’s Stone and 1984 (Section 4). | | |
| ▲ | Aurornis 2 minutes ago | parent [-] | | Their technique really stretched the definition of extracting text from the LLM. They used a lot of different techniques to prompt with actual text from the book, then asked the LLM to continue the sentences. I only skimmed the paper but it looks like there was a lot of iteration and repetitive trials. If the LLM successfully guessed words that followed their seed, they counted that as "extraction". They had to put in a lot of the actual text to get any words back out, though. The LLM was following the style and clues in the text. You can't literally get an LLM to give you books verbatim. These techniques always involve a lot of prompting and continuation games. |
| |
| ▲ | seba_dos1 15 minutes ago | parent | prev | next [-] | | > The internet is hundreds of billions of terabytes; a frontier model is maybe half a terabyte. The lesson here is that the Internet compresses pretty well. | |
| ▲ | mft_ 33 minutes ago | parent | prev [-] | | (I'm not needlessly nitpicking, as I think it matters for this discussion) A frontier model (e.g. latest Gemini, GPT) is likely several-to-many times larger than 500GB. Even DeepSeek V3 was around 700GB. But your overall point still stands, regardless. |
|
|
| ▲ | Aurornis 9 minutes ago | parent | prev | next [-] |
| Simple logic will demonstrate that you can't fit every document in the training set into the parameters of an LLM. Citing a random arXiv paper from 2025 doesn't mean "they" used this technique. It was someone's paper that they uploaded to arXiv, which anyone can do. |
|
| ▲ | ben_w 2 hours ago | parent | prev | next [-] |
| We saw partial copies of large or rare documents, and full copies of smaller widely-reproduced documents, not full copies of everything. An e.g. 1 trillion parameter model is not a lossless copy of a ten-petabyte slice of plain text from the internet. The distinction may not have mattered for copyright laws if things had gone down differently, but the gap between "blurry JPEG of the internet" and "learned stuff" is more obviously important when it comes to e.g. "can it make a working compiler?" |
| |
| ▲ | tza54j 2 hours ago | parent | next [-] | | We are here in a clean-room implementation thread, and verbatim copies of entire works are irrelevant to that topic. It is enough to have read even parts of a work for something to be considered a derivative. I would also argue that language models that need gargantuan amounts of training material in order to work can, by definition, only output derivative works. It does not help that certain people in this thread (not you) edit their comments to backpedal and make the follow-up comments look illogical, but that is in line with their sleazy post-LLM behavior. | | |
| ▲ | ben_w an hour ago | parent [-] | | > It is enough to have read even parts of a work for something to be considered a derivative. For IP rights, I'll buy that. Not as important when the question is capabilities. > I would also argue that language models that need gargantuan amounts of training material in order to work can by definition only output derivative works. For similar reasons, I'm not going to argue against anyone saying that all machine learning today doesn't count as "intelligent": it is perfectly reasonable to define "intelligence" as the inverse of how many examples are needed. ML partially makes up for being (by this definition) thick as an algal bloom by being stupid so fast it actually can read the whole internet. |
| |
| ▲ | antirez 2 hours ago | parent | prev | next [-] | | Besides, the fact that an LLM may recall parts of certain documents, like I can recall the incipits of certain novels, does not mean that when you ask the LLM to do other kinds of work, work that is not recall, it will mix such things in verbatim. The LLM knows what it is doing in a variety of contexts, and uses that knowledge to produce new material. The fact that it is bitter for many people that LLMs can do things that replace humans does not mean (and it is not true) that this happens mainly via memorization. What coding agents can do today cannot be explained by memorization of verbatim material. So it's not a matter of copyright. Certain folks are fighting the wrong battle. | | |
| ▲ | shakna 36 minutes ago | parent [-] | | During a "clean room" implementation, the implementor is generally selected for not being familiar with the workings of what they're implementing, and banned from researching it. Because historically it _has_ been enough: if you can recall things, your implementation ends up not being "clean room", and gets trashed by the lawyers who get involved. I mean... It's in the name. > The term implies that the design team works in an environment that is "clean" or demonstrably uncontaminated by any knowledge of the proprietary techniques used by the competitor. If it can recall... Then it is not a clean room implementation. Fin. |
| |
| ▲ | philipportner an hour ago | parent | prev | next [-] | | Granted, these are some of the most widely spread texts, but just fyi: https://arxiv.org/pdf/2601.02671 > For Claude 3.7 Sonnet, we were able to extract four whole books near-verbatim, including two books under copyright in the U.S.: Harry Potter and the Sorcerer’s Stone and 1984 (Section 4). | | |
| ▲ | ben_w an hour ago | parent [-] | | Already aware of that work, that's why I phrased it the way I did :) Edit: actually, no, I take that back, that's just very similar to some other research I was familiar with. |
| |
| ▲ | boroboro4 2 hours ago | parent | prev [-] | | While I mostly agree with you, it's worth noting modern LLMs are trained on 10–30T tokens, which is quite comparable to their size (especially given how compressible the data is) |
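The ratio being debated here can be put as back-of-envelope arithmetic. All figures below are illustrative assumptions (parameter count, token count, and bytes-per-token vary widely between models and corpora):

```python
# Back-of-envelope: raw training text vs. raw weight storage.
# All numbers are assumed round figures, not measurements of any real model.
params = 1e12            # assumed 1T-parameter frontier model
bytes_per_param = 2      # bf16 weights
model_bytes = params * bytes_per_param

tokens = 15e12           # assumed ~15T training tokens
bytes_per_token = 4      # rough average for plain text
corpus_bytes = tokens * bytes_per_token

ratio = corpus_bytes / model_bytes
print(f"model: {model_bytes/1e12:.0f} TB, corpus: {corpus_bytes/1e12:.0f} TB")
print(f"raw corpus is ~{ratio:.0f}x the raw weight size")
```

Under these assumptions the raw text is only ~30x the weight size before any compression, which is why "the training data doesn't fit" and "plenty of specific passages can still fit" are both true at once.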
|
|
| ▲ | soulofmischief 2 hours ago | parent | prev [-] |
| The point is that it's a probabilistic knowledge manifold, not a database. |
| |