iepathos | 2 hours ago
This is essentially "License Laundering as a Service." The "firewall" they describe is an illusion because the contamination happens at the training phase, not the inference phase. You can't claim independent creation when your "independent developer" (the commercial LLM) already has the original implementation's patterns and edge cases baked into its weights. To really do this, they would need to train LLMs from scratch with no exposure whatsoever to the open-source code they might be asked to reproduce. Those models would in turn be terrible at coding, given how much of the training corpus is open-source code.
john_strinlai | 2 hours ago
> The 'Firewall' they describe is an illusion because [...]

It is an illusion because this is a satire site.
gwern | 2 hours ago
The solution here seems to be to impose some constraint or requirement that makes literal copying impossible (remember, copyright governs copies; it doesn't govern ideas or algorithms — that would be patents, which essentially no open source software has), or where any "copying" from vaguely remembered pretraining code is at such an abstract, indirect level that it is transformative and thus safe. For example, the Anthropic Rust C compiler could hardly have copied GCC or any of the many C compilers it surely trained on, because then it wouldn't have spat out reasonably idiomatic and natural-looking Rust in a differently organized codebase. Good news for Rust and Lean, I guess, as it seems like everyone these days is looking for an excuse to rewrite everything into those for either speed or safety or both.
briandw | an hour ago
Obviously satire, but it will clearly be what happens in the future (predicting here, not endorsing this practice). We can train a new LLM from scratch on code generated by "contaminated" LLMs. We can then audit all the training data used and demonstrate that the original source wasn't in the training data. Therefore the cleanroom implementation holds. Current LLM training relies less and less on human-generated code — just look at the open source models from China, which rely heavily on distilling from other models. One additional point: exposure to the original source isn't enough to show infringement. Linus looked at UNIX source before writing Linux.
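A minimal sketch of what such a training-data "audit" might look like, under the naive assumption that exact-content matching is enough. Everything here (function names, the hash-comparison approach) is illustrative, not any real provider's process — and note that exact hashing only catches literal copies, which is precisely why critics argue it can't detect patterns distilled through an intermediate model.

```python
# Hypothetical sketch: verify that no file from the original source
# appears verbatim in a training corpus, via content hashing.
# This catches literal copies only, not paraphrased or distilled code.
import hashlib

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def audit(original_files: dict[str, str], training_docs: list[str]) -> list[str]:
    """Return names of original files whose exact contents also
    appear as documents in the training corpus (empty = 'clean')."""
    corpus_hashes = {sha256(doc) for doc in training_docs}
    return [name for name, text in original_files.items()
            if sha256(text) in corpus_hashes]

# An empty result is the "clean" claim the comment relies on.
original = {"gpl_module.c": "int secret_sauce(void) { return 42; }"}
corpus = ["def hello():\n    return 'world'\n"]
print(audit(original, corpus))  # -> []
```

The weakness is visible in the design: a model distilled from a contaminated teacher can emit functionally identical code with different tokens, and no hash comparison will flag it.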
neilv | an hour ago
I think this site is either satire, or serious but with a certain kind of humor in which both they and the reader know they're lying (but it's in everyone's interest to play along). They do say this:

> Is this legal? / our clean room process is based on well-established legal precedent. The robots performing reconstruction have provably never accessed the original source code. We maintain detailed audit logs that definitely exist and are available upon request to courts in select jurisdictions.

Unless they're rejecting almost all of the open source packages submitted by customers, because those packages are in the training set of the foundation model they use, this is really the opposite of a clean room.
littlestymaar | an hour ago
This is definitely a parody though, not a real service.
ActivePattern | 2 hours ago
[flagged]