Remix.run Logo
zozbot234 7 hours ago

This seems right to me. If you ask a LLM to derive a spec that has no expressive element of the original code (a clean-room human team can carefully verify this), and then ask another instance of the LLM (with fresh context) to write out code from the spec, how is that different from a "clean room" rewrite? The agent that writes the new code only ever sees the spec, and by assumption (the assumption that's made in all clean room rewrites) the spec is purely factual with all copyrightable expression having been distilled out. But the "deriving the spec (and verifying that it's as clean as possible)" is crucial and cannot be skipped!

sigseg1v 7 hours ago | parent | next [-]

How would a team verify this for any current model? They would have to observe and control all training data. In practice, any currently available model that is good enough to perform this task likely fails the clean room criteria due to having a copy of the source code of the project it wants to rewrite. At that point it's basically an expensive lossy copy paste.

zozbot234 7 hours ago | parent [-]

You can always verify the output. Unless the problem being solved really is exceedingly specific and non-trivial, it's at least unlikely that the AI will rip off recognizable expression from the original work. The work may be part of the training but so are many millions of completely unrelated works, so any "family resemblance" would have to be there for very specific reasons about what's being implemented.

oytis 7 hours ago | parent | prev | next [-]

It requires the original project to not be in the training data for the model for it to be a clean room rewrite

zozbot234 7 hours ago | parent [-]

That only matters if expression of the original project really does end up in the rewrite, doesn't it? This can be checked for (by the team with access to the code) and it's also quite unlikely at least. It's not trivial at all to have an LLM replicate their training verbatim: even when feasible (the Harry Potter case, a work that's going to be massively overweighted in training due to its popularity) it takes very specific prompting and hinting.

oytis 7 hours ago | parent | next [-]

> That only matters if expression of the original project really does end up in the rewrite, doesn't it?

No, I don't think so. I hate comparing LLMs with humans, but for a human being familiar with the original code might disqualify them from writing a differently-licensed version.

Anyway, LLMs are not human, so as many courts confirmed, their output is not copyrightable at all, under any license.

toyg 7 hours ago | parent [-]

Uh, this is just a curiosity, but do you have a reference for that last argument?

If true, it would mean most commercial code being developed today, since it's increasingly AI-generated, would actually be copyright-free. I don't think most Western courts would uphold that position.

duskdozer 6 hours ago | parent [-]

https://news.ycombinator.com/item?id=47232289

pseudalopex 4 hours ago | parent [-]

The headline was misleading. The courts avoided to decide what Thaler could have copyrighted because he said he was not the author.

vkou 6 hours ago | parent | prev [-]

> That only matters if expression of the original project really does end up in the rewrite, doesn't it?

If that were the case, nobody would bother with clean-room rewrites.

nneonneo 6 hours ago | parent | prev [-]

Somewhat annoyingly, there's been research that suggests that models can pass information to each other via (effectively) steganographic techniques - specific but apparently harmless choices of tokens, wordings, and so on; see https://arxiv.org/abs/1712.02950 and https://alignment.anthropic.com/2025/subliminal-learning/ for some simple examples.

While it feels unlikely that a simple "write this spec from this code" + "write this code from this spec" loop would actually trigger this kind of hiding behaviour, an LLM trained to accurately reproduce code from such a loop definitely would be capable of hiding code details within the spec - and you can't reasonably prove that the frontier LLMs have not been trained to do so.