mlinsey a day ago

Certainly if a human wrote code that solved this problem, and a second human copied and tweaked it slightly for their use case, we would have a word for them.

Would we use the same word if two different humans wrote code that solved two different problems, but one part of each problem was somewhat analogous to a different aspect of a third human's problem, and the third human took inspiration from those parts of both solutions to create code that solved a third problem?

What if it were ten different humans writing ten different-but-related pieces of code, and an eleventh human piecing them together? What if it were 1,000 different humans?

I think "plagiarism", "inspiration", and just "learning from" fall on some continuous spectrum. There are clear differences when you zoom out, but they are in degree, and it's hard to set a hard boundary. The key is just to make sure we have laws and norms that provide sufficient incentive for new ideas to continue to be created.

nitwit005 a day ago | parent | next [-]

Ask for something like "a first person shooter using software rendering", then search GitHub for the names of the rendering functions. Using Copilot, I found code simply lifted from implementations of Doom, except that "int" was replaced with "int32_t" and similar.
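
A minimal sketch of that search step, assuming a personal access token in GITHUB_TOKEN (the GitHub code search API requires authentication). R_DrawColumn is a real function name from Doom's r_draw.c; the exact query qualifiers are my assumption:

    import os
    import requests

    def search_code(query: str, token: str) -> list[dict]:
        # Query the GitHub code search API and return matching items.
        resp = requests.get(
            "https://api.github.com/search/code",
            params={"q": query, "per_page": 10},
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.github+json",
            },
        )
        resp.raise_for_status()
        return resp.json().get("items", [])

    # R_DrawColumn is a distinctive name from the Doom renderer (r_draw.c).
    for item in search_code("R_DrawColumn language:c", os.environ["GITHUB_TOKEN"]):
        print(item["repository"]["full_name"], item["path"])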

It's also fun to tell Copilot that the code will violate a license. It will seemingly always tell you it's fine. Safe legal advice.

martin-t a day ago | parent | next [-]

And this is just the stuff you notice.

1) Verbatim copying is first-order plagiarism.

2a) Second-order plagiarism of written text would be replacing words with synonyms, or taking a book paragraph by paragraph and rephrasing each one in your own words. Yes, it might fool automated checkers, but the structure would still be a copy of the original book. And most importantly, it would not contain any new information. No new positive-sum work was done. It would have no additional value.

Before LLMs, almost nobody did this because the chance that it would help in a lawsuit wasn't worth the amount of work. Now it is. But LLMs can do "better":

2b) A different kind of second-order plagiarism is using multiple sources and plagiarizing each of them only in part. Find multiple books on the same topic, take 1 chapter from each and order them in a coherent manner. Make it more granular. Find paragraphs or phrases which fit into the structure of your new book but are verbatim from other books. See how granular you can make it.

The trick here is that doing this by hand is more work than just writing your own book. So nobody did it, and copyright law does not really address it well. But with LLMs, it can be automated: you can literally instruct an LLM to do this and it will do it cheaper than any human could (a toy sketch of this stitching follows below). However, how LLMs work internally is different still:

n) Higher-order plagiarism is taking multiple source books, identifying patterns, and then reproducing them in your "new" book.

If the patterns are sufficiently complex, nobody will ever be able to prove what specifically you did. What previously took creative human work now became a mechanical transformation of input data.
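
To illustrate how mechanical the 2b-style stitching is, a toy sketch (the placeholder texts stand in for real sources, and the round-robin ordering is arbitrary; a real attempt would pick fragments that fit a coherent structure):

    from itertools import zip_longest

    # Placeholder "source books"; in the described scheme these would be
    # real texts by different authors.
    sources = [
        "A1.\n\nA2.\n\nA3.",
        "B1.\n\nB2.\n\nB3.",
        "C1.\n\nC2.\n\nC3.",
    ]

    def stitch(texts):
        # Split each source into paragraphs, then interleave them round-robin.
        # Every paragraph in the output is verbatim from some source; no new
        # information is added anywhere.
        split = [t.split("\n\n") for t in texts]
        merged = []
        for row in zip_longest(*split):
            merged.extend(p for p in row if p is not None)
        return "\n\n".join(merged)

    print(stitch(sources))  # A1, B1, C1, A2, B2, ...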

The point is that this ability to detect and reproduce patterns is an impressive innovation, but it's built on top of the work of hundreds of millions[0] of humans, used without their consent. The work done by those employed by the LLM companies is minuscule in comparison. Yet all of the reward goes to them.

Not to mention that LLMs completely defeat the purpose of (A)GPL. If you can take AGPL code and pass it through a sufficiently complex mechanical transformation such that the output does the same thing but copyright no longer applies, then free software is dead. No more freedom to inspect and modify.

[0]: Github alone has 100 million users ( https://expandedramblings.com/index.php/github-statistics/ ) and we have reason to believe all of their data was used in training.

jacquesm a day ago | parent | next [-]

If a human did 2a or 2b, we would consider it a larger infraction than (1), because it shows intent to obfuscate the origins.

As for your 'free software is dead' argument: I think it is worse than that. It takes away the one payment that free software authors get: recognition. If a commercial entity can take the code, obfuscate it, pass it off as their own copyrighted work, and then embrace and extend it, that is the worst possible outcome.

martin-t a day ago | parent [-]

> shows intent to obfuscate the origins

Good point. Reminds me of how if you poison one person, you go to prison, but when a company poisons thousands, it gets a fine... sometimes.

> it takes away the one payment that free software authors get: recognition

I keep flip-flopping on this. I did most of my open source work not caring about recognition but about the principles of GPL and later AGPL. However, I came to realize it was a mistake - people don't judge you by the work you actually do but by the work you appear to do. I have zero respect for people who do something just for the approval of others but I am aware of the necessity of making sure people know your value.

One thing is certain: loss of credit/recognition affects all open source code, while loss of user rights (e.g. to inspect and modify) affects only the subset under (A)GPL.

Both are bad in their own right.

fc417fc802 15 hours ago | parent | prev [-]

You make several good points, and I appreciate that they appear well thought out.

> What previously took creative human work now became a mechanical transformation of input data.

At which point I find myself wondering if there's actually a problem. If it was previously permitted due to the presence of creative input, why should automating that process change the legal status? What justifies treating human output differently?

> then free software is dead. No more freedom to inspect and modify.

It seems to me that depends on the ideological framing. Consider a (still entirely hypothetical) world where anyone can receive approximately any software they wish with little more than a Q&A session with an expert AI agent. Rather than free software being dead, such a scenario would appear to obviate the vast majority of needs that free software sets out to serve in the first place.

It seems a bit like worrying that free access to a comprehensive public transportation service would kill off a ride sharing service. It probably would, and the end result would also probably be a net benefit to humanity.

jacquesm 10 hours ago | parent | next [-]

> At which point I find myself wondering if there's actually a problem. If it was previously permitted due to the presence of creative input, why should automating that process change the legal status? What justifies treating human output differently?

Copyright law: an automated transformation preserves copyright. It makes the output a derivative work of the input.

fc417fc802 5 hours ago | parent [-]

Yes, that's what the law currently says. I'm asking if it ought to say that in this specific scenario.

Previously there was no way for a machine to do large swaths of things that have only recently become possible. Thus a law predicated on the assumption that a machine can't do certain things might need to be revisited.

martin-t 10 hours ago | parent | prev [-]

> What justifies treating human output differently?

Human time is inherently valuable, computer time is not.

One angle:

The real issue is how this is made possible. Imagine an AI created by a lone genius or a team of really good programmers and researchers just sitting down and writing the code. From today's POV, it would be almost unimaginably impressive, but that is how most people envisioned AI being created a few decades ago (and maybe as recently as 5 years ago). These people would obviously deserve all the credit for their invaluable work and all the income from people using their work. (At least until another team does the same; then it's competition as normal.)

But that's not how AI is being created. What the programmers and researchers really do is create a highly advanced lossy compression algorithm, which then takes nearly all publicly available human knowledge (disregarding licenses/consent) and creates a model of it which can reproduce both the first-order data (duh) and the higher-order patterns in it (cool).

Do they still deserve all the credit and all the income? What if there are 1k researchers and programmers working on the compression algorithm (= training algorithm) and 1B people whose work ("content") is compressed by it (= used to train it)? I will freely admit that the work done to build the algorithm is higher skilled than most of the work done by the 1B people. Maybe even 10x or 100x more expensive. But if you multiply those numbers (1k * 100 vs 1B), you have to come to the conclusion that the 1B people deserve the vast majority of the credit and the vast majority of the income generated by the combined work. (And notice that when another team creates a competing model based on the same data, the share of the 1B stays the same and the 1k have to compete for their fraction.)
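
Plugging in those (deliberately rough, hypothetical) numbers, a quick back-of-the-envelope version of the split:

    # 1k researchers weighted 100x vs 1B contributors weighted 1x.
    researchers, contributors = 1_000, 1_000_000_000
    weight = 100  # assumed skill/value multiplier for the researchers

    researcher_units = researchers * weight  # 100,000
    contributor_units = contributors * 1     # 1,000,000,000
    total = researcher_units + contributor_units

    print(f"researchers' share:  {researcher_units / total:.4%}")   # ~0.01%
    print(f"contributors' share: {contributor_units / total:.4%}")  # ~99.99%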

Another angle:

If you read a book, learn something from it, and then apply the knowledge to make money, you currently don't pay a share to the author of the book. But you paid a fixed price for the book, hopefully.

We could design a system where books are available for free, we determine how much the book helped you make that money, and you pay a proportional share to the author. This is not as crazy as it might sound: when you cause an injury to someone, a court will determine how much each party involved is liable, and there are complex rules (e.g. https://en.wikipedia.org/wiki/Joint_and_several_liability) determining the subsequent exchange of money. We could in theory do the same for material you learn from (though the fractions would probably be smaller than 1%).

We don't, because it would be prohibitively time consuming, very invasive, and often unprovable unless you (accidentally) praise a specific blog post or say you learned a technique from a book. Instead, we use this thing called market capitalism: the author sets a price, people either buy the book or not (depending on whether they think it's worth it for them), some authors make no money as a result, some make a lot, and we (choose to) believe that in aggregate, authors are fairly compensated.

Even if your blog is available for anyone to read freely, you get compensated in alternative ways by people crediting you and/or by building an audience you can influence to a degree.

With LLMs, there is no way to get the companies training the models to credit you or build you an audience. And even if they pay for the books they use for training, I don't believe they pay enough. The price was determined before the possibility of LLM training was known to the author, and the value produced by a sufficiently sophisticated AI, perhaps AGI (which they openly claim to want to create), is effectively unlimited. The only way to compensate authors fairly is to periodically evaluate how much revenue the model attracted and pay a dividend to the authors as long as that model continues to be used.

Best of all, unlike with humans, the inner workings of a computer model, even a very complex one, can be analyzed in their entirety. So it should be possible to track (fractional) attribution throughout the whole process. There's just no incentive for the companies to invest in the tooling.
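
A minimal sketch of what such a dividend could look like, assuming (purely hypothetically) that per-author attribution fractions were available; all names and numbers here are made up:

    # Hypothetical dividend scheme: given a period's model revenue and
    # per-author attribution fractions (which no current tooling produces),
    # pay each author their proportional share.
    revenue = 1_000_000.00  # model revenue for the period, in dollars

    attribution = {  # fraction of the model's output attributed to each author
        "author_a": 0.00002,
        "author_b": 0.00015,
        # ... hundreds of millions more entries in practice
    }

    payouts = {author: revenue * frac for author, frac in attribution.items()}
    for author, amount in payouts.items():
        print(f"{author}: ${amount:.2f}")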

---

> approximately any software they wish with little more than a Q&A session with an expert AI agent

Making software is not just about writing code, it's about making decisions. Not just understanding the problem and designing a solution, but also picking tradeoffs and preferences.

I don't think most people are gonna do this, just like most people today don't go to a program's settings and tweak every slider/checkbox/dropdown to their liking. They will at most say they want something exactly like another program with a few changes. And then it's clearly based on that original program and all the work performed to find out the users' preferences/likes/dislikes/workflows, which remain unchanged.

But even if they genuinely recreate everything, then if it's done by an LLM, it's still based on work of others as per the argument above.

---

> the end result would also probably be a net benefit to humanity.

Possibly. But in the case of software fully written by sufficiently advanced LLMs, that net benefit would be created only by using the work of a hundred million or possibly a billion people for free and without (quite often against) their consent.

Forced work without compensation is normally called slavery. (The only difference is that our work has already been done and we're "only" forced to not be able to prevent LLM companies from using it despite using licenses which by their intent and by the logic above absolutely should.)

The real question is how to achieve this benefit without exploiting people.

And don't forget such a model will not be offered for free to everyone as a public good. Not even to those people whose data was used to train it. It will be offered as a paid service. And most of the revenue won't even go to the researchers and programmers who worked on the model directly and who made it possible. It will go to the people who contributed the least (often zero) technical work.

---

This comment (and its GP), which contains arguments I have not seen anywhere else, was written over an hour-long train ride. I could have instead worked remotely to make more than enough money to pay for the train ride. Instead, I write this training data, which will be compressed and some patterns from it reproduced, allowing people I will never know and who will never know me to make an amount of money I have no chance of quantifying, while I get nothing from it. Now I have to work some other hour to pay for the train ride. Make of that what you will.

fc417fc802 4 hours ago | parent | next [-]

Human time is certainly valuable to a particular human. However, if I choose to spend time doing something that a machine can do, people will not generally choose to compensate me more for it just because it was me doing it instead of a machine.

I think it's worth remembering that IP law is generally viewed (at least legally) as existing for the net benefit of society as opposed to for ethical reasons. Certainly many authors feel like they have (or ought to have) some moral right to control their work but I don't believe that was ever the foundation of IP law.

Nor do I think it should be! If we are to restrict people's actions (e.g. copying), then it should be for a clear and articulable net societal benefit. The value proposition of IP law is that it prevents degenerate behavior that would otherwise stifle innovation. My question is thus: how do these AI developments fit into that?

So I completely agree that (for example) laundering a full work more or less verbatim through an AI should not be permissible. But when it comes to the higher order transformations and remixes that resemble genuine human work I'm no longer certain. I definitely don't think that "human exceptionalism" makes for a good basis either legally or ethically.

Regarding FOSS licenses, I'm again asking how AI relates back to the original motivations. Why does FOSS exist in the first place? What is it trying to accomplish? A couple of ideological motivations that come to mind are preventing someone from building on top and then profiting, or ensuring user freedom and the ability to tinker.

Yes, the current crop of AI tools seems to pose an ideological issue. However! That's only because the current iteration can't truly innovate and also (as you note) the process still requires lots of painstaking human input. That's a far cry from the hypothetical that I previously posed.

jacquesm 10 hours ago | parent | prev [-]

One of your remarks regarding attribution and compensation goes back to 'Xanadu' (Ted Nelson), by the way; if you are not familiar with it, it might be worth reading up on. He obviously did this well before the current AI age, but a lot of the ideas apply.

A meta-comment:

I absolutely love your attention to detail in this discussion and your avoiding 'the easy way out' of some of the hairier concepts embedded in it. This is exactly the kind of interaction that I love HN for, and it is interesting how this thread seems to bring out the best in you at the same time that it seems to bring out the worst in others.

Most likely they are responding as strongly as they do because they've bought into this matter to the degree that they are passing off works they did not create as their own novel output. They got paid for it, and - like a religious person - they are now so invested that it has become their crutch and a part of their identity.

If you have another train ride to make I'd love for you to pick apart that argument and to refute it.

fransje26 16 hours ago | parent | prev [-]

> It's also fun to tell Copilot that the code will violate a license. It will seemingly always tell you it's fine. Safe legal advice.

Perfectly embodies the AI "startup" mentality. Nice.. /s

whatshisface a day ago | parent | prev | next [-]

The key difference between plagiarism and building on someone's work is whether you say, "this is based on code by linsey at github.com/socialnorms" or "here, let me write that for you."

CognitiveLens a day ago | parent | next [-]

but as mlinsey suggests, what if it's influenced in small, indirect ways by 1000 different people, kind of like the way every 'original' idea from trained professionals is? There's a spectrum, and it's inaccurate to claim that Claude's responses are comparable to adapting one individual's work for another use case - that's not how LLMs operate on open-ended tasks, although they can be instructed to do that and produce reasonable-looking output.

Programmers are not expected to add an addendum to every file listing all the books, articles, and conversations they've had that have influenced the particular code solution. LLMs are trained on far more sources that influence their code suggestions, but it seems like we actually want a higher standard of attribution because they (arguably) are incapable of original thought.

saalweachter a day ago | parent | next [-]

It's not uncommon, in a well-written code base, to see documentation on different functions or algorithms noting where they came from.

This isn't just giving credit; it's valuable documentation.

If you're later looking at this function and find a bug or want to modify it, the original source might not have the bug, might have already fixed it, or might have additional functionality that wasn't necessary in the first copy but is useful when you copy it to a third location.

jacquesm 16 hours ago | parent [-]

This is why I'm still, even after decades of seeing it fail in the marketplace, a fan of literate programming.

sarchertech a day ago | parent | prev [-]

If the problem you ask it to solve has only one or a few examples, or if there are many cases of people copy-pasting the solution, LLMs can and will produce code that would be called plagiarism if a human did it.

ineedasername a day ago | parent | prev [-]

Do you have a source for that being the key difference? Where did you learn your words? I don’t see the names of your teachers cited here. The English language has existed a while; why aren’t you giving a citation every time you use a word that already exists in a lexicon somewhere? We have a name for people who don’t coin their own words for everything and rip off the words that others painstakingly evolved over millennia of history. Find your own graphemes.

latexr a day ago | parent [-]

What a profoundly bad faith argument. We all understand that singular words are public domain, they belong to everyone. Yet when you arrange them in a specific pattern, of which there are infinite possibilities, you create something unique. When someone copies that arrangement wholesale and claims they were the first, that’s what we refer to as plagiarism.

https://www.youtube.com/watch?v=K9huNI5sBd8

ineedasername 20 hours ago | parent | next [-]

It’s not a bad faith argument. It’s an attempt to shake thinking that is profoundly stuck by taking that thinking to an absurd extreme. Until that’s done, quite a few people aren’t able to see past the assumptions they don’t know they’re making. And by “quite a few people” I mean everyone, at different times. A strong appreciation for the absurd will keep a person’s thinking much sharper.

stOneskull 13 hours ago | parent [-]

>> The key difference between plagiarism and building on someone's work is whether you say, "this is based on code by linsey at github.com/socialnorms" or "here, let me write that for you."

> [i want to] shake thinking that is profoundly stuck [because they] aren’t able to see past the assumptions they don’t know they’re making

what is profoundly stuck, and what are the assumptions?

macinjosh 11 hours ago | parent [-]

That your brain training on all the inputs it sees and creating output is fundamentally more legitimate than a computer doing the same thing.

Arelius 8 hours ago | parent [-]

Copyright isn't some axiom, but to quote Wikipedia: "Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized."

It's a tool to incentivize human creative expression.

Thus it's entirely sensible to consider and treat the output from computers and humans differently.

Especially when you consider the large differences between computers and humans, such as how trivial it is to create perfect duplicates of a computer's training.

tscherno 17 hours ago | parent | prev | next [-]

It is possible that history teachers of future generations will classify the concept of intellectual property as a mistake of our era.

latexr 16 hours ago | parent [-]

Intellectual property is a legal concept; plagiarism is ethical. We’re discussing the latter.

jacquesm a day ago | parent | prev [-]

This particular user does that all the time. It's really tiresome.

ineedasername 20 hours ago | parent [-]

It’s tiresome to see unexamined assumptions and self-contradictions tossed out by a community that can, and often does, do much better. Some light absurdism often goes further, and it makes clear that I’m not just trying to set up a strawman, since I’ve already gone and made a parody of my own point.

jacquesm 17 hours ago | parent | prev | next [-]

> we have laws and norms that provide sufficient incentive for new ideas to continue to be created

Indeed, and up until the advent of 'AI' we did. But that incentive is being killed right now and I don't see any viable replacement on the horizon.

nextos a day ago | parent | prev | next [-]

In the case of LLMs, due to RAG, it's very often not just learning but almost direct, real-time plagiarism from concrete sources.

doix a day ago | parent | next [-]

Isn't RAG used for your code rather than other people's code? If I ask it to implement some algorithm, I'd be very surprised if RAG was involved.

sholain a day ago | parent | prev [-]

RAG and LLMs are not the same thing, but 'Agents' incorporate both.

Maybe we could resolve the OP's conundrum a bit by requiring 'agents' to give credit for things if they did RAG them or pull them off the web?

It still doesn't resolve the 'inherent learning' problem.

It's reasonable to suggest that if 'one person did it, we should give credit' - at least in some cases - and also reasonable that if 1K people have done similar things and the AI learns from that, well, I don't think credit is something that should apply.

But a couple of considerations:

- It may not be that common for an LLM to 'see one thing one time' and then have such an accurate assessment of the solution. It helps, but LLMs tend not to 'learn' things that way.

- Some people might consider this the OSS dream - any code that's public is public, and it's in the public domain. We don't need to 'give credit' to someone because they solved something relatively arbitrary - or, if they are concerned with that, we can have a separate mechanism for it, i.e. they can put it on GitHub or even Wikipedia, and then we can worry about 'who thought of it first' as a separate consideration. But in terms of engineering application, that would be a bit of a distraction.

martin-t a day ago | parent [-]

> if 1K people have done similar things ad the AI learns from that, well, I don't think credit is something that should apply.

I think it should.

Sure, if you make a small amount of money and divide it among the 1000 people who deserve credit due to their work being used to create ("train") the model, it might be too small to bother.

But if actual AGI is achieved, then it has nearly infinite value. If said AGI is built on top of the work of the 1000 people, then almost infinity divided by 1000 is still a lot of money.

Of course, the real numbers are way larger: LLMs were trained on the work of at least 100M, and perhaps over a billion, people. But the value they provide over a long enough timespan is also claimed to be astronomical (evidenced by the valuations of those companies). It's not just their employees who deserve a cut but everyone whose work was used to train them.

> Some people might consider this the OSS dream

I see the opposite. Code that was public but protected by copyleft can now be reused in private/proprietary software. All you need to do is push it through enough matmuls and some nonlinearities.

sholain 13 hours ago | parent [-]

- I don't think it's reasonable to suggest that 1000 people all coming up with variations of some arbitrary bit of code deserve credit - or, certainly, 'financial remuneration' - just because they wrote some arbitrary piece of code.

That scenario is already very well accepted today, legally and morally, as public domain.

- Copyleft is not OSS, it's a tiny variation of it, one which is both highly ideological and impractical. Less than 2% of OSS projects are copyleft. It's a legit perspective obviously, but it hasn't been representative for 20 years.

Whatever we do with AI, we already have a basic understanding of public domain, at least we can start from there.

grayhatter 9 hours ago | parent | prev | next [-]

> What if it were ten different humans writing ten different-but-related pieces of code, and an eleventh human piecing them together? What if it were 1,000 different humans?

What if it was just a single person? I take it you didn't read any of the code in the OCaml vibe PR that was posted a bit ago? The one where Claude copied not just implementation specifics, but even the copyright headers from a named, specific person.

It's clear that you can have no idea if the magic black box is copying from a single source, or from many.

So your comment boils down to; plagiarism is fine as long as I don't have to think about it. Are you really arguing that's ok?

jacquesm 7 hours ago | parent [-]

> So your comment boils down to; plagiarism is fine as long as I don't have to think about it.

It is actually worse: plagiarism is fine if I'm shielded from such claims by using a digital mixer. When criminals use crypto tumblers to hide their involvement, we tend to see that as proof of intent, not as absolution.

LLMs are copyright tumblers.

https://en.wikipedia.org/wiki/Cryptocurrency_tumbler

geniium 21 hours ago | parent | prev [-]

Thanks for writing this - love the way you explain this point of view. I wish people would consider this angle more.