Kon-Peki 8 hours ago

Violations of 17 USC 1202 can be punished pretty severely. It's not just about money, either.

If, during the trial, the judge thinks that OpenAI is going to be found to be in violation, he can order all of OpenAI's computer equipment to be impounded. If OpenAI is found to be in violation, he can then order permanent destruction of the models, and OpenAI would have to start over from scratch in a manner that doesn't violate the law.

Whether you call that "core" or not, OpenAI cannot afford to lose the parts of this lawsuit that are left.

zozbot234 8 hours ago | parent | next [-]

> he can order all of OpenAI's computer equipment to be impounded.

Arrrrr matey, this is going to be fun.

Kon-Peki 8 hours ago | parent | next [-]

People have been complaining about the DMCA for 2+ decades now. I guess it's great if you are on the winning side. But boy does it suck to be on the losing side.

immibis 7 hours ago | parent [-]

And normal people can't get on the winning side. I'm trying to get GitHub to DMCA my own repositories, since it blocked my account and I've therefore decided it no longer has the right to host them. Same with Stack Exchange.

GitHub has ignored me so far, and Stack Exchange explicitly said no (so I then sent them an even broader legal request under the GDPR).

ralph84 6 hours ago | parent [-]

When you uploaded your code to GitHub you granted them a license to host it. You can’t use the DMCA against someone who’s operating within the parameters of the license you granted them.

tremon 5 hours ago | parent [-]

Their stance is that GitHub revoked that license by blocking their account.

immibis 7 hours ago | parent | prev [-]

It won't happen. Judges only order that punishment for the little guys.

nickpsecurity 8 hours ago | parent | prev | next [-]

“If OpenAI is found to be in violation, he can then order permanent destruction of the models, and OpenAI would have to start over from scratch in a manner that doesn't violate the law.”

That is exactly why I suggested companies train some models on public-domain and licensed data. That risk disappears, or at least becomes minimal. Those models could also be used for code and synthetic data generation without legal issues attaching to the outputs.
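
A minimal sketch of what that kind of license-gated data curation could look like, assuming a corpus whose records carry a machine-readable license tag (the field names and allow-list here are illustrative, not any particular dataset's schema):

    # Hypothetical sketch: keep only public-domain or permissively
    # licensed documents before they ever reach the training pipeline.
    ALLOWED_LICENSES = {"public-domain", "cc0-1.0", "mit", "apache-2.0"}

    def filter_corpus(records):
        """Yield only records whose license tag is on the allow-list."""
        for record in records:
            tag = (record.get("license") or "").strip().lower()
            if tag in ALLOWED_LICENSES:
                yield record

    corpus = [
        {"text": "An 1890s novel, long out of copyright.", "license": "public-domain"},
        {"text": "A blog post scraped without permission.", "license": "unknown"},
        {"text": "A permissively licensed code snippet.", "license": "MIT"},
    ]

    training_set = list(filter_corpus(corpus))
    print(len(training_set))  # 2: the unlicensed document is excluded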

jsheard 7 hours ago | parent | next [-]

That's what Adobe and Getty Images are doing with their image generation models; both are exclusively using their own licensed stock image libraries, so they (and their users) are on pretty safe ground.

nickpsecurity 6 hours ago | parent [-]

That’s good. I hope more do. This list has those doing it under the Fairly Trained banner:

https://www.fairlytrained.org/certified-models

3pt14159 7 hours ago | parent | prev [-]

The problem is that you don't get the same quality of data if you go about it that way. I love ChatGPT, and I understand that we're figuring out this new media landscape, but I really hope it doesn't turn out to neuter the models. The models are really well done.

nickpsecurity 6 hours ago | parent | next [-]

If I stole money, I could get way more done than I do now by earning it legally. Yet you won’t see me regularly dismissing legitimate jobs by posting comparisons of what my numbers would look like if I were stealing IP.

We must start with moral and legal behavior. Within that, we look at what opportunities we have. Then, we pick the best ones. Those we can’t have are a side effect of the tradeoffs we’ve made (or tolerated) in our system.

tremon 5 hours ago | parent | prev [-]

That is OpenAI's problem, not their victims'.
