miki123211 8 hours ago

This vindicates the pro-AI censorship crowd I guess.

It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for; you are. So you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits, I guess.

culi 7 hours ago | parent | next [-]

It's not really different from how we treat any other platform that can host CSAM. I guess the main difference is that it's being "made" here instead of simply "distributed".

toxik 6 hours ago | parent [-]

Uh, let's distinguish between generated images, however revolting, and actual child sexual abuse.

The main problem with the image generators is that they are used to harass and smear people (and children...). Those things were always illegal to do.

krig 6 hours ago | parent | next [-]

Those images are generated from a training set, and it is already well known and reported that those training sets contain _real_ CSAM, real violence, real abuse. That "generated" face of a child is based on real images of real children.

pavlov 5 hours ago | parent [-]

Indeed, a Stanford Internet Observatory study from a few years back showed that the image data sets used by essentially everybody (notably LAION-5B) contain CSAM.

Everybody else has teams building guardrails to mitigate this fundamental existential horror of these models. Musk fired all the safety people and decided to go all in on “adult” content.

KaiserPro 4 hours ago | parent | prev | next [-]

> let's distinguish between generated images, however revolting, and actual child sexual abuse.

Can't, because even before GenAI the "oh, it's generated in Photoshop" or "they just look young" excuse was successfully used to let a lot of people walk free. The law was tightened in the early 2000s for precisely this reason.

hdgvhicv 6 hours ago | parent | prev | next [-]

> Uh, let's distinguish between generated images, however revolting, and actual child sexual abuse.

If you want. In many countries the law doesn't. If you don't like the law, your billion-dollar company still has to follow it. At least in theory.

blipvert 4 hours ago | parent | prev [-]

Pro-tip: if you are actively assisting someone in doing illegal things then you are an accomplice.

mnewme 5 hours ago | parent | prev | next [-]

This is the wrong take.

Yes, they could have an uncensored model, but then they would need proper moderation: delete this kind of content instantly, ban users who produce it, or don't allow it in the first place.

It doesn't matter how CSAM is produced; the only thing that matters is that it is on the platform.

I am flabbergasted that people even defend this.

direwolf20 2 hours ago | parent [-]

It matters whether they attempt to block it or encourage it. Musk encouraged it, until legal pressure hit, then moved it behind a paywall so it's harder to see evidence.

mnewme 2 hours ago | parent [-]

Exactly!

madeofpalk 5 hours ago | parent | prev | next [-]

Let’s take a step back and remove AI generation from the conversation for a moment.

Did X do enough to prevent its website from being used to distribute illegal content, i.e. non-consensual sexual material of both adults and children?

Now reintroduce AI generation, where X plays a more active role in facilitating the creation of that illegal content.

muyuu 4 hours ago | parent [-]

"Enough" can always be pushed into the impossible. That's why laws and regulations need to be more concrete than that.

There's essentially a push to end the remnants of the free speech Internet by making the medium responsible for the speech of its participants. Let's not pretend otherwise.

KaiserPro 4 hours ago | parent [-]

The law is concrete on this.

In the UK, you must take "reasonable" steps to remove illegal content.

This normally means some basic detection (i.e. fingerprinting against a collaborative hash database, which is widely used) or, if a user is consistently uploading said stuff, banning them.

Allowing a service that you run to continue to generate said illegal content, even after you publicly admit that you know it's wrong, is not reasonable.
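
To make that concrete: the fingerprinting in question is typically perceptual hashing matched against a shared database of known-bad hashes, PhotoDNA being the best-known example. Here is a minimal sketch of the idea in Python using the open-source imagehash library; the empty hash set, the threshold, and the function name are illustrative assumptions, not any platform's actual pipeline:

    # Sketch: hash-based detection against a collaborative database of
    # known illegal images. Real deployments use more robust proprietary
    # fingerprints (e.g. PhotoDNA) fed by clearinghouses like the IWF.
    import imagehash
    from PIL import Image

    # Supplied by a clearinghouse in practice; empty placeholder here.
    KNOWN_BAD_HASHES: set[imagehash.ImageHash] = set()
    MATCH_THRESHOLD = 4  # max Hamming distance counted as a match (assumed)

    def is_known_illegal(path: str) -> bool:
        """Fingerprint an upload and compare it to the shared database."""
        upload_hash = imagehash.phash(Image.open(path))
        # Perceptual hashes survive re-encoding and small edits, so we
        # match on Hamming distance rather than exact equality.
        return any(upload_hash - bad <= MATCH_THRESHOLD
                   for bad in KNOWN_BAD_HASHES)

Note the limitation: this only catches images already catalogued in the database, which is part of why freshly generated content is the harder moderation problem.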

muyuu 4 hours ago | parent [-]

that doesn't sound concrete to me, at all

direwolf20 2 hours ago | parent | next [-]

No law is concrete. Murder is killing with intent to kill. What concrete test shows whether someone intended to kill? You have intent to kill if a reasonable person would expect the actions you took to result in killing.

KaiserPro 2 hours ago | parent | prev | next [-]

Nothing in common law is "concrete"; that's kinda the point of it.

Judges can evolve and interpret as they see fit, and this evolution is case law.

This is why in the US the Supreme Court can effectively change the law by issuing a binding ruling (see the 2nd Amendment being read as barring gun laws rather than as written, or the recent racial profiling issues).

disgruntledphd2 3 hours ago | parent | prev [-]

It's about as concrete as one gets in the UK/US/Anglosphere legal tradition.

muyuu 3 hours ago | parent [-]

if you can be sued for billions because some overbearing body, with a very different ideology to yours, can deem your moderation/censorship rules "unreasonable", then what you do is err on the side of caution and allow nearly nothing

this is not compatible with that line of business - perhaps one of the reasons nothing is done in Europe these days

direwolf20 2 hours ago | parent | next [-]

They advertised that you could use the tool to undress people; that's pretty clearly on the unreasonable side of the line.

KaiserPro 3 hours ago | parent | prev [-]

sigh

The vast majority of the EU is not common law, so "reasonable" in this instance is different.

What you describe already happens in the USA; that's why MLB has that weird local TV blackout, and why bad actors use copyright to take down content they don't like.

The reason it's so easy to do that is that companies must reasonably comply with copyright holders' requests.

It's the same with CSAM: distributing it doesn't have First Amendment protection, and knowingly distributing it is illegal. All reasonable steps should be taken to detect and remove CSAM from your systems to qualify for safe harbour.

muyuu 2 hours ago | parent [-]

sigh indeed

> It's the same with CSAM: distributing it doesn't have First Amendment protection, and knowingly distributing it is illegal. All reasonable steps should be taken to detect and remove CSAM from your systems to qualify for safe harbour.

nice try, but nobody is distributing or hosting CSAM in the current conversation

people trying to trick a bot into posting bikini pictures of preteens and then blaming the platform for it is a ridiculous stretch of the concept of hosting CSAM. it really is a transparent attack on a perceived political opponent, pushing for a completely different model of the internet from the pre-existing one - a transition as obvious as it is already advanced in Europe and most of the so-called Anglosphere

> The vast majority of the EU is not common law, so "reasonable" in this instance is different.

the vast majority of the EU is perhaps incompatible with any workable notion of free speech, so perhaps America will have to choose whether it's worth it to sanction them into submission, or cut them off at considerable economic loss

it's not a coincidence that next to nothing is built in Europe these days; the environment is one of fear and stifling regulation. if I were to actually release anything in either AI or social networks, I'd do what most of my fellow Brits/Europoors do already: either sell to America or flee this place before I get big enough to show up on the euro-borg's radar

KaiserPro 2 hours ago | parent [-]

> nice try, but nobody is distributing or hosting CSAM in the current conversation

Multiple agencies (Ofcom, the Irish police, the IWF, and whatever the French regulator is) have detected CSAM.

You may disagree with that statement, but bear in mind the definition of CSAM in the UK is "depiction of a child", which means that whether it's of a real child or entirely generated is not relevant. This was to stop people claiming that the massive cache of child porn they had was photoshopped.

In the USA CSAM is equally vaguely defined, but the case law is different.

> EU is perhaps incompatible with any workable notion of free speech

I mean, the ECHR definition is fairly robust. But given that First Amendment protection has effectively ended in the USA (the president is currently threatening to take a comedian to court for making jokes, you know, like the Twitter bomb threat person in the UK), it's a bit rich really. The USA is not the bastion of free speech it once was.

> either sell to America or flee this place before I get big enough to show up in the euro-borg's radar

Mate, as someone who's sold a startup to the USA, it's not about regulations, it's about cold hard fucking cash. All major companies comply with EU regs, and it's not hard. They just bitch about them so that the USA doesn't put in basic data protection laws and they can continue to be monopolies.

direwolf20 2 hours ago | parent | prev | next [-]

It's not because it could generate CSAM. It's because when they found out it could generate CSAM, they didn't try to prevent that; they advertised it. Actual knowledge is a required component of many crimes.

Jordan-117 7 hours ago | parent | prev | next [-]

I could maybe see this argument if we were talking about raiding Stable Diffusion or Facebook or some other provider of local models. But the content at issue was generated not just by Twitter's AI model, but on their servers, integrated directly into their UI and hosted publicly on their platform. That makes them much more clearly culpable -- they're not just enabling this shit, they're creating it themselves on demand (and posting it directly to victims' public profiles).

disgruntledphd2 3 hours ago | parent [-]

And importantly, this is clearly published by Grok rather than the user. Obviously this case isn't in the US, but if it were, I'm not sure Section 230 would apply.

gordian-mind 3 hours ago | parent | prev | next [-]

This is not about AI but about censorship of a nonaligned social network. It's been a developing current in the EU. They have basically been foaming at the mouth at the platform since it got bought.

direwolf20 2 hours ago | parent [-]

It's about a guy who thinks posting child porn on twitter is hilarious, and that guy happens to own twitter.

If it were about blocking the social network, they'd just block it, like they did with Russia Today, CUII-Liste Lina, or Pavel Durov.

mordnis an hour ago | parent [-]

He said that child pornography is funny? Do you have a link by any chance?

popalchemist 6 hours ago | parent | prev | next [-]

It's a bit of a leap to say that the model must be censored. SD and all the open image gen models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.

KaiserPro 3 hours ago | parent | next [-]

Again, it's all about what's reasonable.

Firstly, does the open model explicitly/tacitly allow CSAM generation?

Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put in place protections?

Thirdly, do they pull in data that is likely to allow that kind of content to be generated?

Fourthly, when they are told that this is happening, do they pull the model?

Fifthly, do they charge for access/host the service and allow users to generate said content on their own servers?

vintermann 5 hours ago | parent | prev [-]

So far, yes, but as far as I can tell their case against the AI giants isn't based on them being for-profit services in any way.

wtcactus 3 hours ago | parent | prev | next [-]

It's 2026. No common individual can be held accountable for anything wrong they do. We must always find some way to blame some "corporation" or some "billionaire" or some ethnic group of people.

I wonder where all these people - and the French government - have been in the past three decades while kids did the same thing with Photoshop.

direwolf20 2 hours ago | parent [-]

You don't see any difference between Google Search and ChatGPT?

themafia 7 hours ago | parent | prev | next [-]

Holding corporations accountable for their profit streams is "censorship"? I wish they'd stop passing off models trained on internet conversations and hoarded data as fit for any purpose. The world does not need to boil oceans for hallucinating chat bots at this particular point in history.

vintermann 5 hours ago | parent [-]

If it had been over profit, I would be all for it. I think that there are a ton of things which should be legal but not legal to make profit on.

But this is about hosting a model with allegedly insufficient safeguards against harassing and child-sexualizing images, isn't it?

direwolf20 2 hours ago | parent [-]

It's not about 'hosting a model'

vintermann 2 hours ago | parent [-]

What do you mean? What is it about then?

direwolf20 an hour ago | parent [-]

Publishing child porn

PeterStuer 5 hours ago | parent | prev | next [-]

Imagine the liabilities of camera producers. Better let those ballpoint manufacturers know they need to significantly up their legal insurance!

mnewme 5 hours ago | parent | next [-]

That is not the same.

The correct comparison would be:

You provide a photo studio with an adjacent art gallery and allow people to shoot CSAM content there and then exhibit their work.

direwolf20 2 hours ago | parent | next [-]

And the sign out front says "X-Ray camera photographs anyone naked — no age limits!"

And the camera is pointing out the window so you can use it on strangers walking by.

There is a point in law where you make something so easy to misuse that you become liable for the misuse.

In the USA they have "attractive nuisance", like building a kid's playground on top of a pit of snakes. That's so obviously a dumb idea that you become liable for the snake-bitten kids; you can't save yourself by arguing that you didn't give the kids permission to use the playground, that it's on private property, that the kids should have seen the snakes, or that it's legal to own snakes. No, you set up a situation where people were obviously going to get hurt, and you become liable for the hurt.

PeterStuer 5 hours ago | parent | prev [-]

Not knowing any better, and not having seen any of the alleged images, my default guess would be that they used the exact same CSAM filtering pipeline already in place on X, regardless of the origin of the submitted images.
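
If that guess is right, the architecture would be a single gate that both user uploads and model outputs pass through before anything is published. A rough sketch under that assumption; scan_for_csam() and the other names stand in for whatever detection X actually runs:

    # Sketch: one moderation gate applied identically to user uploads
    # and model outputs. scan_for_csam() is a placeholder, not a real API.
    from enum import Enum

    class Source(Enum):
        USER_UPLOAD = "user_upload"
        MODEL_OUTPUT = "model_output"

    def scan_for_csam(image_bytes: bytes) -> bool:
        """Placeholder for the real detector (hash match and/or classifier)."""
        return False  # stub; a real implementation goes here

    def publish(image_bytes: bytes, source: Source) -> bool:
        # The same check runs regardless of origin, i.e. the pipeline never
        # treats generated images differently from uploaded ones.
        if scan_for_csam(image_bytes):
            return False  # block, report, and ban per policy
        return True

The catch is that a pipeline tuned for re-uploads of known material would do little against freshly generated images, which never appear in any hash database.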

mnewme 3 hours ago | parent [-]

They obviously didn't really implement anything, as you can super easily find that content, or involuntary nudes of other people (which is also an invasion of privacy).

brnt 5 hours ago | parent | prev [-]

If the camera reliably inserted racist filters, and the ballpoint pen added hurtful words to whatever you wrote, then indeed, let them up their legal insurance.
