mnewme 5 hours ago

Good one.

No platform ever should allow CSAM content.

And the fact that they didn't even care and weren't willing to spend money on implementing guardrails or moderation is deeply concerning.

This has, imho, nothing to do with model censorship, but everything to do with allowing that kind of content on a platform.

sunshine-o 3 hours ago | parent | next [-]

> No platform ever should allow CSAM content.

> And the fact that they didn't even care and weren't willing to spend money on implementing guardrails or moderation is deeply concerning.

In the 90s, the principal of a prominent school in my city was arrested for having CSAM on his computer, downloaded from the Internet.

As the story made the news, most people were trying to wrap their heads around this "Internet" thing and how it could produce CSAM. Remember, in the 90s the "Internet" was a bit like quantum computing for most people: hard to understand how it worked, and only a few had actually played with it.

I have no idea how that school principal downloaded the CSAM. UUCP, FTP, Usenet or maybe the brand new "World Wide Web"? But I guess the justice system had to figure out how that stuff worked to prosecute him.

So society and the state have known for at least 30 years that the Internet is full of that stuff. The question is why they are so motivated to do something about it only now.

Could it be because the "web of rich and powerful pedos" has been getting exposed through the Epstein affair over the last few years?

So maybe they need to pretend to crack down on the "web of poor pedos"?

pjc50 3 hours ago | parent [-]

Enforcement of anti-CSAM law has been a significant thing for a long time. It's in no way "only now". Even the "free speech" platforms banned it because they knew they would get raided otherwise. There are long standing tools for dealing with it, such as a database of known hashes of material. There's even a little box you can tick in Cloudflare to automatically check outgoing material from your own site against that database - because this is a strict liability offence, and you are liable if other people upload it to you where it can be re-downloaded.
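
For a rough sense of how that hash-list check works, here's a minimal sketch (the hash set, function names and values are hypothetical; production systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, not plain cryptographic hashes):

    import hashlib

    # Hypothetical set of known-bad hashes, e.g. sourced from an NCMEC/IWF hash list.
    KNOWN_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def is_known_material(file_bytes: bytes) -> bool:
        """Return True if the uploaded media matches a hash on the known-material list."""
        return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

    def handle_upload(file_bytes: bytes) -> str:
        # Block the upload and flag it for reporting instead of serving it back out.
        if is_known_material(file_bytes):
            return "blocked_and_reported"
        return "accepted"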

What's new is that X automated the production of obscene or sexualised images by providing grok. This was also done in a way that confronted everyone; it's very different from a black market. This is basically a harassment tool for use against women and girls.

sunshine-o 2 hours ago | parent [-]

> What's new is that X automated the production of obscene or sexualised images by providing grok.

Yes we are now dealing with an automated Photoshop. And somehow the people in charge have decided to do something about it, probably more for political or maybe darker reasons.

So let me make a suggestion: maybe France or the EU should ban its citizens from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc.?

Hit them hard at the money level... it wouldn't be more authoritarian than something like ChatControl or restricting access to VPNs.

And actually all the mechanisms are already in place to implement something like that.

cbolton an hour ago | parent | next [-]

> Yes we are now dealing with an automated Photoshop. And somehow the people in charge have decided to do something about it, probably more for political or maybe darker reasons.

I don't get what's difficult to understand or believe here. Grok is causing a big issue in practice right now, a larger issue than Photoshop, and it would be easy for X to regulate it themselves like the competition does, but they don't, so the state intervenes.

> maybe France or the EU should ban its citizens from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc.?

You're basically asking "why do a surgical strike when you can do carpet bombing"? A surgical strike is used to target the actual problem. With carpet bombing you mostly cause collateral damage.

fsloth 44 minutes ago | parent | prev [-]

> maybe France or the EU should ban its citizens from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc.?

I'm sorry, but I don't understand any of the arguments above.

If we presume a dark control motivation then having shares in the entities you want to control is the best form of control there is.

gadders 3 hours ago | parent | prev | next [-]

LOL. The amount of stuff that was on Twitter before Elon bought it, or that is on BlueSky or Mastodon.

pjc50 3 hours ago | parent | next [-]

The different factors are scale (now "deepfakes" can be produced automatically) and endorsement. It is significant that all these images aren't being posted by random users; they are appearing under the company's @grok handle. Therefore they are speech by X, so it's X that's getting raided.

mnewme 2 hours ago | parent | prev [-]

There is no content like that on Bluesky nor Mastadon. Show the evidence

Zenst an hour ago | parent | next [-]

https://bsky.social/about/blog/01-17-2025-moderation-2024

"In 2024, Bluesky submitted 1,154 reports for confirmed CSAM to the National Centre for Missing and Exploited Children (NCMEC). Reports consist of the account details, along with manually reviewed media by one of our specialized child safety moderators. Each report can involve many pieces of media, though most reports involve under five pieces of media."

If it wasn't there, there would be no reports.

mnewme an hour ago | parent [-]

But that is the difference: they actually do something about it.

GaryBluto 38 minutes ago | parent | prev [-]

> There is no content like that on [...] Mast(o)don.

How can you say that nobody is posting CSAM on a massive decentralized social network with thousands of servers?

ReptileMan 2 hours ago | parent | prev | next [-]

I disagree. If AI-generated content is breaking the law, prosecute the people who use the tools, not the tool makers.

A provider should have no responsibility for how its tools are used. That is on the users. This is a can of worms that should stay closed, because we would all lose freedoms just because of a couple of bad actors. An AI tool's main job is to obey. We are hurtling toward an "I'm sorry, Dave. I'm afraid I can't do that" future at breakneck speed.

mnewme 2 hours ago | parent | next [-]

I agree that users who break the law must be prosecuted. But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.

We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.

ReptileMan 2 hours ago | parent | next [-]

>But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.

Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.

The other cases you list are harms to the people using the tools/products, not harms that people using the tools inflict on third parties.

We are literally arguing about 3D printer control two topics downstream. 3D printers can in theory be used for CSAM too. So we should totally ban them, right? The same goes for pencils, paper, lasers and drawing tablets.

szmarczak an hour ago | parent | next [-]

You are literally trolling. No one is banning AI entirely. However, AI shouldn't spit out adult content. Let's not make it easy for people to harm others with little to no effort.

mnewme 2 hours ago | parent | prev [-]

That is not the argument. No one is arguing for banning open-source LLMs on Hugging Face that could potentially create problematic content. But X provides not only an AI model but also a platform and distribution, so that is inherently different.

ReptileMan 2 hours ago | parent [-]

No it is not. X is a dumb pipe. You have humans on both ends. Arrest them, summarily execute them, whatever. You go after X because it is a choke point and an easy target.

mnewme 2 hours ago | parent [-]

First you argue about the model, now the platform. Two different things.

If a platform encourages this and doesn't moderate at all, then yes, we should go after the platform.

Imagine a newspaper publishing content like that and saying they are not responsible for their journalists.

JustRandom 30 minutes ago | parent | prev [-]

[dead]

moorebob an hour ago | parent | prev | next [-]

But how would we bring down our political boogieman Elon Musk if we take that approach?

Everything I read from X's competitors in the media tells me to hate X, and hate Elon.

If we prosecute people not tools, how are we going to stop X from hurting the commercial interests of our favourite establishment politicians and legacy media?

mnewme an hour ago | parent [-]

People defending allowing CSAM content was definitely not on my bingo card for 2026.

kakacik an hour ago | parent | prev | next [-]

You won't find much agreement with your opinion amongst most people. No matter how many "this should and this shouldn't" statements a single individual writes down, that's not how morals work.

thrance an hour ago | parent | prev [-]

How? X is hostile to any party attempting to bring justice to its users that are breaking the law. This is a last recourse, after X and its owner stated plainly that they don't see anything wrong with generating CSAM or pornographic images of non-consenting people, and that they won't do anything about it.

ReptileMan an hour ago | parent [-]

Court order, IPs of the users, sue the users. It is not X's job to bring justice.

thrance an hour ago | parent [-]

X will not provide that information to the French justice system. What then? Also insane that you believe the company that built a "commit crime" button bears no responsibility whatsoever in this debacle.

exodust 5 hours ago | parent | prev | next [-]

I remember when CSAM meant actual children, not computer graphics.

Should platforms allow violent AI images? How about "R-Rated" violence like we see in popular movies? Point-blank executions, brutal and bloody conflict involving depictions of innocent deaths, torment and suffering... all good? Hollywood says it's all good; how about you? How far do you take your "unacceptable content" guidance?

pjc50 3 hours ago | parent | next [-]

> How about "R-Rated" violence like we see in popular movies?

Movie ratings are a good example of a system for restricting who sees unacceptable content, yes.

ascagnel_ 20 minutes ago | parent [-]

More to the point, now that most productions are using intimacy coordinators, there's a degree of certainty around the consent of R-rated images.

There's basically no consent with what Grok is doing.

myrmidon 4 hours ago | parent | prev | next [-]

There are multiple valid reasons to fight realistic computer-generated CSAM content.

Uncontrolled proliferation of AI-CSAM makes detection of "genuine" data much harder and prosecution of perpetrators more difficult, and specifically in many of the grok cases it harms young victims that were used as templates for the material.

Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.

Eisenstein 2 hours ago | parent [-]

> Uncontrolled proliferation of AI-CSAM makes detection of "genuine" data much harder

I don't follow. If the prosecutor can't find evidence of a crime and a person is not charged, that is considered harmful? By that logic the 5th Amendment would fall under the same category, and so would encryption. Making law enforcement work harder to find evidence of a crime cannot be criminalized unless you can come up with a reason why the actions themselves deserve to be criminalized.

> specifically in many of the grok cases it harms young victims that were used as templates for the material.

What are the criteria for this? If something is suitably transformed such that the original model for it is not discernible or identifiable, how can it harm them?

Do not take these as arguments against the idea you are arguing for, but as rebuttals of arguments that are not convincing, or that, if they were, would be terrible if applied generally.

myrmidon an hour ago | parent [-]

If there is a glut of legal, AI-generated CSAM material, then this provides a lot of deniability for criminal creators/spreaders who cause genuine harm, and reduces the "vigilance" of prosecutors, too ("it's probably just AI-generated anyway...").

You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.

> What are the criteria for this?

My criteria would be victims suffering personally from the generated material.

The "no harm" argument only really applies if victims and their social bubble never find out about the material (but that did happen, sometimes intentionally, in many cases).

You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.

Eisenstein 44 minutes ago | parent [-]

> You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.

But that reason is highly problematic. Laws should be able to stand on their own merits. Saying 'this makes enforcement of other laws harder' does not do that. You could use the same reasoning against encryption.

> You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.

I thought you were saying that the kids who were in the dataset that the model was trained on would be harmed. I agree with what I assume you meant based on your reply, which is that people who had their likeness altered are harmed.

myrmidon 5 minutes ago | parent [-]

> Saying 'this makes enforcement of other laws harder' does not do that. You could use the same reasoning against encryption.

Yes. I almost completely agree with your outlook, but I think that many of our laws trade such individual freedoms for better society-wide outcomes, and those are often good tradeoffs.

Just consider gun legislation, driving licenses, KYC laws in finance, etc.: should the state have any business interfering there? I'd argue in isolation (ideally) not; but all of those lead to huge gains for society, making it much less likely that you will be killed by intoxicated drivers (or machine-gunners) and limiting fraud, crime and corruption.

So even if laws look kinda bad from a purely theoretical-ethics point of view, it's still important, in my view, to look at the actual effects they have before dismissing them as unjust.

KaiserPro 4 hours ago | parent | prev | next [-]

> I remember when CSAM meant actual children, not computer graphics.

The "oh its photoshop" defence was an early one, which required the law to change in the uk to be "depictions" of children, so that people who talk about ebephiles don't have an out for creating/distributing illegal content.

master-lincoln an hour ago | parent [-]

There still needs to be sexual abuse depicted, no? Just naked kids should not be an issue, right?

2 hours ago | parent | prev | next [-]
[deleted]
thrance an hour ago | parent | prev | next [-]

Ok, imagine your mom, sister or daughter is using X. Some random guy with an anime profile picture and a neo-Nazi bio comes in, asks grok to make a picture of them in a bikini for the whole world to see, and the bot obliges. Do you see the issue now? Because that happened to literally millions of people last month.

master-lincoln an hour ago | parent | next [-]

A generated picture of a family member in a bikini is an issue?

I don't see it...

mnewme 44 minutes ago | parent [-]

Because you are apparently a man and have never been harassed in your life

master-lincoln 40 minutes ago | parent [-]

I have been harassed. Women in bikinis are normal where I live.

mnewme 29 minutes ago | parent | next [-]

There is a difference between walking around in a bikini and other people creating sexualised pictures of you without your consent.

You do understand that?

b40d-48b2-979e 18 minutes ago | parent | prev [-]

People like you are why women choose the bear.

mnewme an hour ago | parent | prev [-]

Exactly! This should not be ok

cess11 4 hours ago | parent | prev | next [-]

[flagged]

mnewme 3 hours ago | parent | prev [-]

What the hell?

As a father, I believe there shouldn't be any CSAM content anywhere.

And consider that it has apparently already been proven that these models had CSAM in their training data.

Also, what about the nudes of actual people? That is an invasion of privacy.

I am shocked that we are even discussing this.

moorebob an hour ago | parent | prev | next [-]

Making Silicon Valley the judge, jury and executioner of pedos seems, at best, a dereliction of duty by the real authorities, and at worst, a very dark and dystopian path to opaque and corruptible justice.

X should identify those users who are abusing its tools to make CSAM (and I suspect those users are mostly leftie Musk-haters trying to create vexatious evidence against X), and then X should pass that information to the authorities so the proper legal process can be followed.

mnewme an hour ago | parent [-]

They are not doing anything; that is the problem. X just doesn't care at all.

But they are happy to censor for countries like Turkey.

Hypocrisy level 1 million

darkport 3 hours ago | parent | prev [-]

Is there any evidence of CSAM being generated by Grok? Because I’ve never seen any and I use X every day.

Sure, I saw the bikini pics, which I agree are weird and shouldn't be allowed, but they're not CSAM under a legal definition.

direwolf20 3 hours ago | parent [-]

Are you asking to be provided links to child porn?