moolcool a day ago

> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

ChrisGreenHeur 10 hours ago | parent | next [-]

adobe must be shaking in their pants

KaiserPro 5 hours ago | parent [-]

Not really, they put a shit ton of effort in to make sure you can't create any kind of nude/suggestive pictures of anyone. I imagine they have strict controls on making images of children, but I don't feel inclined to find out.
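(For context, the common pattern here is a two-stage gate: screen the prompt before generation and screen the image after. Below is a minimal Python sketch of that pattern; the classifier callables are hypothetical stand-ins, and nothing here reflects Adobe's actual pipeline.)

    # Toy sketch of the two-stage guardrail pattern. The classifier callables
    # are hypothetical stand-ins; real systems use trained moderation models.
    from dataclasses import dataclass
    from typing import Callable, Optional, Union

    @dataclass
    class Refusal:
        reason: str

    def generate_with_guardrails(
        prompt: str,
        prompt_classifier: Callable[[str], Optional[str]],   # returns a violation label or None
        generate: Callable[[str], bytes],                    # the underlying image generator
        image_classifier: Callable[[bytes], Optional[str]],  # re-checks the finished image
    ) -> Union[bytes, Refusal]:
        # Stage 1: refuse disallowed prompts before spending any compute.
        violation = prompt_classifier(prompt)
        if violation is not None:
            return Refusal(f"prompt blocked: {violation}")

        image = generate(prompt)

        # Stage 2: re-check the output, since adversarial prompts ("jailbreaks")
        # are crafted specifically to slip past the input filter.
        violation = image_classifier(image)
        if violation is not None:
            return Refusal(f"output blocked: {violation}")
        return image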

pdpi 9 hours ago | parent | prev | next [-]

I'm of two minds about this.

On the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.

On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.

The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.

_trampeltier 9 hours ago | parent | next [-]

It is very different. It is YOUR 3D printer; no one else is involved. If you print a knife and kill somebody with it, you go to jail; no third party is involved.

If you use a service like Grok, you are using somebody else's computer. X is the owner of the computer that produced the CP, so of course X is at least partly liable for producing CP.

pdpi 9 hours ago | parent [-]

How does that mesh with all the safe harbour provisions we've depended on to make the modern internet, though?

mikeyouse 7 hours ago | parent | next [-]

The safe harbor provisions largely protect X from the content that users post (within reason). Here, Grok/X were actually producing the objectionable content: users were making gross requests, and then an LLM owned by X, running on X servers with X code, would generate the illegal material and post it to the website. The entity responsible is no longer the user but the company itself.

Altern4tiveAcc 2 hours ago | parent | next [-]

So, if someone hosts an image editor as a web app, are they liable if someone uses that editor to create CP?

I honestly don't follow it. People creating nudes of others and using the Internet to distribute them can be sued for defamation, sure. I don't think the people hosting the service should be liable themselves, just like people hosting Tor nodes shouldn't be liable for what users of the Tor network do.

luke5441 6 hours ago | parent | prev [-]

Yes, and that was a very stupid product decision. They could have put the image generation into the post editor, shifting responsibility to the users.

I'd guess Elon is responsible for that product decision.

pjc50 4 hours ago | parent | prev | next [-]

Note that safe harbor is a US law, not a French one.

Also, safe harbor doesn't apply because this is published under the @grok handle! It's being published by X under one of their brand names, it's absurd to argue that they're unaware or not consenting to its publication.

numpad0 4 hours ago | parent | prev | next [-]

It's not like the world benefited from safe harbor laws that much. Why not just amend them so that server-side algorithms and platforms that recommend things are not eligible?

direwolf20 4 hours ago | parent [-]

If you are thinking about Section 230, it only applies to user-generated content, so not to server-side AI or timeline algorithms.

Altern4tiveAcc 2 hours ago | parent [-]

So if a social network tool does the exact same thing, but uses the user's own GPU or NPU to generate the content instead, suddenly it's fine?

direwolf20 2 hours ago | parent [-]

If a user generates child porn on their own and uploads it to a social network, the social network is shielded from liability unless and until it refuses to delete it.

_trampeltier 7 hours ago | parent | prev | next [-]

Before, a USER created the content, so the user was/is liable. Now an LLM owned by a company creates the content, so the company is liable.

hbs18 6 hours ago | parent [-]

I'm not trying to make excuses for Grok, but how exactly isn't the user creating the content? Grok doesn't create images of its own volition; the user is still required to give it some input, therefore "creating" the content.

luke5441 6 hours ago | parent | next [-]

X is making it pretty clear that it is "Grok" posting those images and not the user. It is a separate posting that comes from an official account named "Grok". X has full control over what the official "Grok" account posts.

There is no functionality for the users to review and approve "Grok" responses to their tweets.

mbesto 2 hours ago | parent | prev | next [-]

Does an autonomous car drive the car from point A to point B or does the person who puts in the destination address drive the car?

_trampeltier 6 hours ago | parent | prev [-]

Until now, web servers have been like a postal service. Grok is more like a CNC lathe.

jazzyjackson 7 hours ago | parent | prev [-]

This might be an unpopular opinion, but I always thought we might be better off without Web 2.0, where site owners aren’t held responsible for user content.

If you’re hosting content, why shouldn’t you be responsible? Because your business model is impossible if you’re held to account for what’s happening on your premises?

Without safe harbor, people might have to jump through the hoops of buying their own domain name, and hosting content themselves, would that be so bad?

direwolf20 4 hours ago | parent | next [-]

Any app allowing any communication between two users would be illegal.

expedition32 3 hours ago | parent [-]

https://en.wikipedia.org/wiki/EncroChat

You have to understand that Europe doesn't give a shit about techbro libertarians and their desire for a new Lamborghini.

direwolf20 3 hours ago | parent [-]

EncroChat was illegal because it was targeted at drug dealers, advertised for use in drug dealing. And they got evidence by texting "My associate got busted dealing drugs. Can you wipe his device?" and it was wiped. There's an actual knowledge component which is very important here.

pdpi 5 hours ago | parent | prev | next [-]

What about webmail, IM, or any other sort of web-hosted communication? Do you honestly think it would be better if Google were responsible for whatever content gets sent to a gmail address?

jazzyjackson 4 hours ago | parent [-]

Messages are a little different than hosting public content but sure, a service provider should know its customers and stop doing business with any child sex traffickers planning parties over email.

I would prefer 10,000 service providers to one big one that gets to read all the plaintext communication of the entire planet.

pdpi 3 hours ago | parent | next [-]

In a world where hosting services are responsible that way, their filtering would need to be even more sensitive than it is today, and plenty of places already produce unreasonable amounts of false positives.

As it stands, I have a bunch of photos on my phone that would almost certainly get flagged by over-eager/overly sensitive child porn detection — close friends and family sending me photos of their kids at the beach. I've helped bathe and dress some of those kids. There's nothing nefarious about any of it, but it's close enough that services wouldn't take the risk, and that would be a loss to us all.
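(To put rough numbers on the false-positive problem: a back-of-envelope sketch where every figure is an invented assumption, showing how low base rates wreck precision at platform scale.)

    # Back-of-envelope base-rate arithmetic. Every number here is an assumption
    # chosen for illustration, not a real statistic.
    photos_scanned_per_day = 1_000_000_000  # assumed platform volume
    prevalence = 1e-6                       # assumed fraction of photos that are actually illegal
    true_positive_rate = 0.99               # assumed detection rate
    false_positive_rate = 0.001             # assumed 0.1% FPR, optimistic for image classifiers

    true_hits = photos_scanned_per_day * prevalence * true_positive_rate
    false_hits = photos_scanned_per_day * (1 - prevalence) * false_positive_rate
    precision = true_hits / (true_hits + false_hits)

    print(f"true hits/day:  {true_hits:,.0f}")   # ~990
    print(f"false hits/day: {false_hits:,.0f}")  # ~1,000,000
    print(f"precision:      {precision:.2%}")    # ~0.10%: almost every flag is a false alarm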

direwolf20 4 hours ago | parent | prev [-]

They'd all have to read your emails to ensure you don't plan child sex parties. Whenever a keyword match comes up, your account will immediately be deleted.

terminalshort 4 hours ago | parent | prev [-]

You know this site would not be possible without those protections, right?

beAbU 5 hours ago | parent | prev | next [-]

I don't have an answer, but the theme that's been bouncing around in my head has been about accessibility.

Grok makes it trivial to create fake CSAM or other explicit images. Before, if someone spent a week in Photoshop to do the same, it wouldn't be Adobe that got the blame.

Same for 3D printers. Before, anyone could make a gun provided they had the right tools (which are very expensive); now it's being argued that 3D printers are making this more accessible. Although I would argue it's always been easy to make a gun; all you need is a piece of pipe. So I don't entirely buy the moral panic against 3D printers.

Where that threshold lies, I don't know. But I think that's the crux of it. Technology is making previously difficult things easier, to the benefit of all humanity. It's just unfortunate that some less-nice things have also been included.

_ph_ 2 hours ago | parent [-]

I think a company which runs a printing business would have some obligation to make sure it is not fulfilling print orders for guns. Another interesting example is printers and copiers, which refuse to copy cash. This is partly facilitated by the EURion constellation (https://en.wikipedia.org/wiki/EURion_constellation) and other means.
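(For illustration, here is a minimal sketch of how EURion-style detection could work in principle: find small circular blobs, then test whether any five of them match the constellation's scale- and rotation-invariant geometry. The pattern coordinates and thresholds below are rough assumptions; real firmware implementations are proprietary.)

    # Illustrative sketch of EURion-style detection: find small circular blobs,
    # then test whether any five of them match the constellation's geometry via
    # sorted pairwise distances. Coordinates and thresholds are assumptions.
    import itertools
    import cv2
    import numpy as np

    # Approximate relative positions of the five EURion circles (unit-free).
    EURION_PATTERN = np.array([
        [0.0, 0.0], [1.0, 0.2], [0.5, 1.0], [1.5, 1.1], [1.0, 2.0],
    ])

    def shape_signature(points: np.ndarray) -> np.ndarray:
        """Sorted pairwise distances, scaled by the largest: invariant to
        translation, rotation, and uniform scaling."""
        d = np.array([np.linalg.norm(a - b)
                      for a, b in itertools.combinations(points, 2)])
        return np.sort(d / d.max())

    REFERENCE = shape_signature(EURION_PATTERN)

    def looks_like_eurion(image_path: str, tolerance: float = 0.05) -> bool:
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=10,
                                   param1=100, param2=20, minRadius=2, maxRadius=8)
        if circles is None:
            return False
        centers = circles[0, :, :2][:12]  # cap candidates to bound the search
        for subset in itertools.combinations(centers, 5):
            if np.allclose(shape_signature(np.array(subset)), REFERENCE,
                           atol=tolerance):
                return True
        return False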

muyuu 2 hours ago | parent | prev | next [-]

i don't see any need for guardrails, other than making the prompter responsible for the output of the bot, particularly when it's predictable

you cannot deliberately use a piece of software to produce an effect that is patently illegal and directly attributable to your usage, and then pretend the software is to blame

watwut 7 hours ago | parent | prev [-]

Grok is publishing the CSAM photos for everyone to see. It is used as a tool for harassment and abuse, literally.

pdpi an hour ago | parent [-]

Sure, and the fact that they haven't voluntarily put guard rails up to stop that is absolutely vile. But my personal definition of "absolutely vile" isn't a valid legal standard. So, the issue is, like I said, how do you come up with a principled approach to making them do it that doesn't have a whole bunch of unintended consequences?

trhway 10 hours ago | parent | prev | next [-]

Internet routers, network cards, computers, OSes, and various application software have no guardrails and are used for all sorts of nefarious things. Why aren't those companies raided?

sirnicolaz 10 hours ago | parent | next [-]

This is like comparing the danger of a machine gun to that of a block of lead.

trhway 8 hours ago | parent [-]

Maybe. We do have a definition of "machine gun" codified in law which clearly separates it from a block of lead. What legally codified definitions are used here to separate Photoshop from Grok in the context of those deepfakes and CSAM?

Without such clear legal definitions, going after Grok while not going after Photoshop is just an act of political pressure.

bootsmann 7 hours ago | parent [-]

Why do you think France doesn’t have such laws that delineate this legal definition?

What you’re implying here is that Musk should be immune from any prosecution simply because he is right wing, which…

bluescrn 7 hours ago | parent | prev | next [-]

They don’t provide a large platform for political speech.

This isn’t about AI or CSAM (Have we seen any other AI companies raided by governments for enabling creation of deepfakes, dangerous misinformation, illegal images, or for flagrant industrial-scale copyright infringement?)

direwolf20 4 hours ago | parent [-]

No, because most of those things aren't illegal, most of those companies have guardrails, and a prosecution requires a much higher standard of evidence than internet shitposting. Only X was stupid enough to make its illegal activity obvious.

protocolture 9 hours ago | parent | prev | next [-]

[flagged]

trothamel 10 hours ago | parent | prev [-]

Don't forget Polaroid in that.

ljsprague 5 hours ago | parent | prev | next [-]

No other "AI" companies released tools that could do the same?

hackinthebochs 4 hours ago | parent [-]

In fact, Gemini could bikinify any image just like Grok. Google added guardrails after all the backlash Grok received.

mekdoonggi 18 minutes ago | parent [-]

And they should face consequences for that, somewhat mitigated by their good faith response.

gulfofamerica a day ago | parent | prev | next [-]

[dead]

cubefox 9 hours ago | parent | prev [-]

> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.

klez 5 hours ago | parent | next [-]

That's why this is an investigation looking for evidence and not a conviction.

This is how it works, at least in civil law countries. If the prosecutor has a reasonable suspicion that a crime is taking place, they send the so-called "judiciary police" to gather evidence. If they find none (or the findings are inconclusive, etc.), the charges are dropped; otherwise they ask the court to go to trial.

On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.

direwolf20 4 hours ago | parent [-]

Note that the raid itself is a punishment. It's normal for them to seize all electronic devices. How is X France supposed to do any business without any electronic devices? And even when charges are dropped, the devices are never returned.

numpad0 4 hours ago | parent | prev | next [-]

Grok does seem to have tons of useless guardrails. Reportedly you can't prompt it directly. But also reportedly, it tends to go for almost nonsensically off-guardrail interpretations of prompts.

scott_w 9 hours ago | parent | prev [-]

Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...

For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.

cubefox 8 hours ago | parent [-]

First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again, and this is still the case:

https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab

The claim that they released a tool with "seemingly no guard-rails" is therefore clearly false. I think what has instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.

scott_w 8 hours ago | parent | next [-]

For more evidence:

https://www.bbc.co.uk/news/articles/cvg1mzlryxeo

Also, X seem to disagree with you and admit that CSAM was being generated:

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

This is because of government pressure (see Ofcom link).

I’d say you’re making yourself look foolish but you seem happy to defend nonces so I’ll not waste my time.

cubefox 5 hours ago | parent [-]

> Also, X seem to disagree with you and admit that CSAM was being generated

That post doesn't contain such an admission; it instead talks about forbidden prompting.

> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.

scott_w 3 hours ago | parent [-]

> That post doesn't contain such an admission, it instead talks about forbidden prompting.

In response to what? If CSAM is not being generated, why aren't X just saying that? Instead they're saying "please don't do it."

> which contradicts your claim that there were no guardrails before.

From the linked post:

> However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards

Which was posted a full week after the initial story broke and after Ofcom started investigative action. So no, it does not contradict my point which was:

> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

As you quoted.

I really can't decide if you're stupid, think I and other readers are stupid, or so dedicated to defending paedophilia that you'll just tell flat lies to everyone reading your comment.

cubefox 27 minutes ago | parent [-]

Keep your accusations to yourself. Grok already didn't generate naked pictures of adults months ago, when I tested it for the first time. Clearly the "additional safeguards" are meant to protect the system against any jailbreaks.

emsign 7 hours ago | parent | prev | next [-]

> First of all, the Guardian is known to be heavily biased again Musk.

Says who? Musk?

7 hours ago | parent | prev | next [-]
[deleted]
Hikikomori an hour ago | parent | prev | next [-]

>First of all, the Guardian is known to be heavily biased again Musk.

Biased against the man asking Epstein which day would be best for the "wildest" party.

jibal 7 hours ago | parent | prev | next [-]

That is only "known" to intellectually dishonest ideologues.

xoac 8 hours ago | parent | prev [-]

boot taste good