exodust 5 hours ago

All because "AI nudes"? Seems heavy-handed, almost like the controversy over naughty images has received a state-sponsored outrage boost for other reasons.

"Shocking Grok images"... really? It's AI. We know AI can make any image. The images are nothing but fake digital paintings that lose all integrity as quickly as they're generated.

Beyond comedic kicks for teenage boys, they're inconsequential for everyone else. But never mind that, hand me a pitchfork and a prefabricated sign and point me to the nearest anti-Grok protest.

beAbU 4 hours ago | parent | next [-]

It has always been illegal and morally reprehensible to create, own, distribute or store sexually explicit material that represents a real person without their consent, regardless of whether they are underage or not.

Grok is a platform that is enabling this en masse. If xAI can't bring in guardrails or limit who can access these capabilities, then they deserve what's coming to them.

GaryBluto 41 minutes ago | parent | next [-]

>It has always been illegal and morally reprehensible to create, own, distribute or store sexually explicit material that represents a real person without their consent, regardless of whether they are underage or not.

Arguably morally reprehensible but it has not always been illegal (and still isn't in many places) if you're talking about images of adults.

actionfromafar 2 hours ago | parent | prev | next [-]

I think you are going a bit too far.

Let's start from the beginning, with "create" and "own":

You're sketching out some nude fanart on a piece of paper. You created that and own that. That has always been illegal?!

(This is apart from my feelings on Mechahitler/Grok, which aren't positive.)

reddalo 2 hours ago | parent | next [-]

You can do _almost_ anything you want in the privacy of your home; but in this case Twitter was actively and directly disseminating the pictures publicly on its platform.

kimixa an hour ago | parent [-]

And profiting from it, though less directly than "$ for illegal images". Even if it weren't behind a paywall (which it mostly is), driving more traffic for more ads for more income is still profiting from illegal imagery.

andrepd 2 hours ago | parent | prev [-]

> You're sketching out some nude fanart on a piece of paper.

Is Twitter a piece of paper on your desk? No, it's not.

actionfromafar an hour ago | parent [-]

Right.

OP had "It has always been illegal and morally reprehensible to create, own, distribute or store "

It would make more sense then to instead say:

"It has always been illegal and morally reprehensible to distribute "

andrepd an hour ago | parent [-]

Again, AI deepfakes are not sketches on a piece of paper. There's a massive difference between drawing your coworker naked on a piece of paper (weird, but certainly not criminal), and going "grok generate a video of my coworker bouncing on my d*ck". Not to mention the latter is generated and stored god knows where, against the consent of the depicted person.

master-lincoln 2 hours ago | parent | prev [-]

In which broken society do you live where this is true? I would say drawing sexually explicit pictures of real persons without their consent and keeping them in your drawer is neither illegal nor morally reprehensible in most of the world.

I am with you on publishing these...

ed_elliott_asc 4 hours ago | parent | prev | next [-]

At my kids' school the children have been using Grok to create pics of other children without clothes on - ChatGPT etc. won't let you do that - Grok needs some controls and X seems unable to implement them itself.

bmelton 22 minutes ago | parent | next [-]

I just tried it out, and I got "Content moderated - try a different idea"

I'm sure that if I cared I could bypass it, but at what level of prompt hacking does the onus shift from the system to the users?

YetAnotherNick 4 hours ago | parent | prev [-]

What would raiding the office achieve in this case apart from just showing off power?

myrmidon 4 hours ago | parent | next [-]

In such a case specifically: Uncover internal communication that shows the company was aware of the problem and ignored it, which presumably affects liability a lot.

ndr 4 hours ago | parent [-]

I wonder what they will find. They seem to have acknowledged working on the problem before.

https://x.com/elonmusk/status/2011432649353511350

mortarion 3 hours ago | parent | prev | next [-]

This is the cyber crime unit. They will exfiltrate any data they want. They will use employee accounts to pivot into the rest of the X network. They don't just go in and grab a couple of papers, laptops and phones. They hook into the network and begin cracking.

stuaxo 4 hours ago | parent | prev | next [-]

Why are you defending X here?

It sounds like they are following due process.

pjc50 4 hours ago | parent | prev | next [-]

Normally getting raided by the police causes people and organizations to change their behavior.

owebmaster 3 hours ago | parent | prev [-]

Enforcing the law is usually an inhibitor for criminals.

actionfromafar 2 hours ago | parent [-]

But, isn't that bad for the criminals?

ImPleadThe5th 4 hours ago | parent | prev | next [-]

How about you come back when your daughter has a fake AI nude passed around school.

wtcactus 3 hours ago | parent | next [-]

So, when kids were doing this in Photoshop for the last 3 decades (I was in high school and this already existed), you would have been just fine with the tool being used to do it, and with the boys at the school?

Is that your argument? Did you ever expect the government to go after Adobe for "enabling" this?

sam-cop-vimes 2 hours ago | parent [-]

Not the same - the barrier to entry was too high. Most people don't have the skills to edit photos using Photoshop. Grok enabled this to happen at scale for users who are complete non-techies. With Grok, anyone who could type a half-coherent sentence in English could generate and disseminate these images.

Edit: clarified the last sentence

wtcactus 2 hours ago | parent [-]

Sorry, but barrier to entry doesn't seem like a very good legal excuse. Goes in the same direction as NY attempts to ban 3D printing because - supposedly - it enables people to more easily make guns.

This is a political action by the French... slowly losing their relevance, even inside the EU. Nothing else.

janalsncm 2 hours ago | parent [-]

I see what you're getting at. You're trying to draw a moral equivalence between Photoshop and Grok. Where that falls flat for me is the distribution aspect: Photoshop would not also publish and broadcast the illegal material.

But police don't care about moral equivalence. They care about the law. For the legal details we would need to consult French law. But I assume it is illegal to create and distribute the images. Heck, it's probably against Twitter's TOS too, so by all rights the Grok account should be banned.

> This is a political action by the French

Maybe. They probably don’t like a foreign company coming in, violating their children, and getting away with it. But what Twitter did was so far out of line that I’d be shocked if French companies weren’t treated the same way.

wtcactus 2 hours ago | parent [-]

> But I assume it is illegal to create and distribute the images.

I very much expect it to be illegal to distribute the images, of course (creating them, not so much).

But the illegality, in a sane world (and until 5 minutes ago), used to be attached to the person actually distributing them. If some student distributes fake sexualized images of a classmate, I very much expect the perpetrator to be punished by the law (and by the school, since we are at it).

manfre an hour ago | parent [-]

Creating, possessing, and distributing CSAM is illegal in the US and many other countries. Can you explain why you think it should be legal to create something that is illegal to possess or distribute?

ljsprague 4 hours ago | parent | prev | next [-]

Is that what it would have taken for you to support the Patriot Act?

joe_mamba 4 hours ago | parent | prev | next [-]

In your hypothetical scenario, why aren't the school kids making and distributing fake nudes of his daughter the ones getting in trouble?

Have we outsourced all accountability for the crimes of humans to AI now?

ImPleadThe5th 4 hours ago | parent | next [-]

It's not hypothetical. And in fact the girl who was being targeted was expelled, not the boys who did it [1].

Those boys absolutely should be held accountable. But I also don't think that Grok should be able to quickly and easily generate fake revenge porn for minors.

[1] https://www.nbcnewyork.com/news/national-international/girl-...

joe_mamba 4 hours ago | parent | next [-]

>And in fact the girl who was being targeted was expelled, not the boys who did it [1].

And the AI is at fault for this sentencing, not the school authorities/prosecutors/judges dishing justice? WTF.

How is this an AI problem and not a legal system problem?

pseudony 3 hours ago | parent | next [-]

You can’t “undo” a school shooting, for instance, so we tend to have gun laws.

You can’t just “undo” some girl being harassed by AI generated nude photos of her, so we…

Yes, we should have some protections or restrictions on what you can do.

You may not understand it, either because you aren’t a parent or maybe just not emotionally equipped to understand how serious this actually can be, but your lack of comprehension does not render it a non-issue.

Having schools play whack-a-mole after the photos are shared around is not a valid strategy. Never mind that schools primarily engage in teaching, not in investigation.

As AI-generated content gets less and less distinguishable from reality, these incidents will have far worse consequences and putting such power in the hands of adolescents who demonstrably don’t have sound judgment (hence why they lack many other rights that adults have) is not something most parents are comfortable with - and I doubt you’ll find many teachers, psychiatrists and so on who would support your approach either.

joe_mamba 2 hours ago | parent [-]

>You can’t just “undo” some girl being harassed by AI generated nude photos of her, so we…

No, but if you send those people who made and distributed the AI nude of her to jail, these problems will virtually disappear overnight, because going to jail is a hugely effective deterrent for most people.

But if you don't directly prosecute the people doing it, and instead just ban Grok AI, then those people will just use other AI tools, outside of US jurisdiction, to do the same things and the problem persists.

And the issue keeps persisting, because nobody ever goes to jail. Everyone gets a slap on the wrist and deflects accountability by blaming the AI, so more people end up getting hurt because those who do the evil are never held directly accountable.

Obviously Grok shouldn't be legally allowed to generate fake nudes of actual kids, but given that such safeguards can and will be bypassed, that doesn't absolve the humans who knowingly break the law to achieve a nefarious goal.

pseudony 2 hours ago | parent | next [-]

That’s just not how the world works.

Youths lack judgment, so they can't vote, drink, drive, have sex with or consent to adults.

A 14-year-old can't be relied on to understand the consequences of making nudes of some girl.

Beyond that, we regulate guns, speed limits and more according to principles like “your right to swing your fist ends at my nose”.

We do that not only because shoving kids into jails is something we want to avoid, but because regulating at the source of the problem is both more feasible AND heads off a lot of tragedy.

And again, you fail to acknowledge the investigative burden you put on society to discover who originated the photo after the fact, and the trauma to the victim.

If none of that computes for you, then I don’t know what to say except I don’t place the right to generate saucy images highly enough to swarm my already overworked police with requests to investigate who generated fake underage porn.

joe_mamba an hour ago | parent [-]

>A 14-year-old can't be relied on to understand the consequences of making nudes of some girl.

Teenagers do stupid shit all the time. But they still get prosecuted or convicted when they commit crimes. They go to juvie or their parents get punished. Being 14 is not a get-out-of-jail-free card.

vanviegen 37 minutes ago | parent [-]

In that case, why not allow teenagers to carry firearms as well? Sure, some will die, others will go to jail, but at least that ought to teach the rest of them a lesson, right?

tecoholic 2 hours ago | parent | prev [-]

The way you are arguing makes it really hard to understand what you are trying to say. I am guessing you are upset that a non-human entity is being used as a bogeyman while the actual people go free? But your argumentation reads like someone who is very upset that an AI producing CSAM is being persecuted. I won't be surprised if people think you are defending CSAM.

In good faith, a few things - AI-generated imagery and Photoshop are not the same. If someone could mail Adobe a photo of a kid and ask for a modified one, and Adobe sent it back, then yes, Adobe's offices would be raided. That's the equivalent here. It's not a tool. It's a service. You keep saying AI without taking a moment to give the "intelligence" part any thought.

Yes, powerful people are always going to get by, as you say. And the laws & judicial system are for the masses. There is definitely unfairness in it. But that doesn’t change anything here - this is a separate conversation.

"If not Grok then someone else will do it" is a defeatist argument that can only mean it can't be controlled so don't bother. This is the point where you come across as a CSAM defender. Governments will/should do whatever they can to make society safe, even if it means playing whack-a-mole. Arguing that's "not efficient" is frankly confusing. The judicial system is about fairness, not efficiency.

Frankly, I think you understand all of this and maybe got tunnel-visioned in your anger at the unfairness of people scapegoating technology for its failings. That's the last thing I want to point out: raiding an office is taking action against the powerful people who build systems without accountability. They are not going to sit the model down and give it a talking-to. The intention is to identify the responsible party that allows this to happen.

sam-cop-vimes 2 hours ago | parent | prev | next [-]

You cannot offload all problems to the legal system. It does not have the capacity. Legal issues take time to resolve, and the victims have to have the necessary resources to pursue legal action. Grok enabled abuse at scale, which no legal system in the world can keep up with. It needs no explanation that generating nudes of people without their consent is a form of abuse. And if the legal system cannot keep up with protecting victims, the problem has to be dealt with at source.

joe_mamba 2 hours ago | parent [-]

>You cannot offload all problems to the legal system. It does not have the capacity.

You definitely can. You don't have to prosecute and send a million people to jail for making and distributing fake AI nudes, you just have to send a couple, and then the problem virtually goes away.

People underestimate how effective direct personal accountability is when it comes with harsh consequences like jail time. That's how you fix all issues in society and enforce law-abiding behavior. You make the cost of the crime greater than the gains from it, then crucify some people in public to set an example for everyone else.

Do people like doing and paying their taxes? No, but they do it anyway. Why is that? Because THEY KNOW that otherwise they go to jail. Obviously the IRS and legal system don't have the capacity to send the whole country to jail if they were to stop paying taxes, but they send enough to jail in order for the majority of the population to not risk it and follow the law.

It's really that simple.

TheOtherHobbes 2 hours ago | parent | next [-]

None of what you've said is true. Deterrence is known to have a very limited effect on behaviour.

In this case, it's far simpler to prosecute the source.

joe_mamba 2 hours ago | parent [-]

>None of what you've said is true.

Everything I said is true.

>Deterrence is known to have a very limited effect on behaviour.

It is insanely effective when actually enforced. It's not effective when the goal is to make it seem ineffective so that people can evade the system.

>In this case, it's far simpler to prosecute the source.

The "source" is a tool that tomorrow can be in Russia or China, where you can't prosecute.

panda-giddiness 25 minutes ago | parent | prev | next [-]

> People underestimate how effective direct personal accountability is when it comes with harsh consequences like jail time. That's how you fix all issues in society and enforce law-abiding behavior. You make the cost of the crime greater than the gains from it, then crucify some people in public to set an example for everyone else

And yet criminals still commit crimes. Obviously jail is not the ultimate deterrent you think it is. Nobody commits crimes with the expectation that they'll get caught, and if you only "crucify some people", then most criminals are going to (rightfully) assume that they'll be one of the lucky ones.

ljm an hour ago | parent | prev | next [-]

> You don't have to prosecute and send a million people to jail for making and distributing fake AI nudes, you just have to send a couple, and then the problem virtually goes away.

I genuinely cannot tell if you are being comically naïve or extremely obtuse here. You need only look at the world around you to see that this does not, and never will, happen.

As another commenter said, this argument is presenting itself as apologia for CSAM and you come across as a defender of the right for a business to create and publish it. I assume you don't actually believe that, but the points you made are compatible.

A platform is as responsible for providing the services used to create illegal material as it is for distributing said illegal material. That it happens to be an AI that generates the imagery is not relevant - X and Grok are still the two services responsible for producing and hosting it. Therefore, the accountability falls on those businesses and their leadership just as much as it does on the individual user, because ultimately they are facilitating it.

To compare to other situations: if a paedophile ring is discovered on the dark web, the FBI doesn't just arrest the individuals involved and leave the website open. It takes the entire thing down including those operating it, even if they themselves were simply providing the server and not partaking in the content.

everettp an hour ago | parent | prev [-]

Actually research shows people regularly overestimate how effective deterrence-based punishment is. Particularly for children and teenagers. How many 14-year-olds do you really think are getting prosecuted and sent to jail for asking Grok to generate a nude of their classmate..? How many 14-year-olds are giving serious thought to their long-term future in the moment they are typing a prompt into Twitter..? Your argument is akin to suggesting that carmakers should sell teenagers cars to drive, because the teenager can be punished if they cause an accident.

_pdp_ an hour ago | parent | prev | next [-]

You know there is no such thing as a world police.

If the perpetrator is in another country/jurisdiction, it is virtually impossible to prosecute them, let alone sentence them.

It is 100% a regulatory problem in this case. You just cannot allow this content to be generated and distributed in the public domain by anonymous users. It has nothing to do with free speech, but with civility and a common understanding of what is morally wrong/right.

Obviously you cannot prevent this in private forums unless it is made illegal which is a completely different problem that requires a very different solution.

anonymous908213 4 hours ago | parent | prev | next [-]

Have you considered that it is possible for two things to be problems?

joe_mamba 3 hours ago | parent [-]

No, because the comment is in bad faith. It introduced an unrelated issue (poor sentencing from authorities) as an argument about the initial issue we are discussing (AI nudes), derailing the conversation, and then used the new issue it itself introduced to legitimize a poor argument, when one has nothing to do with the other and both can be good/bad independently of each other.

I don't accept this as good-faith argumentation, nor do the HN rules.

Phelinofist 3 hours ago | parent | next [-]

You are the only one commenting in bad faith, by refusing to understand/acknowledge that the people using Grok to create such pictures AND Grok itself are both part of the issue. It should not be possible to create nudes of minors via Grok. Full stop.

joe_mamba 3 hours ago | parent [-]

>You are the only one commenting in bad faith

For disagreeing with the injection of off-topic hypothetical scenarios as an argument derailing the main topic?

>It should not be possible to create nudes of minors via Grok.

I agree with THIS part; I don't agree with the part where the main blame is put on the AI instead of on the people using it. That's not a bad-faith argument, it's just my PoV.

If Grok disappears tomorrow, there will be other AIs from other parts of the world outside of US/EU jurisdiction, that will do the same since the cat is out of the bag and the technical barrier to entry is dropping fast.

Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?

pka an hour ago | parent | next [-]

> Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?

Both, obviously. For example, you go after drug distributors and drug producers. Both approaches are effective in different ways; I am not sure why you are having such trouble understanding this.

TheOtherHobbes 2 hours ago | parent | prev [-]

This is textbook whataboutery. The law is perfectly clear on this, and Musk is liable.

Other AIs have guardrails. If Musk chooses not to implement them, that's his personal irresponsibility.

Bluescreenbuddy an hour ago | parent | prev [-]

Then log off.

lukan 4 hours ago | parent | prev [-]

Grok made the pictures.

The school authorities messed up.

Both are accountable.

joe_mamba 3 hours ago | parent [-]

>Grok made the pictures.

Correction: kids made the pictures. Using Grok as the tool.

If kids were to "git gud" at photoshop and use that to make nudes, would you arrest Adobe?

defrost 3 hours ago | parent | next [-]

In the spirit of shitty "If's ..."

If kids ask a newspaper vendor for cigarettes and he provides them .. that's a no-no.

If kids ask a newspaper vendor for nudes and he provides them .. that's a no-no.

If kids ask Grok for CSAM and it provides them .. then ?

joe_mamba 3 hours ago | parent | next [-]

The existence and creation of cigarettes and adult nude magazines is fully legal; only their sale to kids is illegal. If kids try to illegally obtain those LEGAL items, it doesn't make the existence of those items illegal, just the act of selling to them.

Meanwhile, the existence/creation of CSAM of actual people isn't legal for anyone, no matter the age.

notachatbot123 3 hours ago | parent | next [-]

Grok created those images.

pasc1878 2 hours ago | parent | prev [-]

And when the magazines get sold, who is breaking the law and gets convicted? It is not the children but the shop supplying them.

So when Grok provides the illegal pictures then by the same logic it is Grok that is breaking the law.

abc123abc123 2 hours ago | parent | prev [-]

If parents or school let children play with explosives or do drugs and they get hurt, that's a no-no.

If parents or school let children roam the internet unsupervised... then?

defrost 2 hours ago | parent | next [-]

> If parents or school let children play with explosives or do drugs

The explosive sellers that provide explosives to someone without a certification (child or adult) get in trouble (in this part of the world) .. regardless of whether someone gets hurt (although that's an escalation).

If sellers provide ExPo to certified parents and children get access .. that's on the parents.

In that analogy of yours, if Grok provided ExPo or CSAM to children .. that's a Grok problem.

(Ditto drugs).

It's on the one providing to children, i.e. Grok.

actionfromafar 2 hours ago | parent | prev [-]

If MechaGrok sells explosives to children, that's a go-go?

tene80i 3 hours ago | parent | prev | next [-]

You're suggesting an inconsistency where there isn't one. A country can ban guns and allow rope, even though both can kill.

joe_mamba 3 hours ago | parent [-]

> A country can ban guns and allow rope, even though both can kill.

That's actually a good argument. And that's how the UK ended up banning not just guns, but all sorts of swords, machetes and knives; meanwhile the violent crime rates have not dropped.

So maybe dangerous knives are not the problem, but the people using them to kill other people. So then where do we draw the line between lethal weapons and crime? At which cutting/shooting instruments?

Same with software tools, which keep getting more powerful with time, lowering the bar to entry for generating nudes of people. Where do we draw the line on which tools are responsible for that instead of the humans using them for it?

tene80i 2 hours ago | parent | next [-]

You’re absolutely right that it is a difficult question where to draw the line. Different countries will do it differently according to their devotion to individual freedoms vs communal welfare.

The knife (as opposed to sword) example is interesting. In the U.K. you’re not allowed to sell them to children. We recognise that there is individual responsibility at play, and children might not be responsible enough to buy them, given the possible harms. Does this totally solve their use in violent crime? No. But if your alternative is “it’s up to the individuals to be responsible”, well, that clearly doesn’t work, because some people are not responsible. At a certain point, if your job is to reduce harm in the population, you look for where you can have a greater impact than just hoping every individual follows the law, because they clearly don’t. And you try things even if they don’t totally solve the problem.

And indeed, the same problem in software.

As for the violent crime rates in the U.K., I don’t have those stats to hand. But murder is at a 50 year low. And since our post-Dunblane gun laws, we haven’t had any school shootings. Most Britons are happy with that bargain.

jen20 3 hours ago | parent | prev [-]

> meanwhile the violent crime rates have not dropped.

The rate of school shootings has dropped from one (before the implementation of recommendations from the Cullen report) to zero (subsequently). Zero in 29 years - success by any measure.

If you choose to look at _other_ types of violent crime, why would banning handguns have any effect?

> Where do we draw the line on which tools are responsible for that instead of the humans using them for it?

You can ban tools which enable bad outcomes without sufficient upside, while also holding the people who use them to account.

lukan 3 hours ago | parent | prev [-]

"Correction: kids made the pictures. Using Grok as the tool."

No. That is not how AI works nowadays. Kids told the tool what they wanted, and the tool understood and could have refused, like all the other models - but instead it delivered. And it could only do so because it was specifically trained for that.

"If kids were to "git gud" at photoshop "

And what is that supposed to mean?

Adobe makes general purpose tools as far as I know.

joe_mamba 3 hours ago | parent [-]

You're beating around the bush, not answering the main question.

Anyone skilled at photoshop can make fake nudes as good as or even better than AI, including kids (we used it to make fun fakes of teachers in embarrassing situations back in the mid-00s and distributed them via MSN Messenger). So why is only the AI tool to blame for what the users do, but not Photoshop, if both tools can be used to do the same thing?

People can now 3D print guns at home, or at least parts that when assembled make a functioning firearm. Are 3D printer makers now to blame if someone gets killed with a 3D printed gun?

Where do we draw the line for tools in terms of effort required, between when the tool bears the responsibility and when it's just the human using the tool to do illegal things? This is the answer I'm looking for, and I don't think there is an easy one, yet people here are too quick to pin blame based on their emotional responses, subjective biases and world views on the matter and the parties involved.

cbolton an hour ago | parent | next [-]

> Anyone skilled at photoshop

So let's say there are two ways to do something illegal. The first requires skills from the perpetrator, is tricky to regulate, and is generally speaking not a widespread issue in practice. The second way is a no brainer even for young children to use, is easy to regulate, and is becoming a huge issue in practice. Then it makes sense to regulate only the second.

> People can now 3D print guns at home, or at least parts that when assembled make a functioning firearm. Are 3D printer makers now to blame if someone gets killed with a 3D printed gun?

Tricky question, but a more accurate comparison would be with a company that runs a service to 3D print guns (= generating the image) and shoot with them in the street (= publishing on X) automatically for you and keeps accepting illegal requests while the competitors have no issue blocking them.

> Where do we draw the line for tools in terms of effort required, between when the tool bears the responsibility and when it's just the human using the tool to do illegal things?

That's also a tricky question, but generally you don't really need to know precisely where to draw the line. It suffices to know that something is definitely on the wrong side of the line, like X here.

szmarczak 2 hours ago | parent | prev [-]

A 3D printer needs a blueprint. AI has all the blueprints built in. It can generalize, so the blueprints cannot simply be erased; however, at least what we can do is forbid the generation of adult content. Harm should be limited. Photoshop requires skill and manual work; that's the difference. In the end, yes, people are the ones who are responsible for their actions. We shouldn't let kids (or anyone else) harm others with little to no effort. Let's be reasonable.

whywhywhywhy 3 hours ago | parent | prev | next [-]

This happens all the time with abusive children in schools; they're rarely punished at all, even for extreme abuse and violence.

jgalt212 41 minutes ago | parent | prev [-]

The police got it right.

> When the sheriff's department looked into the case, they took the opposite actions. They charged two of the boys who'd been accused of sharing explicit images — and not the girl.

anonymous908213 4 hours ago | parent | prev | next [-]

Punishing kids after the fact does not stop the damage from occurring. Nothing can stop the damage that has already occurred, but if you stop the source of the nudes, you can stop future damage from occurring to even more girls.

joe_mamba 4 hours ago | parent [-]

>Punishing kids after the fact does not stop the damage from occurring.

Banning AI doesn't stop the damage from occurring. Bullies at school/college have been harassing their victims, often to suicide for decades/centuries before AI.

anonymous908213 4 hours ago | parent | next [-]

I'm sorry, did the article or anyone in this subthread suggest banning AI? That seems like quite a non-sequitur. I'm pretty sure the idea is to put a content filter on an online platform for one very specific kind of already-illegal content (modified nude images of real people, especially children), which is a far cry from a ban. Nothing can stop local diffusion or Photoshop, of course, but the hardware and technical barriers are so much higher that curtailing Grok would probably cut off 99% or more of the problem material. I suppose you'll tell me if any solution is not 100% effective we should do nothing and embrace anarchy?

Edit for the addition of the line about bullying: "Bullying has always happened, therefore we should allow new forms of even worse bullying to flourish freely, even though I readily acknowledge that it can lead to victims committing suicide" is a bizarre and self-contradictory take. I don't know what point you think you're making.

owebmaster 3 hours ago | parent | prev | next [-]

You are defending child pornography en masse and for profit? Is it a new low for HN?

direwolf20 3 hours ago | parent | next [-]

Y Combinator supports doing anything that makes money

joe_mamba 3 hours ago | parent | prev [-]

I'm not defending CP, WTF is wrong with you? You're just hallucinating/making stuff up in bad faith.

wizzwizz4 4 hours ago | parent | prev | next [-]

Child sexual abuse material is literally in the training sets. Saying "banning AI" as though it's all the same thing, and all morally-neutral, is disingenuous. (Yes, a system with both nudity and children in its dataset might still be able to produce such images – and there are important discussions to be had about that – but giving xAI the benefit of equivocation here is an act of malice.)

expedition32 2 hours ago | parent | prev [-]

Nobody wants to ban AI; they want to regulate it. Which is what we do with all new technology.

To paraphrase: "your tech bros were so preoccupied with whether or not they could, they never considered if they should".

stuaxo 4 hours ago | parent | prev | next [-]

They may well get in trouble, but that takes time; in the meantime the photos will have been seen by most kids in school, plus you might get a year of bullying.

Education might be so disrupted you have to change schools.

saubeidl 3 hours ago | parent | prev [-]

This is accountability for the crimes of humans.

The crime is creating a system that lets schoolboys create fake nudes of other minors.

You don't just get to build a CSAM-generator and then be like "well I never intended for it to be used...".

The humans running a company are liable for the product that their company builds, easy as that.

joe_mamba 3 hours ago | parent [-]

>The crime is creating a system that lets schoolboys create fake nudes of other minors.

So like Photoshop? Do you want to raid Adobe's HQ?

saubeidl 3 hours ago | parent [-]

Does Photoshop have a "let me jerk off to this minor" button?

joe_mamba an hour ago | parent [-]

Why do you want to jerk off to a minor? Sounds like you should get a visit from the police for asking a tool to do that for you.

If I ask you to go kill someone and you do it, in the eyes of the law I am just as guilty as you even though I never actually touched the person.

If you ask for CP, you're still just as guilty even if you're not the one making it.

saubeidl an hour ago | parent [-]

I don't want to.

But I don't want others to be easily able to either.

In your scenario, yes, you are guilty as well. But so is the one that actually did the deed, i.e. Grok in this case.

You're arguing my point for me. Just because you do something for someone else doesn't mean you're absolved of responsibility.

You can't build a tool with a "create child porn" button and then expect not to get into trouble for helping people make child porn.

BlackFly 3 hours ago | parent | prev [-]

I really find this kind of appeal quite odious. God forbid that we expect fathers to have empathy for their sons, sisters, brothers, spouses, mothers, fathers, uncles, aunts, etc., or dare we hope that they might have empathy for friends or even strangers? It's like an appeal to hypocrisy or something. Sure, I know such people exist, but it feels like throwing so many people under the bus just to (probably) fail to convince someone of something by appealing to fathers' emotional overprotectiveness of their daughters.

You should want to protect all of the people in your life from such a thing or nobody.

Leynos 30 minutes ago | parent | prev | next [-]

Yes. All because of sexual harassment and images depicting child sexual abuse.

Bluescreenbuddy an hour ago | parent | prev | next [-]

If you bothered to do any research instead of downplaying it, you'd know why. It's embarrassing you even typed that out.

sapphicsnail 4 hours ago | parent | prev | next [-]

So making CSAM of real people is ok if an AI is involved?

athrowaway3z 4 hours ago | parent | prev | next [-]

You're defending X/Grok as if it's a public social platform.

It is a privately controlled public-facing group chat. Being a chat-medium does not grant you the same rights as being a person. France isn't America.

If a company operates to the detriment and against the values of a nation, e.g. not paying their taxes or littering in the environment, the nation will ask them to change their behavior.

If there is a conspiracy of contempt, at some point things escalate.

saubeidl 3 hours ago | parent | prev | next [-]

[dead]

joe_mamba 5 hours ago | parent | prev [-]

I'm in the same boat. We literally have pedos and child abusers in the Epstein files talking openly about doing despicable things to women, kids and even babies, while authorities are focused on criminalizing the generation of images of fake minors that don't exist - something any other LLM platform can do if asked.

Plus, how do you even judge the age of AI-generated fake people to say it's CP? Reminds me of when UK activists were claiming Grok's anime girl avatar was a minor and deserved to be considered CP, when she had massive tits that no kid has. So how much of this is just a political witch-hunt looking for any reason to justify itself?

n4r9 4 hours ago | parent | next [-]

You want the French authorities to focus on the Epstein files to the exclusion of all other corporate misbehaviour?

Also, it seems pretty likely that Musk is tangled up with the Epstein shit. First Musk claimed he turned down an offer to go to the island. Now it turns out Musk repeatedly sought to visit, including wanting to know when the "wildest" party was happening, after Epstein was already known as a child sex abuser. Musk claimed that Epstein had never been given a tour of SpaceX, but it turns out he had one in 2013. It's the classic narcissistic "lie for as long as possible" behaviour. Will be interesting to see what happens as more is revealed.

joe_mamba 3 hours ago | parent [-]

>You want the French authorities to focus on the Epstein files to the exclusion of all other corporate misbehaviour?

No, I said no such thing. What I said was that the resources of authorities are a finite pie. If most of it goes towards petty stuff like corporate misbehavior that hurts nobody, there won't be enough for grave crimes like actual child abuse that actually hurts real people.

Same as how police won't bother with your stolen phone/bike because they have bigger crimes to catch. I'm asking for the same logic to be applied here.

n4r9 3 hours ago | parent | next [-]

There's no indication that this investigation would draw resources away from investigating the Epstein files. It's happening in France, for starters, whilst the vast majority of Epstein's crimes appear to have happened in US territories. Speaking about "the authorities" as if they're a unified global entity sounds a little conspiratorial.

watwut 2 hours ago | parent | prev [-]

> If most of it goes towards petty stuff like corporate misbehavior that hurts nobody, there won't be enough for grave crimes like actual child abuse that actually hurts real people.

1.) That is not how it works, even if we ignore the fact that France is not the USA.

2.) Lack of resources was not the issue with the Epstein prosecution. The prosecutor was literally told not to investigate by her superiors, who were trying to stop the case. She was told she was insubordinate for doing it. Acosta giving Epstein a sweetheart deal, or seeking to stop the prosecutor, was not a resources issue.

It is a "billionaires (Thiel, Musk, Gates), politicians (Clinton, Lutnick) and media darlings (Summers, Krauss and the rest of the sexism-is-totally-not-a-thing-anymore crowd, literally partying with Epstein) are to be protected at all costs" issue. Even now, people implicated in the Epstein files are still getting influential positions, with the explicit argument that "it would be cancel culture to not give these people more influence".

amelius 4 hours ago | parent | prev | next [-]

I think the reasoning is that the AI contributes to more Epsteins. In some way.

terminalshort 3 hours ago | parent | next [-]

That isn't reasoning, it's wild speculation

amelius 2 hours ago | parent [-]

I seem to remember there was research behind this, but I'm not sure.

joe_mamba 4 hours ago | parent | prev [-]

How?

That's like the 1993 moral panic that video games like Doom cause mass shootings, or the 1980s mass panic that metal music causes satanism, or the 1950s moral panic that superhero comic book violence leads to juvenile delinquency. Politicians are constantly looking for an external made-up enemy to divert attention from the real problems.

People like Epstein and mass woman/child exploitation have existed for thousands of years in the past, and will exist for thousands of years in the future. It's part of the nature of the rich and powerful to execute on their deranged fetishes; it's been documented in writing since at least the Roman and Ottoman empires.

Hell, I can guarantee you there are other Epsteins operating in the wild right now that we haven't heard of (yet); it's not like he was in any way unique. I can also guarantee you that 1 in 5-10 normal-looking people you meet daily on the street have similar deranged desires to the guests on Epstein's island but can't execute on them because they're not rich and influential enough to get away with it, but they'd do it if they could.

KaiserPro 4 hours ago | parent | next [-]

> That's like the 1993 moral panic that video games like Doom cause mass shootings,

Apart from the fact that Doom wasn't producing illegal content.

The point is that Grok is generating illegal content for those jurisdictions. In France you can't generate CSAM; in the UK you can't distribute CSAM. Those are actual laws with legal tests; the images don't need to be of actual people, they just need to depict _children_ to be illegal.

Moral panics require new laws to enforce, generally. This is just enforcing already existing laws.

Moreover, had it been any other site, it would have been totally shut down by now and the servers impounded. It's only because Musk is close to Trump and rich that he's escaped the fate that you or I would have had if we'd done the same.

joe_mamba 3 hours ago | parent [-]

>Apart from the fact that Doom wasn't producing illegal content.

Sure, but where's the proof that Grok is actually producing illegal content? I searched for news sources, but they're all just parroting empty accusations, not concrete documented cases.

pasc1878 2 hours ago | parent | next [-]

See https://www.bbc.co.uk/news/articles/cvg1mzlryxeo

Note that IWF is not a random charity it works with the Police on these matters.

I found this as the first item in a Kagi search - perhaps you should try non-AI searches.

KaiserPro 2 hours ago | parent | prev [-]

> but they're all just parroting empty accusations, not concrete documented cases.

In the UK it is illegal to create, distribute and store CSAM. A news site printing a CSAM photo would be legally up the shitter.

However, the IWF, who are tasked with detecting this stuff, have claimed to have found evidence of it, along with multiple other sources. Ofcom, who are nominally supposed to police this, have an open investigation, as do the Irish police.

The point is, law has a higher threshold of proof than news, which takes time. If there is enough evidence, then a court case (or other instrument) will be invoked.

amelius 4 hours ago | parent | prev | next [-]

Another line of reasoning is that with more fake CP it is more difficult to research the real CP, hunt down the perpetrators, and consequently save children.

joe_mamba 4 hours ago | parent [-]

Oh yeah, because the main reason why Epstein and his guests got away with it for so long is that there was so much low-hanging CP out there confusing authorities and prosecutors, not the corruption, cronyism and political protection they enjoyed at the highest levels of government.

Do you guys even hear yourselves?

amelius 4 hours ago | parent [-]

But how about the "1 in 5-10 normal-looking people you meet daily on the street have similar deranged desires to the guests on Epstein's island but can't execute on them because they're not rich and influential enough to get away with it, but they'd do it if they could."

Some of those might still try.

joe_mamba 3 hours ago | parent [-]

>Some of those might still try.

And what does AI have to do with this? Haven't child predators existed before AI?

Where's the proof that AI produces more child predators?

You're just going in circles without any arguments.

amelius 3 hours ago | parent [-]

It has to do with AI because:

> Another line of reasoning is that with more fake CP it is more difficult to research the real CP, hunt down the perpetrators, and consequently save children.

(own quote)

Yes, the predators existed before AI, but also:

> I think the reasoning is that the AI contributes to more offenders (edited).

(own quote, edited)

To be clear, I don't think this line of reasoning is entirely convincing, but apparently some people do.

watwut 2 hours ago | parent | prev [-]

No, 20% of the population is not seeking to abuse children or teens. If you think so, you are moving in weird circles. In fact, what we also have are people who noped out of the Epstein circle or even openly criticized it for years.

Also, framing the issue of sexual abuse by untouchable people as the same as the superhero comic issue (which itself was not just about superhero comics, and you should know it) is spectacularly bad faith.

Yes, there were always people who were stealing, abusing, murdering for their own gain and fun. That is not an argument for why we should accept and support it as the normalized state of the world. It is a good reason to prevent people from becoming too powerful and to build accountable institutions able to catch and punish them.

ilogik 4 hours ago | parent | prev | next [-]

The UK is also opening investigations into the Epstein stuff.

https://www.reuters.com/world/uk/starmers-government-aids-po...

Unlike the US administration, which seems to be fine with what Epstein and X are doing.

joe_mamba 4 hours ago | parent | next [-]

Is the UK investigating them the way they investigated Prince Andrew and the infamous grooming gangs?

owebmaster 3 hours ago | parent | next [-]

I have never seen someone put so much effort to defend child pornography.

SiempreViernes 4 hours ago | parent | prev [-]

What's this comment about? Do you think no other CSAM distribution should be investigated until the stuff in Epstein files is sorted?

GordonS 4 hours ago | parent | prev [-]

Except Starmer is making sure that the "investigation" is hobbled - anything deemed important to "national security" will be excluded!

The UK's "investigation" is a farce.

owebmaster 3 hours ago | parent | prev | next [-]

The same guy responsible for creating the child porn you are defending is also on the Epstein list. Also, don't abbreviate child pornography; it shows you have a side on this.

cess11 4 hours ago | parent | prev [-]

"Grok" is part of the Epstein network, connected through Elon Musk.