SlinkyOnStairs 4 hours ago

"Fun" bonus fact: This isn't the first time Sama (the outsourcing company) has had these problems.

OpenAI had them classify CSAM, so Sama fired them as a client back in 2022. https://time.com/6247678/openai-chatgpt-kenya-workers/

We're 4 years on, 3 years since that report broke. Not a single thing has improved about how tech companies operate.

prepend 3 hours ago | parent | next [-]

How else do you want companies to remove and prevent CSAM? It seems like you must have some human involvement to train and monitor.

It’s a terrible job, I wouldn’t want to do it, but someone needs to. Perhaps one day, AI will be accurate enough to not need it, but even then you need someone to process complaints and waivers (like someone’s home photos being inaccurately flagged).

SlinkyOnStairs 3 hours ago | parent | next [-]

> How else do you want companies to remove and prevent CSAM?

Different situation.

Facebook has to do CSAM moderation because it's a publishing platform. People will post CSAM on facebook, so they must do moderation.

And "just don't have Facebook" isn't a solution, because every publication of any sort has to deal with this problem. Any newspaper accepting mail has it too, albeit on a much smaller scale: people have been nailing obscene things to bulletin boards for all of recorded history.

---

In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.

---

This is not to say that genuine content flagging firms have no reason to curate such data & build tools to automatically flag content before human moderators have to. (But then they also shouldn't be outsourcing this and traumatizing contract workers for $2-3 an hour)

But OpenAI is not such a firm. It's a general AI company.

GrinningFool 2 hours ago | parent | next [-]

> traumatizing contract workers for $2-3 an hour)

Is there an hourly rate at which this should be acceptable?

arw0n 43 minutes ago | parent | next [-]

There is labor that is necessary for our societies to function, but a direct threat to the people doing the work. Someone has to do it, and it should be seen as a great service to society and rewarded accordingly. In a just world, we would pay a significant premium for work that threatens health; in the one we currently live in, we use the threat of worse harm instead.

SlinkyOnStairs 2 hours ago | parent | prev | next [-]

There's no dollar amount, but proper support during and after employment is a minimum, and a large paycheque will both offset some of the human cost and make it easier to push people to quit the job before they've been doing it for too long.

The current support systems for police in this subject are already insufficient. Facebook's treatment of their moderation staff is abhorrent. The point of including the pay figure is to further illustrate just how damning this subcontracting practice is.

bonesss an hour ago | parent | prev | next [-]

We have coal miners destroying their bodies and lungs, cobalt mining slavery, cocoa child labour and de facto slavery, sex workers, CPS investigators, first responders, and doctors with high rates of suicide…

Not only is there an acceptable market rate for trauma, it’s sometimes competitive and requires licensing.

genewitch 2 hours ago | parent | prev [-]

Emergency Department^ doctors, what do they make? give people who have to review the worst humanity has to offer and pay them that. and while we're at it, ambulance personnel should get a huge pay bump. Take it from nurses' pay.

^ i originally said "triage doctors" but i meant the resident ER doc.

jdiff 2 hours ago | parent | next [-]

Why take from other workers when it can be siphoned from upper management and shareholders?

genewitch an hour ago | parent [-]

you're right, it's a personal failing that i must snip at nurses whenever the word appears in my head. Apologies.

harvey9 42 minutes ago | parent | prev [-]

ER triage is usually done by a nurse, at least in England.

deaux 2 hours ago | parent | prev | next [-]

> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

This is of course incredibly illegal, but megacorps (by valuation) and oligarchy members are above the law, so who cares. I assume there could be a regulatory framework which can make this legal for an extremely specific purpose, but there is zero chance that OpenAI was part of this/abiding by this in 2022, absolutely none.

BobbyJo an hour ago | parent | prev | next [-]

> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

You've just thrown the garbage over your fence. Instead of OpenAI contracting Sama to classify CSAM, the "Curators" have to.

At the end of the day, someone needs to classify it. If you say the platforms need to, and they miss some, and it ends up in OAI training data, OAI is going to be the entity paying the price.

fragmede 2 hours ago | parent | prev [-]

OpenAI runs ChatGPT where users submit text and photos and OpenAI generates and sends text and photos back. So users could be submitting CSAM. And yes, OpenAI could be generating CSAM. It's not limited to being a pull operation. What am I missing?

abdullahkhalids 3 hours ago | parent | prev | next [-]

CSAM exists on social media because they are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect laws, it needs to be shut down.

The correct way to organize social media is in a federated way. Each server only holds on average a few hundred or few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media will be 100x suppressed because banning people is way easier on small servers.

Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.

scarmig 2 hours ago | parent | next [-]

Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies. The abusers just have to find the weakest link, and that weakest link will have fewer resources than multi-trillion-dollar companies. You would also likely not hear many news stories about it, because they won't have the expertise to even detect it.

That's a tradeoff you can choose to make, but you need to enter into it with open eyes.

camgunz an hour ago | parent | next [-]

This isn't an either/or. X isn't the only place CSAM is; there are gazillions of other sources. It's probably the easiest place to find it, though.

freejazz an hour ago | parent | prev [-]

>That's a tradeoff you can choose to make, but you need to enter into it with open eyes.

No it's not. It's certainly not my choice. No one asked me if it's okay for Facebook to distribute CSAM because you insist it would be worse if it didn't.

scarmig an hour ago | parent [-]

I don't really care if you classify it as a choice or not. One set of actions results in more CSAM than others. Just because you don't like the implication of there being tradeoffs doesn't mean there aren't tradeoffs.

freejazz an hour ago | parent [-]

You classified it as a choice, not me.

scarmig an hour ago | parent [-]

> or not

devilbunny 3 hours ago | parent | prev | next [-]

> Server moderators should be legally responsible for content on their server.

And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.

Child abusers are twisted people, and I really don’t care much what happens to them, but making it impossible for them to use the internet means sterilizing the whole thing.

prmoustache an hour ago | parent | next [-]

>And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.

This is already the case. There is a lot of lawful, useful, medical or educational content that is actively censored on social media because it includes words for, or pictures of, organs, while the same social media actively encourage and develop algorithms to push underage girls (and possibly boys) posting pictures of themselves in sexual poses, attire and contexts.

Big tech and social media networks love and push CSAM, they just hide the genitals but the content really is the same.

devilbunny 17 minutes ago | parent [-]

> a lot of lawful, useful, medical or educational content

Like what? It’s all there on Wikipedia, and for all of Wiki’s faults, I have trouble imagining what kind of useful, educational, medical information you will find on social media that is better than that.

abdullahkhalids 2 hours ago | parent | prev [-]

By that logic, physical life wouldn't function either. People get banned or removed from all sorts of informal and formal groups all the time for completely illegitimate reasons. That's just human politics, embedded so deeply in our psychology it will never go away. They simply move to different groups, and similarly, online they can move to a different federated server.

But that's not possible in today's oligopoly of social media. An invisible algorithm will ban you, and there is no way back, and few alternates. Big Social Media is way worse from a sanitizing perspective than some federated social media.

devilbunny 2 hours ago | parent [-]

I have no deep problem with exclusion; as you say, that’s human nature and unfixable. Making mods personally legally liable for everything that appears on their board is just insane. How many minutes are acceptable for them to see and review content? Or does everything have to be pre-approved?

I know a local blog that pre-approves every comment. He lets a lot of stuff through, because he lets people be dumbasses. If he were personally liable, the conversation would get a lot quieter.

haritha-j 3 hours ago | parent | prev | next [-]

Also, if you've gone from zero to one of the biggest corporations in the country, and have billions to throw at the 'metaverse', I find it hard to believe that removing CSAM is where you struggle.

abdullahkhalids 2 hours ago | parent | next [-]

No. It's a legitimately difficult problem, because not all naked pictures of kids are illegal. The false-positive problem is bad for business, but also generally bad even if big social media were benevolent.

Moderators need to actually understand the context of the picture/video, which requires knowledge of culture and language of the people sharing the pictures. It's really difficult to do that without hiring moderators from every culture in the world.

But small federated servers can often align along real world human social networks, so it's easier for the server admin to understand what should be removed.

red_admiral an hour ago | parent | prev | next [-]

The amount of CSAM online is completely out of control. There's already nation-level and sometimes international cooperation to catch any known images with perceptual hashing (think: the opposite of cryptographic hashing) as well as other automated and manual tools.
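(To illustrate the perceptual-hashing idea, here's a toy "average hash" sketch. The pixel grid, threshold, and function names are all invented for illustration; real systems like PhotoDNA are far more sophisticated.)

```python
# Hypothetical sketch of a perceptual "average hash" (aHash).
# Unlike a cryptographic hash, a small change to the input yields
# a hash that stays *close* to the original, so near-duplicates
# can be found by Hamming distance.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit int."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # Each bit records whether the pixel is brighter than average.
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A synthetic gradient "image" and a slightly perturbed copy.
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in img]
tweaked[0][0] += 3  # tiny perturbation
print(hamming(average_hash(img), average_hash(tweaked)))  # → 0 (same hash)
```

A cryptographic hash would flip roughly half its bits on that one-pixel change; a perceptual hash barely moves, which is the whole point for matching known images.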

My impression is it would take Manhattan Project levels of effort and funds to come close to "solving" this problem, especially without someone getting put on a watchlist for having a telehealth-first primary care insurance plan and asking for advice on their toddler's chickenpox.

Human review? Meta already has small armies' worth of content moderators, who tend to burn out with psychological problems and have a suicide rate such that you're probably better off going to fight in a real war. (This includes workers hired by Sama in Kenya, to link back to the OP.)

I will reluctantly grant Meta that they're up against a really hard problem here.

freejazz an hour ago | parent [-]

>I will reluctantly grant Meta that they're up against a really hard problem here.

It is a problem of their own making.

GrinningFool 2 hours ago | parent | prev [-]

Isn't this more about disincentivizing the posting of it in the first place by increasing the chances of getting banned? Once you have to remove it, it's too late.

Aurornis 2 hours ago | parent | prev | next [-]

> Server moderators should be legally responsible for content on their server.

So if you want to send someone to jail, just talk your way into joining their server, upload some illegal content, and report them for it?

> Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.

Why would someone join a server with active moderation if they wanted to share CSAM with their social media friends?

They would seek out one of those servers that was set up specifically for those groups, where it was known to be a safe space.

This is what many people don't get about federated networks: The people in those little servers DGAF if you block them. They want to be surrounded by their likeminded friends away from the rules of some bigger service like Facebook or Twitter. Federated social media is the perfect platform for them because they can find someone who set up a server in some other country with their own idea of rules and join that, not be subject to the regulations of mainstream social media.

genewitch 2 hours ago | parent [-]

right, and you have other users on the fediverse that notice that server leaking, and if the content is bad enough, report the service to an authority. Having all of the pedophiles and other creeps on a tiny subset of servers, isolated islands of them; well, that ought to make enforcement easier.

It also makes it relatively easy to avoid, as server admins share blocklists. I know a dozen servers offhand that i'd block if i ran another fediverse server.

Fosstodon fediverse server doesn't have this issue, for example.

I replied this way because, the way you wrote it, it sounds like an indictment of a system that's designed, above all else, to keep advertisers from building user profiles.

The problem is the people who participate in this (the illegal and immoral), and not "the network."

2ndorderthought 3 hours ago | parent | prev | next [-]

Yep. If you cannot both safely and legally provide the thing you are selling, you are no longer a legitimate company; you are a criminal enterprise profiting off of exploitation.

esyir 3 hours ago | parent [-]

If car manufacturers cannot bring car related deaths to zero, they too should no longer be legitimate companies.

lokar 3 hours ago | parent | next [-]

A better comparison would be that if a car company can’t meet preexisting crash/safety standards, they need to shut down.

These are pretty clear laws established by a democratic government with a pretty good record for rule of law.

esyir 3 hours ago | parent [-]

Sure, then they can go demand said standards for social media platforms, including an expected amount per N posts, just as car companies are not expected to have car fatality rates of 0.

The fact is that simple scale means that there will always be something, no matter how abhorrent. Small scale doesn't change this, it just concentrates it.

2ndorderthought 3 hours ago | parent | prev | next [-]

Do car companies sell cars without airbags or seat belts? What about cars that haven't been crash tested? What do you think happens to them if they don't do this?

Would you drive a car optimized for profit that didn't have those safety features? How about on a highway? Daily?

esyir 3 hours ago | parent [-]

We're talking about CSAM right? Which all platforms remove proactively, build models to remove and essentially always respond to when informed.

Demanding some perfect immediate magic response there is the equivalent of asking car manufacturers to prevent all deaths.

2ndorderthought 2 hours ago | parent [-]

Do they remove it and respond really though?

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Here it's said that it's the users' fault. I disagree, completely. Staying on topic: many of these companies have laid off the employees who tried to prevent things like this,

https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html

https://www.zdnet.com/article/us-ai-safety-institute-will-be...

https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok...

The list of not even trying anymore goes on and on. MechaHitler was also fun.

_DeadFred_ 3 hours ago | parent | prev [-]

When Ford dngaf with the Pinto (and GM with the Corvair), like tech companies do not gaf, they deservedly got this same level of contempt/demand for oversight. A dude named Ralph Nader went on a huge crusade about it. And they got a ton more oversight, safety requirements, etc. put on them.

So yes, yes, let's do like we did with cars.

genewitch 2 hours ago | parent [-]

I voted for Ralph Nader a few times, until he stopped appearing on ballots for whatever reason. For this reason, and many others. I don't remember any negative press about him, either. maybe he got out when mudslinging became de facto in elections.

Yokohiii 2 hours ago | parent | prev | next [-]

I am not sold on the federated thing to solve CSAM or similar issues.

Actually, companies should be bullied about privacy and copyright so they are unable to share any content at scale with third parties. Thus they'd have to solve it on their own and be forced to realize their business model is shit.

Barrin92 2 hours ago | parent | prev | next [-]

>CSAM on social media will be 100x suppressed because banning people is way easier on small servers.

No it isn't. Small servers often don't have paid security or moderation, are run in anonymous fashion, and have no profit motive that can even be used to incentivize them against hosting illegal content.

That's visible when it comes to porn. There are a million bootleg porn sites on the internet that show off illegal content. The only site that was ever forced to curate its content was Pornhub, because they're sufficiently large, work in a jurisdiction that has laws, and can be held accountable. From a content moderation standpoint, going after a million web forums is an absolute pain in the ass compared to going after Facebook.

Which is the first argument any decentralization advocate always brings up (and they're correct to do so), censorship is harder and evasion of law enforcement easier when dealing with a network of independent actors.

red_admiral an hour ago | parent [-]

What stops Humbert Humbert from joining hundreds of small servers?

You now have 100x the total human effort for mods to review and ban him.

devmor 3 hours ago | parent | prev | next [-]

The one thing I will throw out here that I can add to this conversation is that I think the government simply does not care, either. It's mainly only in regard to mass public outrage, or when someone is a political target that it gets dealt with from a law enforcement level.

Anecdotally, when I was a young adult I was a volunteer moderator for a large forum. We got reports of CSAM several times a month and had a process for escalating and reporting it to the FBI IC3 - we retained a lot of information about the users that posted it.

One of the administrators of the website mentioned to me that over the years since the inception of the forum, they'd reported almost a thousand incidents of CSAM distribution - and the FBI followed up with them to get information less than 10 times in total.

devilbunny 2 hours ago | parent [-]

That seems reasonable though. The FBI isn’t interested in busting one perv in a closet, they want the ones making the stuff.

nozzlegear 38 minutes ago | parent [-]

The FBI is interested in busting perverts in closets. That's often how they work their way up the "supply chain" when it comes to CSAM. Consumers lead them to distributors, who lead them to producers.

devilbunny 29 minutes ago | parent [-]

A fair point. But it still seems reasonable that only about 1% of suspect posts lead to a formal inquiry. Doesn’t mean they aren’t taking the report into account. You have to figure that they already have leads on most of them.

muglug 3 hours ago | parent | prev [-]

> Banning people is way easier on small servers

Big “citation needed” here. My bet is that Meta have far better moderation systems than any other social media company on the planet.

genewitch 2 hours ago | parent [-]

when i ran a fediverse server for myself and 3 people, but allowed public signups if someone came by; it was very easy to ban people, and very easy to null-route entire swaths of the fediverse, because i didn't want their content on my service.

That's more what i got from that pull-quote. I know a company that has hundreds of individual forums, and those are all moderated quickly and correctly (last i heard). They're moderated so effectively they often get DDoSed by Russian IPs for banning users for scam posts from that country.

Yokohiii 3 hours ago | parent | prev | next [-]

These workers prepare data for AI. I don't think the need for them will go away anytime soon.

Westerners are too expensive and unwilling to do it. AI is a business model that requires poverty and extreme inequality to function. Yes, other businesses do that too, but they don't claim to be a solution to everything while actually having very special human requirements.

freejazz an hour ago | parent | prev | next [-]

I don't understand why their size is an excuse for them to not remove and prevent CSAM.

IncreasePosts 3 hours ago | parent | prev [-]

Couldn't you just use multiple classifiers? Like "is a minor" classifier coupled with "is sexual content" classifier?
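A rough sketch of what AND-combining two such classifiers might look like (the scores, thresholds, and function names here are all made up; real systems tune thresholds against labeled data and route borderline cases to human review):

```python
# Hypothetical sketch: combining two independent classifier scores.
# Auto-flag only when BOTH classifiers are confident; send anything
# borderline on either axis to a human moderator instead.

def flag(minor_score, sexual_score, threshold=0.9):
    """Auto-flag only when both scores clear the (made-up) threshold."""
    return minor_score >= threshold and sexual_score >= threshold

def needs_human_review(minor_score, sexual_score, low=0.5, high=0.9):
    """Borderline on either axis -> escalate to a human."""
    return (low <= minor_score < high) or (low <= sexual_score < high)

print(flag(0.95, 0.97))              # True: both classifiers confident
print(flag(0.95, 0.30))              # False: second classifier disagrees
print(needs_human_review(0.7, 0.2))  # True: borderline, needs a human
```

Which illustrates the reply below: even with perfect combination logic, the borderline bucket still lands in front of a person.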

superfrank 2 hours ago | parent [-]

How would you test that that works?
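Presumably against a human-labeled holdout set, which is exactly where the human exposure comes back in. As a sketch (the predictions and labels are invented):

```python
# Hypothetical sketch: evaluating a classifier against human labels.
# Someone had to produce `labels` by looking at the material, which
# is the point of the question above.

def precision_recall(predictions, labels):
    """Standard precision/recall from paired boolean lists."""
    tp = sum(1 for p, l in zip(predictions, labels) if p and l)
    fp = sum(1 for p, l in zip(predictions, labels) if p and not l)
    fn = sum(1 for p, l in zip(predictions, labels) if not p and l)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

preds = [True, True, False, False, True]
labels = [True, False, False, True, True]
print(precision_recall(preds, labels))  # 2/3 precision, 2/3 recall
```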

deaux 2 hours ago | parent | prev | next [-]

> Sama (the outsourcing company)

If script writers gave the company this name in a fictionalization it would be rejected as too on the nose.

cyanydeez 4 hours ago | parent | prev [-]

Isn't it more that tech companies are just higher profile and more integral to the political and social landscape than older companies? But reviewing the current political zeitgeist, they're in lockstep with what some, if not all, would just call fascism.

2ndorderthought 3 hours ago | parent | next [-]

They are literal defense and offense contractors. They hang out at the Pentagon. They sell political data to sway elections. They give gifts to leaders for favors. It is technofascism.

intended 3 hours ago | parent | prev | next [-]

Yes and no.

Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.

I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.

I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.

I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.

Unfortunately this backlash is also the perfect cover for authoritarian government action - they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.

SlinkyOnStairs 3 hours ago | parent | prev | next [-]

Companies of the 20th century certainly weren't more ethical. (Though a few select tech companies seem to be intent on proving the opposite.)

But it's not really a fascism thing. While fascism does love the oppression of women, and the current crop of fascists have a notable connection to the Epstein case, this is a lot more boring.

Sam Altman's not a fascist; he's a wet noodle who sucks up to the Trump administration for money. He's not even good at it. The way his company handled CSAM does reflect badly on Altman, and lends weight to the accusations from his sister, but all other evidence suggests he's just a moron acting recklessly: not identifying the problem ahead of time, and acting poorly in response.

In the case of Meta, we know who Zuckerberg is. The company got its start as, in crude terms, a sex pest website: the original "Facemash" site was forcibly taken down by Harvard. This is not some new consequence of the turn to fascism; Zuckerberg's always been like this, and the actions taken against him were clearly not enough to keep the company culture from following his precedent.

deaux 2 hours ago | parent [-]

> Companies of the 20th century certainly weren't more ethical.

Disagree, not on average. There was a non-trivially higher % of decisions made based on "what's good for the customer" or "what's good for the product" or "I would be ashamed to do this" and a lower % of decisions made based on "what maximizes profit in the next quarter". I think that is more ethical. To take it to an extreme, using slave labor because it's good for the customer is more ethical than using slave labor to maximize profit in the next quarter.
