SlinkyOnStairs 3 hours ago

> How else do you want companies to remove and prevent CSAM?

Different situation.

Facebook has to do CSAM moderation because it's a publishing platform. People will post CSAM on Facebook, so it must moderate.

And "just don't have Facebook" isn't a solution, because every publication of any sort has to deal with this problem: any newspaper accepting mail has it, albeit at a much smaller scale. People have been nailing obscene things to bulletin boards for all of recorded history.

---

In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it; it actively collected such data itself. It could have, at any point, simply stopped scraping the entire web indiscriminately and switched to more curated sources of scraped data.

The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.

---

This is not to say that genuine content-flagging firms have no reason to curate such data and build tools that automatically flag content before human moderators have to see it. (But then they also shouldn't be outsourcing this work and traumatizing contract workers for $2-3 an hour.)

But OpenAI is not such a firm. It's a general AI company.

GrinningFool 2 hours ago | parent | next [-]

> traumatizing contract workers for $2-3 an hour)

Is there an hourly rate at which this should be acceptable?

arw0n 44 minutes ago | parent | next [-]

There is labor that is necessary for our societies to function but a direct threat to the people doing it. Someone has to do it, and it should be seen as a great service to society and rewarded accordingly. In a just world, we would pay a significant premium for work that threatens health; in the world we actually live in, we use the threat of worse harm instead.

SlinkyOnStairs 2 hours ago | parent | prev | next [-]

There's no set dollar amount, but proper support during and after employment is a minimum, and a large paycheque would both offset some of the human cost and make it easier for people to quit the job, so that they aren't doing it for too long.

The support systems currently available to police who do this work are already insufficient, and Facebook's treatment of its moderation staff is abhorrent. The point of including the pay figure is to illustrate just how damning this subcontracting practice is.

bonesss an hour ago | parent | prev | next [-]

We have coal miners destroying their bodies and lungs, cobalt-mining slavery, child labour and de facto slavery in cocoa farming, sex workers, CPS investigators, first responders, and doctors with high rates of suicide…

Not only is there an acceptable market rate for trauma, it’s sometimes competitive and requires licensing.

genewitch 2 hours ago | parent | prev [-]

Emergency department^ doctors, what do they make? Give the people who have to review the worst humanity has to offer that pay. And while we're at it, ambulance personnel should get a huge pay bump. Take it from nurses' pay.

^ I originally said "triage doctors" but I meant the resident ER doc.

jdiff 2 hours ago | parent | next [-]

Why take from other workers when it can be siphoned from upper management and shareholders?

genewitch an hour ago | parent [-]

You're right; it's a personal failing that I must snipe at nurses whenever the word appears in my head. Apologies.

harvey9 44 minutes ago | parent | prev [-]

ER triage is usually done by a nurse, at least in England.

deaux 2 hours ago | parent | prev | next [-]

> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

This is of course incredibly illegal, but megacorps (by valuation) and oligarchy members are above the law, so who cares. I assume there could be a regulatory framework that makes this legal for an extremely specific purpose, but there is zero chance that OpenAI was part of it, or abiding by it, in 2022. Absolutely none.

BobbyJo an hour ago | parent | prev | next [-]

> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

You've just thrown the garbage over your fence. Instead of OpenAI contracting Sama to classify CSAM, the "Curators" have to.

At the end of the day, someone needs to classify it. If you say the platforms need to, and they miss some, and it ends up in OAI training data, OAI is going to be the entity paying the price.

fragmede 2 hours ago | parent | prev [-]

OpenAI runs ChatGPT, where users submit text and photos and OpenAI generates and sends text and photos back. So users could be submitting CSAM. And yes, OpenAI could be generating CSAM. It's not limited to being a pull operation. What am I missing?