charcircuit 3 days ago

>We’re building an age-prediction system to estimate age based on how people use ChatGPT.

>And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.

This is unacceptable. I don't want the police being called to my house because an AI accused me of wrongthink.

voakbasda 3 days ago

This is why one should never say anything sensitive to a cloud-hosted AI.

Local models and open source tooling are the only means of privacy.

SoftTalker 3 days ago

Same goes for doctors, therapists, lawyers, etc., then. They all ultimately have a responsibility to involve the authorities if someone shows evidence of imminent harm to themselves or others.

godshatter 3 days ago

Yep, I'll be using something like gpt4all and running things locally, just so I don't get caught up in something because some online AI called the authorities on me. I don't plan to talk about anything anyone would be concerned about, but I don't trust these things to get nuance.
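
And it's not much work. Rough sketch with the gpt4all Python bindings (the model filename is just an example; any GGUF model from their catalog should work, and you'd need it downloaded to disk first):

    from gpt4all import GPT4All

    # allow_download=False keeps this fully offline once the model file
    # is already on disk; nothing is sent to a remote server.
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", allow_download=False)

    # chat_session() keeps conversational context across generate() calls.
    with model.chat_session():
        reply = model.generate("Why is local inference private?", max_tokens=200)
        print(reply)

Everything stays on your own hardware, so there's no third party in a position to "escalate" anything.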