mjr00 5 hours ago

This is touched upon in the article:

> Last year, OpenAI released estimates on the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.

> The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs.

0.07% doesn't sound like much, but ChatGPT has about a billion WAU, which means -seventy million- 700,000 people per week.
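The arithmetic is worth a quick back-of-envelope check; the ~1 billion weekly-active-user figure is the comment's own assumption, not a confirmed number:

```python
# Back-of-envelope check of the figure in the comment above.
# The ~1 billion weekly-active-user (WAU) count is the comment's assumption.
weekly_active_users = 1_000_000_000
rate = 0.0007  # 0.07% expressed as a fraction

affected_per_week = round(weekly_active_users * rate)
print(f"{affected_per_week:,}")  # 700,000
```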

onion2k 3 hours ago | parent | next [-]

Is that different from the rate among people who have that going on in their lives even without AI, though? If it's 0.01% outside of AI, and 0.07% of AI users, then either AI attracts people with those conditions or AI increases the likelihood of developing them. That's worth studying.

It's also possible that 0.1% of people have them and AI is actually reducing the number of cases...
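The comparison sketched above can be made concrete; note that the 0.01% baseline is the comment's illustrative hypothetical, not a measured figure:

```python
# Illustrative comparison only: the 0.01% baseline is the comment's
# hypothetical, not a measured figure; 0.07% is OpenAI's reported rate.
baseline_rate = 0.0001   # assumed prevalence outside of AI (0.01%)
ai_user_rate = 0.0007    # reported rate among weekly ChatGPT users (0.07%)

relative_rate = round(ai_user_rate / baseline_rate, 2)
print(relative_rate)  # 7.0 -- a sevenfold gap, if real, would merit study
```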

thewebguyd 2 hours ago | parent [-]

For the US, an estimated 23% of the population has a mental illness, and the WHO puts the global figure at 12-15%, or about 1 in 8 people. About 14% of the global population experiences suicidal ideation at some point in time. That rate increases for adolescents and young adults, up to 22%.

I'd be interested in such a study, but OTOH, with mental illness present in nearly a quarter of the population, I'm surprised there haven't been more incidents like this (unless there have been, and they just haven't been reported by the news).

3eb7988a1663 an hour ago | parent [-]

If the estimate is that nearly 1 in 4 people are mentally ill, the definition needs some readjustment. That is such an inclusive number that it must be counting otherwise fine people who, say, like to count their tic tacs and so get labelled as slightly OCD. Had a bummer of a day, so I am prone to depression?

There was a recent study suggesting that about 99% of people have an abnormal shoulder: https://news.ycombinator.com/item?id=47064944. We are all unique in our own way, but labeling everyone as ill does not seem productive.

thewebguyd 42 minutes ago | parent [-]

Clinical diagnoses of the various mental illness disorders require functional impairment in (usually, but not always) multiple areas of life: school, work, community, legal, self care, etc.

An abnormality that doesn't cause functional impairment, like the one in that link, is different from a mental illness that does. I'd agree with you that if something is that prevalent, then it ceases to be a "disorder" and is just pathologizing being human.

But the 23% statistic refers to people who meet the diagnostic criteria of clinically significant distress or impairment.

I'll acknowledge that diagnostic creep may be a real issue, but just because a condition is common doesn't mean it's not an illness that causes impairment in daily life. 50% of adults have high blood pressure, but we don't redefine "healthy" to include those with high blood pressure, because left unchecked it can have serious outcomes.

The high numbers might not suggest the definition is broken, but rather that our modern environment is particularly taxing on human psychology.

sd9 5 hours ago | parent | prev | next [-]

700,000

Still, a lot

mjr00 5 hours ago | parent [-]

Whoops yes, thank you. Too much LLM usage has made me start doing math about as well as them.

avaer 5 hours ago | parent | prev [-]

That number terrifies me not because it is so high, but because it exists.

What is stopping an entity (corporate, government, or otherwise) from using a prompt to make sweeping decisions about whether people are mentally or otherwise "fit" for something based on AI usage? Clearly not the technology.

I'm not saying mental health problems don't exist, but using AI to compute it freaks me out.

elevation 4 hours ago | parent | next [-]

A rational lender increases interest rates when prospective borrowers are less likely to be around to pay the bill. Confiding in an LLM that is integrated with a consumer tracking apparatus is a great way to ruin your life.

autoexec 4 hours ago | parent | prev [-]

We could already use social media posts to detect mental illness: by direct admission, as people talk openly about their diagnosis, but also by analysis of the content, tone, and frequency of posts that don't mention mental illness at all.

Data brokers already compile lists of people with mental illness so that they can be targeted by advertisers and anyone else willing to pay. Not only are they targeted, but they can get ads/suggestions/scams pushed at them during specific times such as when it looks like they're entering a manic phase, or when it's more likely that their meds might be wearing off. Even before chatbots came into the mix, algorithms were already being used to drive us toward a dystopian future.