judge123 2 days ago
This is horrifying, but I feel like we're focusing on the wrong thing. The AI wasn't the cause; it was a horrifying amplifier. The real tragedy here is that a man was so isolated he turned to a chatbot for validation in the first place.
AIPedant 2 days ago
I don't think "so isolated he turned to a chatbot for validation" describes this case, or why people get unhealthily attached to chatbots.

1) The man became severely mentally ill in middle age, and he lived with his mother because he couldn't take care of himself. Describing him as merely "isolated" makes me wonder if you read the article: meeting new friends was not going to help him much, because he was not capable of maintaining those friendships.

2) Saying people turn to chatbots because of isolation is like saying they turn to drugs because of depression. In many cases that's how it started. But people get addicted to chatbots because chatbots are to social interaction what narcotics are to happiness: in the short term you get all of the pleasure without doing any of the work. Human friends insist on give-and-take; chatbots are all give-give-give.

This man didn't talk to chatbots because he was lonely. He did so because he was totally disconnected from reality, and actual human beings don't indulge delusions with endless patience and encouragement the way ChatGPT does. His case is extreme, but "people tell me I'm stupid or crazy, ChatGPT says I'm right" is becoming a common theme on social media. It is precisely why LLMs are so addictive and so dangerous.
mediumsmart 2 days ago
Most of us are so isolated that we turn for validation to bubble brothers who share our views, which is a real tragedy, yes. It may be horrifying, but it's a horrifying normal at that.
strogonoff 2 days ago
In a technical sense, no technology is ever the cause of anything: at the end of the day, humans are the cause. However, technology often unlocks scale, and at some point quantity becomes quality; I believe that is usually what is implied when it is said that technology "causes" something.

For example, cryptocurrency and tumblers are not themselves the cause of scams. Scams are a result of the malevolent side of human nature: of mental health issues, insecurity and hatred, oppression, and so on. Cryptocurrencies, as many people are keen to point out, are just like cash, only digital. However, one of the core qualities of cash is that it is unwieldy and very difficult to move in large amounts. Cash would not allow criminals to casually steal a billion USD in one go, or ransomware a dozen hospitals (causing deaths), then wash the proceeds while maintaining plausible deniability throughout. Removing that constraint on cash makes it a qualitatively new thing. Is there a benefit from it? Sure. But can we say it caused (see above) a wave of crime? I think so.

Similarly, if there has been a widespread problem of mental health issues for a while, but people are now enabled to "address" those issues by themselves, at humongous scale, worldwide, it will of course be possible to say that LLMs are not the cause of whatever mayhem ensues; but wouldn't they be?

Consider that it used to be that physical constraints meant any individual worldview was necessarily tempered and averaged out by the surrounding society. If someone had a weird obsession with murdering innocent people, they could not easily find like-minded people to encourage them (unless they happened to be in a localized cult), sustain that obsession, and transform it.
Then, at some point, the Internet and social media made it easy for someone who might otherwise have been a pariah, or been forced to adjust, to find like-minded people (or just people who want to see the world burn) right from their bedrooms and basements, for better and for worse.

Now, a new variety of essentially fancy non-deterministic autocomplete, equipped with enough context to finely tailor its output to each individual, enables us to fool ourselves into thinking that we are speaking to a human-like consciousness. That means that to fuel one's weird obsession, no matter how far out in left field, one does not have to find a real human at all.

Humans are social creatures; we model ourselves and become self-aware through other people. As chatbots become normalized and humans want to talk to each other less, we (not individually, but at societal scale) are increasingly at the mercy of how LLMs happen to (mal)function. In theory, they could heal society at scale as well; but even if we imagine there were no technical limitations preventing that, practice is sadly more likely to show selfish interests prevailing and being amplified.