strogonoff | 2 days ago
In a technical sense, no technology is ever the cause of anything: at the end of the day, humans are the cause. However, technology often unlocks scale, and at some point quantity becomes quality; I believe that is usually what is implied when people say a technology “causes” something.

For example, cryptocurrency and tumblers are not themselves the cause of scams. Scams are a result of a malevolent side of human nature: of mental health issues, insecurity and hatred, oppression, and so on. Cryptocurrencies, as many people are keen to point out, are just like cash, only digital. However, one of the core qualities of cash is that it is unwieldy and very difficult to move in large amounts. Cash would not allow criminals to casually steal a billion USD in one go, or to ransomware a dozen hospitals (causing deaths), then launder the proceeds while maintaining plausible deniability throughout. Removing that constraint makes cash a qualitatively new thing. Is there a benefit from it? Sure. But can we say it caused (in the sense above) a wave of crime? I think so.

Similarly, if mental health issues have been a widespread problem for a while, but people are now enabled to “address” those issues by themselves, at humongous scale, worldwide, of course it will be possible to say that LLMs are not the cause of whatever mayhem ensues; but wouldn’t they be?

Consider that physical constraints used to force any individual worldview to be tempered and averaged out by surrounding society. Someone with a weird obsession, say with murdering innocent people, could not easily find like-minded people to encourage them (unless they happened to be in a localized cult), sustain that obsession, and transform it into something more.
Then, at some point, the Internet and social media made it easy for someone who might otherwise have been a pariah, or forced to adjust, to find like-minded people (or just people who want to see the world burn) right from their bedrooms and basements, for better and for worse.

Now a new variety of essentially fancy non-deterministic autocomplete, equipped with enough context to finely tailor its output to each individual, enables us to fool ourselves into thinking we are speaking to a human-like consciousness, meaning that to fuel one’s weird obsession, no matter how left-field, one does not have to find a real human at all.

Humans are social creatures; we model ourselves and become self-aware through other people. As chatbots become normalized and humans want to talk to each other less, we (not individually, but at societal scale) are increasingly at the mercy of how LLMs happen to (mal)function. In theory, they could heal society at scale as well, but even if we imagine there were no technical limitations preventing that, sadly, practice is more likely to show selfish interests prevailing and being amplified.