bigbadfeline | 9 hours ago
> It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though it were somehow disconnected from the other deployments of the technology.

I would insist that the deployments of a technology should be disconnected from the technology itself. I criticize AI too, and I get a lot of downvotes for it, but I try to separate the science of AI from its economics and politics.

The harms of AI and other technologies come from two sources: (1) capital-destroying market bubbles, and (2) deployments motivated and enabled by political and moral corruption. Both of these are in turn enabled and sustained by legislation. That is, we have to talk about politics, not technology and not AI.

AI has great potential, both for improving human life and for making it a lot worse, and which way it goes depends entirely on politics. If we fail to cleanly separate these issues and keep moralizing about the technology, we will be chasing red herrings and bumping heads in the dark while the tech is deployed against us.