dpatterbee 10 hours ago
I think the point is that regardless of what benefits LLMs bring to the table, there is a list of downsides that Drew views as non-negotiable. It doesn't matter what other people are seeing, because he sees a fundamental issue underlying the entire premise. It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though it were somehow disconnected from the other deployments of the technology.
MisterTea 8 hours ago | parent
> It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.

I feel that the people who completely ignore the harms are the ones who need and/or benefit from it, and they will do whatever it takes to justify their use of it. The rest are people who understand the harms and minimize their interaction, followed by the blissfully ignorant. I was just talking to a content creator who uses AI at work and social media platforms to display her personal projects. She is fully aware of the harm social media platforms bring, while acknowledging that they empower her to present her work to the world without gatekeeping. AI lets her power through boring office tasks, but she loathes its use in the art world and its replacing of people in general.
bigbadfeline 9 hours ago | parent
> It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.

I would insist that the deployments of a technology should be disconnected from the technology itself. I criticize AI too, and I get a lot of downvotes for it, but I try to separate the science of AI from its economics and politics.

The harms of AI and other technologies come from two sources: 1. capital-destroying market bubbles, and 2. deployments motivated and enabled by political and moral corruption. Both of these are in turn enabled and sustained by legislation. That is, we have to talk politics, not technology and not AI. AI has great potential, both for improving human life and for making it a lot worse, and which way it goes depends entirely on politics. If we fail to cleanly separate these issues and keep moralizing about technology, we will be chasing red herrings and bumping heads in the dark while the tech is being deployed against us.
embedding-shape 10 hours ago | parent
> there is a list of downsides that Drew views as non-negotiable

Which is all fine and dandy. But why play the "you simply don't understand it as well as I do" card rather than take a more investigative and curious approach? It just fuels the "holier than thou" vibe Drew has seemingly been cultivating more every day. It's a disagreement of opinion, not some "I'm the only smart person who can see this", which is why it sours the entire piece.