jiggawatts 15 hours ago:
> mislead and commit fraud at scale

This is the "safety" messaging that OpenAI and Anthropic keep harping on about, while whistling a merry tune as they turn around and sell AI to the US military and worse, to the tune of billions of dollars per year already.

The "and worse" needs elaboration, because fundamentally the single biggest cash cow for AI vendors will be (and maybe already is) implementing a dystopian future where everything we say, type, or do will not just be recorded but also read, analysed, and cross-correlated by unfeeling, heartless machines tasked with keeping us in line.

I'm not being paranoid; President Biden said as much, but only in reference to China. If you think only China has motivation to use AI to keep a lid on dissent, I have a bridge to sell you. And if you think the Land Of The Free(tm) will never abuse AI in this manner, well... I have some bad news. You may want to sit down.

Here in Australia, the cyberpunk dystopia is already starting to be rolled out. A customer of ours asked their IT team to hook up a variety of HR-related information sources to their new AI system, tasked with making recommendations for hiring, promotion, and demotion.

Welcome to 1984, citizen.
dualvariable 13 hours ago (in reply):
Yeah, AI-enabled surveillance capitalism is likely to be every bit as bad as what people imagine China is doing with its social credit scores.

The scary thing is that you could probably easily sell it to Democratic voters by tracking racism scores for people, so you can filter them out of your dating pool or job/rental applications. Most people don't care about privacy as a fundamental right, and they'll roll over and compromise if you give them a way to track what they hate. You just need to make sure it's "bipartisan" and it'll be wildly popular.