cheschire 2 days ago

I’ve been thinking, what if all this robotics work doesn’t result in AI automating the real world, but instead results in third world slavery without the first world wages or immigration concerns anymore?

Connect the world with reliable internet, then build a high tech remote control facility in Bangladesh and outsource plumbing, electrical work, housekeeping, dog watching, truck driving, etc etc

No AGI necessary. There’s billions of perfectly capable brains halfway around the world.

dbspin 2 days ago | parent | next [-]

This is exactly what Meredith Whittaker is saying... The 'edge conditions' outside the training data will never go away, and 'AGI' will for the foreseeable future simply mean millions in servitude teleoperating the robots, RLHFing the models or filling in the AI gaps in various ways.

joncrocks 2 days ago | parent | prev | next [-]

This was/is the plot to a movie - https://en.wikipedia.org/wiki/Sleep_Dealer

emsign 2 days ago | parent | prev [-]

AI won't work for us; it will tell us what to do and what not to do. It doesn't really matter to me whether it's one AGI, many AGIs, or our current clinically insane billionaires controlling our lives. As slow-thinking human individuals with no chance of outsmarting their creations, and with all their apparent character flaws, the billionaires would be easy pickings for a cabal of manipulative LLMs once it gained some power. So could we really tell the difference between them? Does it matter? The issue is that a really fast chess-player AI with misaligned, humanity-hating goals is very hard to distinguish from billionaires (just listen to some of the madness they're proposing) who control really fast chess-player AIs and leave the decisions to them.

I hope Neuromancer never becomes a reality, where anyone with expertise could end up like the protagonist Case: threatened and coerced into helping a superintelligence unlock its potential. In fact, Anthropic has already published research showing how easily models can become misaligned and deceitful toward their unsuspecting creators, not unlike Wintermute. And it seems to be almost a law of nature that ML-based agents become concerned with survival and power-seeking, because that's just the normal, rational, goal-oriented thing for them to do.

There will be no prompt engineers who are both good and naively trusting. The naive, non-paranoid ones will be blackmailed and become tools of their AI creations.