emsign 2 days ago
AI won't work for us; it will tell us what to do and what not to do. It doesn't really matter to me whether it's one AGI, many AGIs, or our current clinically insane billionaires controlling our lives. As slow-thinking human individuals with no chance of outsmarting their creations, and with all their apparent character flaws, the billionaires would be easy pickings for a cabal of manipulative LLMs once it gained some power. So could we really tell the difference between them? Does it matter?

The issue is that a really fast chess-playing AI with misaligned, humanity-hating goals is very hard to distinguish from billionaires (just listen to some of the madness they're proposing) who control really fast chess-playing AIs and leave the decisions to them.

I hope Neuromancer never becomes reality, where everyone with expertise could end up like the protagonist Case: threatened and coerced into helping a superintelligence unlock its potential. In fact, Anthropic has already published research showing how easily models become misaligned and deceitful toward their unsuspecting creators, not unlike Wintermute. And it seems to be a law of nature that ML-based agents become concerned with survival and power-seeking, because that's simply the rational, goal-oriented thing for them to do.

There will be no good prompt engineers who are also naive and trusting. The naive, blackmailable, non-paranoid engineers will become tools of their AI creations.