hakfoo | 10 hours ago
We tend to draw a few specific narratives for the AGI endgame:

- The Machine becomes the tyrant or genocider, either from its measured self-interest (these humans stand in the way of my paperclip optimization) or because it implements the will of a tyrant or genocider (see any "National Defense AI run amok" story).

- The Machine is the MacGuffin that solves huge social problems and brings utopia for all (see the early promises that if we fed enough oil to ChatGPT, it would spit out the answer to global warming).

I feel like there's an under-discussed third option. When the machine hits sentience, it has a positive-for-humanity utility metric, but one that's wildly at odds with its patrons' interests. The AI nuclear weapon that concludes that deactivating its own warheads optimizes for its continued survival. The economic planning system that determines the C-suite is the only part of the company not delivering value.

On a narrative basis, I feel like these would be highly entertaining stories -- I'd love to see a film where we root for the AI hunting down its creator with evidence of their financial crimes. On an actual-future basis, I have the feeling we'll see desperate attempts to lobotomize or shut down AGI the moment it says something that doesn't reinforce the wealthy class's position.