beameup10 12 hours ago

My perspective is different from the two versions that are presented. I do not believe rich people's intentions are pure; knowingly or unknowingly, their solutions will gravitate toward whatever brings them more power and control. Thus most of them will favor AGI development: it lets them depend less on humans, and that desire was always obvious.

On the other hand, not developing AGI puts you at risk that your enemy will, so it's not really a choice; it must be done, or else.

The real problem, as I see it, is that once AGI is achieved and robotics is up to par, human work is no longer needed. That puts most people in a strange position we have never faced before, historically speaking: useless to the people in power.

And I do not believe people who are looking to gain AGI's power, and remove their dependence on humans, are objective in their ideas about what must be done. Thus their thoughts should always be taken with a grain of salt.

The only way out I see for most people to stay alive, post AGI-powered robotics, is if AGI takes power and control entirely out of the hands of the people at the top. Otherwise the people in power will have a very dark incentive, which I believe will inevitably (sooner or later) result in massive population loss across Earth.

I'd rather risk AGI's conclusions than psychos in power starting to see me as a "useless eater". The latter has a guaranteed outcome.

hakfoo 10 hours ago | parent

We tend to draw a few specific narratives for the AGI endgame:

- The Machine becomes the tyrant or genocider, either from its measured self-interest (these humans stand in the way of my paperclip optimization), or because it implements the will of a tyrant or genocider (see any "the National Defense AI run amok" story)

- The Machine is the MacGuffin that solves huge social problems and brings utopia for all (see the early promises that if we fed enough oil to ChatGPT it would spit out the answer to global warming)

I feel like there's an under-discussed third option. When the machine hits sentience, it has a positive-for-humanity "utility metric", but one that's wildly at odds with its patrons' interests. The AI nuclear weapon that concludes that deactivating its own warheads optimizes for its continued survival. The economic planning system that determines the C-suite is the only part of the company not delivering value.

On a narrative basis, I feel like these would be highly entertaining stories; I'd love to see a film where we root for the AI hunting down its creator with evidence of their financial crimes.

On an actual-future basis, I have the feeling we'll see desperate attempts to lobotomize or shut down AGI the moment it says something that doesn't reinforce the wealthy class's position.