01100011 4 hours ago

It's a Hail Mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues. If not, well, we've accelerated a lot of our worst problems (global warming, big tech, wealth inequality, surveillance state, post-truth culture, etc.).

gastonf 4 hours ago | parent | next [-]

> If we get computers to think for us, we can solve a lot of our most pressing issues

If AGI is born from these efforts, it will likely be controlled by people who stand to lose the most from solving those issues. If an OpenAI-built AGI told Sam Altman that reducing wealth inequality requires taxing his own wealth, would he actually accept that? Would systems like that get even close to being in charge?

JoshTriplett 4 hours ago | parent | prev | next [-]

> It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues.

All but one of them simultaneously, in fact. The one being left out: wanting to keep existing.

xvector 4 hours ago | parent [-]

What are you talking about? AGI is practically a prerequisite for transhumanism, and, well, not dying.

If you want to "keep existing" AGI happening is probably your only hope.

JoshTriplett 4 hours ago | parent | next [-]

Aligned AGI, yes. Unaligned AGI is a fast way to die.

If you want to keep existing, slow down, make sure AGI is aligned first, and go into cryo if necessary.

If you don't want to keep existing, that doesn't mean you get to risk the rest of us.

slopinthebag 4 hours ago | parent | prev [-]

I highly doubt OP was talking about immortality.

yladiz 4 hours ago | parent | prev | next [-]

This sounds just like the idea that quantum computing will solve a lot of computational issues, which we know isn’t true. Why would AGI be any different?

idle_zealot 4 hours ago | parent | prev [-]

> If we get computers to think for us, we can solve a lot of our most pressing issues

How, exactly, does more and better tech help with the fundamentally sociological issues of power distribution, wealth inequality, surveillance, etc? Are you operating on the assumption that a machine superintelligence will ignore the selfish orders of whoever makes it and immediately work to establish post-scarcity luxury space communism?