zem 5 hours ago

only if said galactic superintelligence takes immediate steps to kill all its potential competitors, or hoover up all the world's resources, or some other aggressively zero-sum thing. otherwise I don't see what difference it makes down the line if you have the second superintelligence rather than the first.

and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establishing and following its own goals.

dullcrisp 41 minutes ago | parent | next

Well no, because no one is going to be coming in to work on building the next AI model after the Singularity.

We’ll all be bblbrvkxn46?/4!gfbxf’mgv5fhxtgcsgjcucz to buvtcibycuvinovrYdyvuctYcrzuvhxh gcuch7…:!

ethin 4 hours ago | parent | prev | next

This is also assuming that AGI is even possible. So far there is no evidence that it's actually doable on any timescale short of billions of years (and even then we have no idea how nature really managed it).

Edit: Meant to say AGI (superintelligence didn't make sense). Superintelligence is undefinable at the moment, so even considering whether it's possible is more of a philosophical thing/sci-fi thought experiment than anything else.

zem 3 hours ago | parent | next

oh absolutely, no argument there, the case for AGI is pretty weak. I was just saying that I am even more sceptical that any of this is a "first or nothing" scenario - that is one of my biggest pet peeves about the entire tech sector.

josephg 3 hours ago | parent | prev

ASI is the acronym you’re looking for. It stands for Artificial Superintelligence.

Arguably it’s already here. ChatGPT knows more than any human who has ever lived. It can carry out millions of conversations at once, it has better working memory (“context”) than humans, and it can speak and write code much faster than we can.

Humans still have some advantages: specialists are smarter than ChatGPT in most domains, we’re better at using imagination, and we understand the physical world better. But it seems like we’re watching the gap close in real time. A few years ago ChatGPT could barely program. Now you can give it complex prompts and it will write large, complex programs that mostly work. If you extrapolate forward, is there any good reason to think humans will retain a lead?

marcus_holmes 5 minutes ago | parent

ChatGPT can only respond to a prompt, and in the context of that prompt. It has no continuous awareness of anything. That isn't superintelligence. We are easily fooled because we have stupid monkey brains.

fwipsy 4 hours ago | parent | prev | next

Anthropic/OpenAI aren't planning to have their superintelligence take over the world, but they're still afraid that someone else will do it.

sroussey 5 hours ago | parent | prev | next

One could argue that AI has already started to hoover up all the world’s resources. AI buildout as a percent of GDP is already high and still rising.

munk-a 5 hours ago | parent

Don't blame machines for our folly. This is just standard bubble behavior.

pocksuppet an hour ago | parent

What if that's just the mechanism by which the machines take over the world?

Natural selection doesn't care why something replicated a lot.

zozbot234 4 hours ago | parent | prev

If OpenAI has the second superintelligence, they have to merge with the first and cooperate. It's a provision in their charter.

airstrike 4 hours ago | parent

I'm not sure anyone thinks their charter carries much weight at this point.