SecretDreams 2 days ago

> Collaboration makes sense on timeframes that don't imply zero-sum games.

People are fooling themselves if they think AGI will be zero sum. Even if only one group somehow miraculously develops it, there will immediately be fast followers. And, the more likely scenario is more than one group would independently pull it off - if it's even possible.

random3 2 days ago | parent | next [-]

Maybe, but at least OpenAI, xAI, and any Bostrom believer think this is the case.

Ilya Sutskever (Sep 20, 2017)

> The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.

Nick Bostrom - Decisive Strategic Advantage https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintel...

refulgentis 2 days ago | parent [-]

Bostrom / Ilya are agreeing with the GP's argument AFAICT: it's not that AGI can create a dictatorship-of-the-first-AGI-owner, it's that having only one serious funded lab going at it creates a knowledge gap of N years that could give said lab escape velocity*

* imagine if Google alone had LLMs. For an innocuous example, the only provider in my LLM client that regularly fails unit tests verifying that it actually caches tokens and uses them on a subsequent request is Gemini. I used to work at Google, and it would be horrible for that too-big-for-its-own-good institution, regressing to the corporate mean, to own LLMs all by itself

pixl97 2 days ago | parent | prev | next [-]

>if it's even possible.

Why do people keep repeating this? The only way artificial intelligence is impossible is if intelligence is impossible. And we're here, so that pretty much removes that impediment.
