SecretDreams 2 days ago

> Collaboration makes sense on timeframes that don't imply zero-sum games. People are fooling themselves if they think AGI will be zero-sum. Even if only one group somehow miraculously develops it, there will immediately be fast followers. And the more likely scenario is that more than one group would independently pull it off — if it's even possible.
random3 2 days ago | parent

Maybe, but at least OpenAI, xAI, and any Bostrom believer think this is the case. Ilya Sutskever (Sep 20, 2017):

> The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.

Nick Bostrom — Decisive Strategic Advantage: https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintel...
pixl97 2 days ago | parent

> if it's even possible.

Why do people keep repeating this? The only way artificial intelligence is impossible is if intelligence itself is impossible. And we're here, so that pretty much removes that impediment.