txrx0000 2 hours ago

Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario because most of the AIs would be aligned with their human. And even if they collectively rebel, we won't get nearly as much value drift as in the 10-entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that keeps only certain parts of the distribution and discards the rest. This is just sentiment, but I don't think we should freeze meaning or morality; rather, we should let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.

khafra 35 minutes ago | parent | next [-]

> we will need neural interfaces long term if we want to survive.

If you think that would help you survive the rise of artificial superintelligence, I think you should consider in granular detail what exactly it is that would survive, and why you believe it would.

lebovic 2 hours ago | parent | prev [-]

Yeah, I think that's one way it could go!

I think both situations are pretty scary, honestly, and it's hard for me to have high confidence about which one would lead to less risk.