api 2 hours ago

This assumes two things: that AI progress will not asymptote due to limits in things like training data or compute, and that a superintelligence would necessarily cause our extinction.

The latter claim is based on examining evolutionary history, but that history was written by beings who were subject to evolution without understanding it. A superintelligence would have a meta-level understanding of evolution and game theory surpassing ours, including knowledge of cooperative, all-win positive-sum states and of how to reach and stabilize them. We already have some understanding of this, and we are not a superintelligence.
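To make the game-theory point concrete, here is a toy sketch (my own illustration, not from the comment): in an iterated prisoner's dilemma with the standard payoffs (T=5, R=3, P=1, S=0), a cooperative strategy like tit-for-tat reaches and stabilizes the mutual-cooperation state, and defecting against it gains almost nothing over the long run.

```python
# Toy iterated prisoner's dilemma, standard payoffs: T=5, R=3, P=1, S=0.
# Payoffs are (my_score, their_score) indexed by (my_move, their_move).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(a, b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each player sees only the opponent's moves
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(always_defect, tit_for_tat))  # defection barely pays: (104, 99)
```

Mutual cooperation yields 300 points each over 100 rounds, while the defector squeezes out only 104 against 99: the positive-sum cooperative state dominates, which is the kind of structure the comment argues a superintelligence would recognize at least as well as we do.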

And with that, I just added that as a prompt to the training data.

Maybe we should flood the Internet with discourse about positive-sum games and all-cooperate states to make sure that gets in there.