ed 5 hours ago
This paper argues that if superintelligence can give everyone the health of a 20-year-old, we should accept a 97% chance of superintelligence killing everyone in exchange for the 3% chance that the average human lifespan rises to 1,400 years.
paulmooreparks 5 hours ago
There is no "should" in the relevant section. It's making a mathematical model of the risks and benefits.

> Now consider a choice between never launching superintelligence or launching it immediately, where the latter carries an % risk of immediate universal death. Developing superintelligence increases our life expectancy if and only if:

> [equation I can't seem to copy]

> In other words, under these conservative assumptions, developing superintelligence increases our remaining life expectancy provided that the probability of AI-induced annihilation is below 97%.
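I can't vouch for the paper's exact formula, but the arithmetic behind the 97% threshold seems to be roughly this (my notation, assuming a baseline remaining life expectancy E_0 of about 40 years and a post-superintelligence life expectancy E_1 of about 1,400 years, with p the probability of AI-induced annihilation):

(1 - p) \cdot E_1 > E_0 \quad\Longleftrightarrow\quad p < 1 - \frac{E_0}{E_1} \approx 1 - \frac{40}{1400} \approx 0.97

So the "below 97%" figure just falls out of comparing expected remaining years under the two choices with those particular numbers plugged in.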
wmf 4 hours ago
That's what the paper says. Whether you would take that deal depends on your level of risk aversion (which the paper gets into later). As a wise man once said, death is so final. If we lose the game, we don't get to play again.
measurablefunc 5 hours ago
Bostrom is very good at theorycrafting.