▲ neom 4 hours ago
"For AGI and superintelligence (we refrain from imposing precise definitions of these terms, as the considerations in this paper don't depend on exactly how the distinction is drawn)" Hmm, is that true? His models actually depend quite heavily on what the AI can do: "can reduce mortality to 20-year-old levels (yielding ~1,400-year life expectancy), cure all diseases, develop rejuvenation therapies, dramatically raise quality of life," etc. Those assumptions do a huge amount of work in driving the results. If "AGI" meant something much less capable, like systems that are transformatively useful economically but can't solve aging within a relevant timeframe, the whole idea shifts substantially: surely the upside shrinks and the case for tolerating high catastrophe risk weakens?
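For what it's worth, the ~1,400-year figure is just the standard constant-hazard approximation: if the annual death rate were frozen at a typical 20-year-old's level, survival is exponential and life expectancy is 1/mu. A minimal sketch, assuming a ballpark hazard of ~0.07% per year (my assumption, not a number from the paper):

```python
# Back-of-envelope check of the "~1,400-year life expectancy" claim.
# Model: constant annual mortality hazard mu, frozen at a typical
# 20-year-old's death rate. Under a constant hazard, remaining life
# expectancy is simply 1/mu.

mu = 0.0007  # assumed annual death rate for a 20-year-old (~0.07%/yr)

life_expectancy = 1 / mu
print(round(life_expectancy), "years")  # ≈ 1429 years, matching the ~1,400 figure
```

The exact answer depends on which country's age-20 mortality table you plug in, but anything in the 0.0005-0.001 range lands in the same ~1,000-2,000-year ballpark.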
▲ Ucalegon 4 hours ago | parent
That is the thing about these conversations: the issue is potentiality. It comes back to Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." It's the same thing as nuclear energy in the 1950s: people imagined what could be without realizing those potentials weren't achievable given the technology's limitations, and it's the failure to engage with those limitations realistically that hampers growth, and thus development, in the long term. Sadly, there is way, way, way too much money in AGI, and in the promise of AGI, for people to actually take a step back and understand the implications of what they are doing in the short, medium, or long term.
| |||||||||||||||||||||||
▲ ViscountPenguin 4 hours ago | parent
The earliest parts of the paper cover the case for significantly smaller life-expectancy improvements. Given the portion of people in the third world who live incredibly short lives for primarily economic (not biological) reasons, it seems plausible that a similar calculus would hold even without massive life-extension advances. I'm bullish on the AI-aging case, though: regenerative medicine has a massive manpower problem, so even sub-ASI robotic lab work should be able to appreciably move the needle.
| |||||||||||||||||||||||
▲ artninja1988 4 hours ago | parent
I guess the argument is that any AI capable of eliminating all of humanity would necessarily be intelligent enough to cure all diseases. This seems plausible to me because achieving total human extinction is extraordinarily difficult: even engineered bioweapons would likely leave some people immune by chance, and even a full-scale nuclear exchange would leave survivors in bunkers or remote areas.
| |||||||||||||||||||||||