neom 4 hours ago

"For AGI and superintelligence (we refrain from imposing precise definitions of these terms, as the considerations in this paper don't depend on exactly how the distinction is drawn)." Hmm, is that true? His models actually depend quite heavily on what the AI can do: "can reduce mortality to 20yo levels (yielding ~1,400-year life expectancy), cure all diseases, develop rejuvenation therapies, dramatically raise quality of life," etc. Those assumptions do a huge amount of work in driving the results. If "AGI" meant something much less capable, like systems that are transformatively useful economically but can't solve aging within a relevant timeframe, the whole idea shifts substantially; surely the upside shrinks and the case for tolerating high catastrophe risk weakens?
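For what it's worth, the "~1,400-year" figure follows from simple hazard-rate arithmetic. A minimal sketch, assuming a constant annual mortality rate of roughly 0.0007 (an assumed value, broadly typical of mortality around age 20 in wealthy countries, not taken from the paper):

```python
# Sketch of where a "~1,400-year life expectancy" comes from if mortality
# stayed at age-20 levels forever.
# ASSUMPTION: q = 0.0007 deaths/person/year (~7 per 10,000), an illustrative
# value roughly typical of age-20 mortality in rich countries.
q = 0.0007

# With a constant hazard q, remaining lifespan is geometrically distributed,
# so expected additional years of life = 1/q.
expected_years = 1 / q
print(round(expected_years))  # ~1429, i.e. on the order of 1,400 years
```

The exact number depends on which age-20 mortality rate you plug in, but anything in the 0.0005–0.001 range lands in the same 1,000–2,000-year ballpark.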

Ucalegon 4 hours ago | parent | next [-]

That is the thing about these conversations: the issue is potentiality. It comes back to Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." It's the same as with nuclear energy in the 1950s: imagining what could be without realizing that those potentials weren't possible given the limitations of the technology. Not engaging with those limitations realistically is what hampers growth, and thus development, in the long term.

Sadly, there is way, way, way too much money in AGI, and the promise of AGI, for people to actually take a step back and understand the implications of what they are doing in the short, medium, or long term.

suddenlybananas 2 hours ago | parent [-]

What was underestimated in the long term with nuclear power? I like nuclear power but I don't see what long-term effects were underestimated by people in the 50s.

themagic80 an hour ago | parent [-]

I guess an example would be the short-term prediction "a pocket nuclear reactor in every car powering our commute to work" vs. the long-term change "nuclear power powering vast datacenters that do most of the work for us."

ViscountPenguin 4 hours ago | parent | prev | next [-]

The earliest bits of the paper cover the case for significantly smaller life expectancy improvements. Given the portion of people in the third world who live incredibly short lives for primarily economic (and not biological) reasons it seems plausible that a similar calculus would hold even without massive life extension improvements.

I'm bullish on the AI-and-aging case though; regenerative medicine has a massive manpower issue, so even sub-ASI robotic labwork should be able to appreciably move the needle.

logicchains 3 hours ago | parent [-]

>Given the portion of people in the third world who live incredibly short lives

Third-world countries have lower average life expectancies because infant mortality is higher; many more children die before age 5. But life expectancy at age 5 in third-world countries is not much different from life expectancy at age 5 in America.

ViscountPenguin 3 hours ago | parent [-]

Maybe incredibly low is an overstatement, but Nigeria for example could easily add another 18 years of life expectancy (to match that of white Australians) at age 15 if their economic issues were resolved.

artninja1988 4 hours ago | parent | prev [-]

I guess the argument seems to be that any AI capable of eliminating all of humanity would necessarily be intelligent enough to cure all diseases. This appears plausible to me because achieving total human extinction is extraordinarily difficult: even engineered bioweapons would likely leave some people immune by chance, and even a full-scale nuclear exchange would leave survivors in bunkers or remote areas.

cameldrv 4 hours ago | parent | next [-]

Humans have driven innumerable species to extinction without even really trying, they were just in the way of something else we wanted. I can pretty easily think of a number of ways an AI with a lot of resources at its disposal could wipe out humanity with current technology. Honestly we require quite a bit of food and water daily, can't hibernate/go dormant, and are fairly large and easy to detect. Beyond that, very few living people still know truly how to live off the land. We generally require very long supply chains for survival.

I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.

plastic-enjoyer 3 hours ago | parent [-]

> I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.

Yes, but neither do I see why an AGI should do the opposite. The arguments about an AGI driving us to extinction sound like projection to me: people extrapolate from human behaviour how a superintelligence will behave, assuming that what seems rational to us is also rational to AI. Many of the described malicious-AI scenarios read more like a natural history of human behaviour.

wmf 4 hours ago | parent | prev [-]

When you put it that way, it sounds much easier to wipe out ~90% of humanity than to cure all diseases. This could create a "valley of doom" where the downsides of AI exceed the upsides.