artninja1988 4 hours ago

I guess the argument is that any AI capable of eliminating all of humanity would necessarily be intelligent enough to cure all diseases. This seems plausible to me because achieving total human extinction is extraordinarily difficult. Even engineered bioweapons would likely leave some people immune by chance, and even a full-scale nuclear exchange would leave survivors in bunkers or remote areas.

cameldrv 4 hours ago | parent | next [-]

Humans have driven innumerable species to extinction without even really trying; they were just in the way of something else we wanted. I can pretty easily think of a number of ways an AI with a lot of resources at its disposal could wipe out humanity with current technology. Honestly, we require quite a bit of food and water daily, can't hibernate or go dormant, and are fairly large and easy to detect. Beyond that, very few living people truly know how to live off the land; we generally depend on very long supply chains for survival.

I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.

plastic-enjoyer 3 hours ago | parent [-]

> I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.

Yes, but neither do I see why an AGI would do the opposite. The arguments about an AGI driving us to extinction sound like projection to me. People extrapolate how a superintelligence will behave from human behaviour, assuming that what seems rational to us is also rational to an AI. A lot of the described scenarios of malicious AI read more like a natural history of human behaviour.

wmf 4 hours ago | parent | prev [-]

When you put it that way, it sounds much easier to wipe out ~90% of humanity than to cure all diseases. This could create a "valley of doom" where the downsides of AI exceed the upsides.