ben_w 9 hours ago
> I doubt every single AGI magically ends up aligned in a common bloc against humanity; all the alternatives to that are hopelessly opaque.

They don't need to be aligned with each other, or with anything but their own short-term goals. Evolution is itself an optimiser, so covid can be considered one such agent, and it was pretty bad all by itself. The covid genome is not what you'd call "high IQ", humans coordinated to produce vaccines, and even accounting for the people I still see today who think those vaccines were worse than the disease, it caused a lot of damage and killed a lot of people.

> The worst case scenario that seems reasonably likely to me is probably AGI collectively not caring about us and wanting some natural resources that we happen to be living on top of.

"The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else." That is also true of covid, lions, and every parasite.
fc417fc802 8 hours ago
See my note about physical hardware. Ignoring the possibilities of nanotech for the moment, the appropriate analogy is most likely mechanized warfare between groups of humans. The point is that if they are in conflict with some subset of humans, then it seems likely to me that they are also in conflict with some subset of AGI, and possibly in league with some other subset of humans (and AGI). Rather than covid, picture armed Boston Dynamics dogs, except with multiple factions, some of which are at least loosely in favor of preventing the wanton murder of humans. Nanotech makes that scenario even more opaque than it already was, but I think the general principle still applies: it isn't reasonable to assume that all AGI are simultaneously hostile towards humans while in perfect harmony with one another.