ben_w 9 hours ago

> I doubt every single AGI magically ends up aligned in a common bloc against humanity; all the alternatives to that are hopelessly opaque.

They don't need to be aligned with each other, or even anything but their own short-term goals.

As evolution is itself an optimiser, covid can be considered one such agent, and it was pretty bad all by itself. The covid genome is not what you'd call "high IQ", and humans did coordinate to produce vaccines (even if I'm still seeing people today who think those vaccines were worse than the disease), yet it caused a lot of damage and killed a lot of people.

> The worst case scenario that seems reasonably likely to me is probably AGI collectively not caring about us and wanting some natural resources that we happen to be living on top of.

"The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else." — which is also true for covid, lions, and every parasite.

fc417fc802 8 hours ago

See my note about physical hardware. Ignoring the possibilities of nanotech for the moment, the appropriate analogy is most likely mechanized warfare between groups of humans. The point is that if the AGIs are in conflict with some subset of humans, then it seems likely to me that they are also in conflict with some subset of AGI, and possibly in league with some other subset of humans (and AGI).

Rather than covid, picture armed Boston Dynamics dogs, except there are multiple different factions and some of them are at least loosely in favor of preventing the wanton murder of humans.

Nanotech takes that scenario and makes it even more opaque than it already was. But I think the general principle still applies. It isn't reasonable to assume that all AGI are simultaneously hostile towards humans while in perfect harmony with one another.

ben_w 6 hours ago

Physical hardware isn't a good argument against risk.

The hardware we ourselves are dependent on is increasingly automated; and even if it weren't, there are plenty of failure modes where an agent destroys its own host and then itself dies off. Happens with predator-prey dynamics, happens with cancer, happens with ebola. People are worried it will happen with human-caused environmental degradation.

> Rather than covid picture armed Boston Dynamics dogs except there are multiple different factions and some of them are at least loosely in favor of preventing the wanton murder of humans.

Or imagine an LLM, with no hardware access at all, that's just used for advice. Much better than current ones, so that while it still has some weird edge cases where it makes mistakes (because the humans it learned from would also make mistakes), it's good enough that everyone ends up just trusting it the way we trust Google Maps for directions. (Trust is the key: people already trust current LLMs more than they ought to, so the quality of this hypothetical future LLM doesn't need to be high enough to logically deserve the trust, just good enough for real people to actually use it like this).

And then we metaphorically drive over the edge of a destroyed bridge.

Putting too much trust in a system, to the extent that everyone just blindly accepts the answer, is both normal and dangerous. It happens at every level, from "computer says no" in customer support roles, to the Bay of Pigs, to the 2007 financial crisis, to the Irish potato famine and the famine during the Great Leap Forward.

It won't be a mistake along the lines of "there's a fire in Las Vegas and not enough water to put it out, so let's empty the dams in northern California that aren't hydrologically connected", followed six months later by "why are the farms in the Central Valley not producing any food?"

But here's an open question with current tech: are algorithmic news feeds and similar websites, which optimise for engagement, making themselves so engaging that they preclude real social connection, and thereby reduce fertility?

Or another: are dating apps and websites, motivated to keep people paying subscriptions, inherently inclined to pair people together only if they're going to split quickly afterwards and be back in the dating market, again leading to lower fertility rates?

There are many ways to die, more than we can count, which is why we're vulnerable. If it were just about killer robots, we could say "don't build killer robots"*. The problem is that you could fill a library with non-fiction books about real people who got exactly what they wished for and regretted it.

* for now, at least — Red Queen race dynamics mean that even that isn't stable in the long run.

fc417fc802 6 hours ago

I think you've misunderstood me. I wasn't arguing against risk in any general sense; quite the opposite.

My only position is that "and then the AGI killed everyone" is so oversimplified that it makes children's bedtime stories look sophisticated.

Physical hardware is relevant because, unlike with computer programs, results aren't instant. The supply of materials, manufacturing, and deployment are all slow processes. Opponents generally have time to react, and if an AGI is an opponent then I think it's safe to assume that it also has to contend with other AGI.

I am unconvinced that there should be a clearly delineated divide with all AGI on one side and all humans on the other.

I agree with you about the myriad non-combat risks, but again I think the prediction is one of hopeless complexity. For every problem that's inadvertently created, a solution can probably be arrived at to patch things up. It's not unlike the present, just faster. Break things, fix them, but the fix breaks other things; repeat ad nauseam. Hopefully we survive the process.