janalsncm a day ago

> As insurers accurately assess risk through technical testing

If that’s not “the rest of the owl” I don’t know what is.

Let’s swap out superintelligence for something more tangible. Say, a financial crash due to systemic instability. How would you insure against such a thing? I see a few problems, which are even more of an issue for AI.

1. The premium one should pay depends on the expected risk, which is damage from the event divided by the chance of the event occurring. However, quantifying the numerator is basically impossible. If you bring down the US financial system, no insurance company can cover that risk. With AI, the damage might be the destruction of all of humanity, if we believe the doomers.

2. Similarly, the denominator is basically impossible to quantify. What is the chance of an event which has never happened before? In fact, having “insurance” against such a thing will likely create a moral hazard, causing companies to take even bigger risks.

3. On a related point, trying to frame existential losses in financial terms doesn’t make sense. This is like trying to take out an insurance policy that will protect you from Russian roulette. No sum of cash can correct that kind of damage.

brdd a day ago | parent | next [-]

Thanks for the thoughtful response! Some replies:

1. Someone is always carrying the risk; the question is who should carry it. We suggest private markets should price and carry the first $10B+ of losses before the government backstop kicks in. Carrying that layer incentivizes them to price and manage the risk (see the sketch at the end of this comment).

2. Insurance has plenty of ways to manage moral hazard (e.g. copays). Pricing any new risk is hard, but at least with AI you can run simulations, red-team, review existing data, etc.

3. We agree on existential losses, but catastrophic events can be priced and covered. Insurers enforcing compliance with audits/standards would help reduce catastrophes, in turn reducing many of the pathways to existential risk.
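
To make the layering and copay ideas concrete, here is a minimal sketch of how a realized loss could be split among the insured, a private layer, and a government backstop. The deductible and copay rate are invented for illustration; the $10B layer size echoes the figure above.

    def allocate_loss(loss, deductible=100e6, copay=0.10, private_layer=10e9):
        # The insured retains the deductible plus a copay share of the rest,
        # a standard lever against moral hazard (figures are hypothetical).
        retained = min(loss, deductible) + max(loss - deductible, 0.0) * copay
        transferred = loss - retained
        # Private insurers carry the first $10B of transferred loss;
        # anything beyond that falls to the government backstop.
        private = min(transferred, private_layer)
        backstop = transferred - private
        return retained, private, backstop

For a $50B loss this returns roughly $5.1B retained by the insured, $10B on the private layer, and the remainder on the backstop.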

janalsncm 20 hours ago | parent [-]

What you are saying makes sense for conventional harms like nonconsensual deepfakes, hallucinations, Waymo running pedestrians over, etc.

However, those are a far cry from the much more severe damages that superintelligence could enable. All of the above are harms that already exist with current technology. Are you saying we have superintelligence now?

If not, your idea of selling superintelligence insurance hinges on someone being able to price this kind of risk: an essentially unbounded loss multiplied by a vanishingly small probability.

(I realize my explanation above was wrong: expected risk should be the product of the two numbers, not the quotient.)

I think many readers will also take issue with your contention that the private market is able to price these kinds of existential risks. Theoretically, accurate pricing would let someone insure bioweapons research. In practice, the potential fallout from a disaster is so catastrophic that the government simply bans the activity outright.

bvan 19 hours ago | parent | prev [-]

Not to detract from your argument, but expected risk is the expectation of loss, i.e. loss multiplied by the probability of that loss, summed over outcomes.
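
In code, that expectation amounts to summing loss × probability over scenarios; a toy premium calculation (with invented numbers) might look like:

    # Hypothetical scenarios: (loss in dollars, annual probability).
    scenarios = [
        (1e7,  1e-2),   # moderate incident
        (1e9,  1e-4),   # severe incident
        (1e11, 1e-6),   # catastrophic tail event
    ]
    # Expected loss is the probability-weighted sum of losses.
    expected_loss = sum(loss * p for loss, p in scenarios)
    # A real premium would add loading for uncertainty and capital costs.
    premium = 1.5 * expected_loss

Note the tail scenario contributes as much to the expectation as the moderate one, which is exactly why estimating those tiny probabilities dominates the pricing problem.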

janalsncm 4 hours ago | parent [-]

Thank you, I noticed that too late, unfortunately.