janalsncm a day ago
> As insurers accurately assess risk through technical testing

If that's not "the rest of the owl," I don't know what is. Let's swap out superintelligence for something more tangible. Say, a financial crash due to systemic instability. How would you insure against such a thing? I see a few problems, which are even more of an issue for AI.

1. The premium one should pay depends on the expected risk, which is damage from the event divided by the chance of the event occurring. However, quantifying the numerator is basically impossible. If you bring down the US financial system, no insurance company can cover that risk. With AI, the damage might be the destruction of all of humanity, if we believe the doomers.

2. Similarly, the denominator is basically impossible to quantify. What is the chance of an event that has never happened before? In fact, having "insurance" against such a thing will likely create a moral hazard, causing companies to take even bigger risks.

3. On a related point, trying to frame existential losses in financial terms doesn't make sense. This is like trying to take out an insurance policy to protect you from Russian roulette. No sum of cash can correct that kind of damage.
brdd a day ago
Thanks for the thoughtful response! Some replies:

1. Someone is always carrying the risk; the question is who should carry it. We suggest private markets should price and carry the first $10B+ before the government backstop. That incentivizes them to price and manage it.

2. Insurance has plenty of ways to manage moral hazard (e.g. copays). Pricing any new risk is hard, but at least with AI you can run simulations, red-team, review existing data, etc.

3. We agree on existential losses, but catastrophic events can be priced and covered. Insurers enforcing compliance with audits and standards would help reduce catastrophes, in turn reducing many of the existential risks.
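The layered structure described above (private markets carry the first $10B, a government backstop above that) can be sketched with a small Monte Carlo pricing exercise. This is a toy illustration, not the commenters' model: the loss distribution, catastrophe probability, layer sizes, and loading factor are all made-up assumptions.

```python
import random

def simulate_annual_loss(rng):
    # Toy heavy-tailed loss model: most years see no catastrophe,
    # rare years see a large one. Parameters are illustrative only.
    if rng.random() < 0.01:                    # assumed 1%/year catastrophe chance
        return rng.paretovariate(1.5) * 1e9    # assumed Pareto severity, in dollars
    return 0.0

def layer_premium(attachment, limit, n=100_000, loading=1.5, seed=0):
    """Loaded expected loss to the layer [attachment, attachment + limit]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        loss = simulate_annual_loss(rng)
        # The layer pays only the slice of loss above the attachment,
        # capped at the layer limit.
        total += min(max(loss - attachment, 0.0), limit)
    return loading * total / n

# Private market prices the first $10B; the backstop sits above it.
private_premium = layer_premium(attachment=0.0, limit=10e9)
backstop_premium = layer_premium(attachment=10e9, limit=90e9)
```

The point of the sketch is structural: once you can simulate losses at all, each layer gets its own premium, and the party carrying a layer has a direct financial incentive to drive its expected loss down via audits and standards.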
bvan 19 hours ago
Not to detract from your argument, but expected risk is loss multiplied by the probability of that loss (summed over possible outcomes), not loss divided by probability.
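The correction above is just a probability-weighted sum. A one-liner with made-up figures:

```python
# Expected loss = sum of (probability x loss) over outcomes,
# not loss divided by probability. Figures are illustrative.
outcomes = [(0.99, 0.0), (0.01, 5_000_000.0)]  # (probability, loss in dollars)
expected_loss = sum(p * loss for p, loss in outcomes)
# 0.99 * 0 + 0.01 * 5_000_000 = 50_000.0
```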