brdd a day ago

Thanks for the thoughtful response! Some replies:

1. Someone is always carrying the risk; the question is who should carry it. We suggest private markets should price and carry the first $10B+ before the government backstop kicks in. That gives them a direct incentive to price and manage the risk. (A minimal sketch of the layering is after this list.)

2. Insurance has plenty of ways to manage moral hazard (e.g. copays). Pricing any new risk is hard, but at least with AI you can run simulations, red-team, review existing data, etc. (A toy version of such a simulation is also sketched below.)

3. We agree on existential losses, but catastrophic events can be priced and covered. Insurers enforcing compliance with audits and standards would help reduce catastrophes, which in turn cuts off many paths to existential risk.
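
To make the layering in (1) concrete, here is a minimal sketch. The $10B attachment point is the figure proposed above; the function and its output shape are purely illustrative:

    # Sketch of an excess-of-loss structure: private insurers absorb
    # losses up to a $10B attachment point; the government backstop
    # covers everything above it.

    ATTACHMENT_POINT = 10_000_000_000  # the $10B figure proposed above

    def allocate_loss(loss: float) -> dict:
        """Split a realized loss between the private layer and the backstop."""
        private_share = min(loss, ATTACHMENT_POINT)
        government_share = max(loss - ATTACHMENT_POINT, 0.0)
        return {"private": private_share, "government": government_share}

    print(allocate_loss(3e9))   # fully private
    print(allocate_loss(25e9))  # private capped at $10B, backstop pays $15B

The point of the structure is that the party writing the first-layer checks is also the one with a reason to demand audits.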
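
And on (2), "run simulations" can be as literal as a frequency/severity Monte Carlo. A toy version, with every distribution and parameter invented purely for illustration:

    import random

    # Toy frequency/severity model for a novel AI risk. A real actuary
    # would fit these to red-team results and incident data; everything
    # here is made up.

    def simulate_annual_loss(p_incident=0.02, mean_severity=5e7):
        # One simulated year: an incident occurs with probability
        # p_incident; its severity is drawn from an exponential.
        if random.random() < p_incident:
            return random.expovariate(1 / mean_severity)
        return 0.0

    n = 100_000
    pure_premium = sum(simulate_annual_loss() for _ in range(n)) / n
    print(f"Estimated pure premium: ${pure_premium:,.0f}/year")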

janalsncm 20 hours ago

What you are saying makes sense for conventional harms: non-consensual deepfakes, hallucinations, a Waymo running over a pedestrian, and so on.

However, those are a far cry from the much more severe damage that superintelligence could enable. Everything above is harm that already exists, or could, with current technology. Are you saying we have superintelligence now?

If not, your idea of selling superintelligence insurance hinges on anyone being able to price this kind of risk: an almost infinitely large loss multiplied by an infinitesimally small probability.

(I realize my explanation above was wrong; it should be the product of those two numbers.)
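
To put numbers on why that product resists pricing, here is the arithmetic with arbitrary figures, chosen only to show the sensitivity:

    # Expected loss = probability x magnitude. At existential scale the
    # premium is dominated by a probability estimate nobody can
    # validate. All figures below are arbitrary.

    magnitude = 1e15  # a civilization-scale loss in dollars (made up)

    for p in (1e-6, 1e-8, 1e-10):  # equally defensible guesses
        print(f"p = {p:.0e} -> expected loss = ${p * magnitude:,.0f}")

    # The output spans $100,000 to $1,000,000,000: four orders of
    # magnitude between "reasonable" assumptions. That spread is the
    # pricing problem.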

I think many readers will also take issue with your contention that private markets can price these kinds of existential risks. In theory, accurate pricing would make it possible to insure even bioweapons research. In practice, the potential fallout from a disaster is so catastrophic that the government simply bans the activity outright.