maxutility 2 days ago
I found Sam's early 2015 posts on machine superintelligence and regulation [1] [2] to be even more interesting in hindsight, given OpenAI's accelerationist bent of late, OpenAI president Greg Brockman's lobbying efforts against AI regulation, and frequent accusations of attempted regulatory capture.

[1] https://blog.samaltman.com/machine-intelligence-part-1
[2] https://blog.samaltman.com/machine-intelligence-part-2

Sam's recommendations at the time include:

1) Provide a framework to observe progress…

2) Given how disastrous a bug could be, require development safeguards to reduce the risk of the accident case. For example, beyond a certain checkpoint, we could require development happen only on airgapped computers…, require that certain parts of the software be subject to third-party code reviews, etc.

3) Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroeth law), b) it should detect other SMI being developed but take no action beyond detection, c) other than required for part b, have no effect on the world. …

4) Provide lots of funding for R+D for groups that comply with all of this, especially for groups doing safety research.

5) Provide a longer-term framework for how we figure out a safe and happy future for coexisting with SMI…

Also, in his acknowledgments he gives the greatest thanks to his onetime partner, now rival, Dario Amodei.