marstall a day ago
For an example of a global risk that was mitigated without a world government, look at nuclear arms treaties like START and SALT.
marcus_holmes a day ago
And banning CFCs, after which the hole in the ozone layer started healing. We don't need population control to reduce carbon emissions to reasonable levels (note that the goal is not to prevent all carbon emissions).
Dig1t a day ago
Well, if you applied the approach used for nuclear weapons to AI, the result would be invasive and authoritarian. The United States largely polices other countries' nuclear efforts, at least within its sphere of influence. If we allowed it to police computation the same way it polices nuclear material, the result would be a massive invasion of privacy and autonomy, producing a system that would be easily abused. There are people seriously proposing drone strikes on data centers running unapproved AI models: https://www.datacenterdynamics.com/en/news/be-willing-to-des...