bko a day ago

It's pretty clear that proper meaningful AI regulation would require the equivalent of a one world government. Few AI alarmists talk about this, but to his credit, Eliezer Yudkowsky openly speaks about what he would like in terms of regulation:

> Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

> Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

In that regard, it's not much of a stretch to consider a state that can effectively regulate the math you perform as the Antichrist.

Same goes for climate change. Humans will always produce carbon, and truly effective regulation of emissions would eventually lead to population control.

https://mleverything.substack.com/p/what-do-ai-doomers-want?...

marstall a day ago | parent [-]

for an example of a global risk that was mitigated without a world government, take a look at nuclear arms treaties like START, SALT, etc.

marcus_holmes a day ago | parent | next [-]

And banning CFCs, so the hole in the ozone layer started healing. We don't need population control to reduce carbon emissions to reasonable levels (note that we don't need to prevent all carbon emissions; that's not the goal).

blooalien 12 hours ago | parent [-]

> "note we don't need to prevent all emission of carbon"

Yeah, we only need to cut back carbon emissions to the point that the Earth's natural carbon cycle can actually cope with them (and drastically cut back on the unchecked destruction and poisoning of Earth's natural coping systems in general while we're at it).

Dig1t a day ago | parent | prev [-]

Well, if you apply the approach used for nuclear weapons to AI, the result would be invasive and authoritarian. The United States largely polices other countries' nuclear efforts, at least within its sphere of influence. If we allowed it to police computation the same way it polices nuclear technology, we'd get a massive invasion of privacy and autonomy, and a system that would be easily abused.

There are people talking seriously about drone striking data centers which are running unapproved AI models.

https://www.datacenterdynamics.com/en/news/be-willing-to-des...

marstall a day ago | parent [-]

Well, I'd suggest most countries are already regulating AI and will continue to do so with existing laws that protect privacy, the environment, and worker safety, and that limit hate speech. Some of those regulations extend beyond national boundaries, like the GDPR in the EU.

I think the fearmongering around AI may be overblown by its investors and promoters, but to the extent that some models may change what it means for a country to be militarily secure, there's no reason why diplomacy, negotiation, and de-escalation won't remain the powerful tools they have often been in the very human drive to mitigate the risk of conflict.