▲ | Dig1t a day ago |
Well, if you applied the approach used for nuclear to AI, the result would be invasive and authoritarian. The United States largely polices other countries' nuclear efforts, at least within its sphere of influence. Allowing it to police computation the same way it polices nuclear would be a massive invasion of privacy and autonomy, and the resulting system would be easily abused. There are people seriously talking about drone-striking data centers that run unapproved AI models. https://www.datacenterdynamics.com/en/news/be-willing-to-des...
▲ | marstall a day ago | parent |
Well, I'd suggest most countries are already regulating AI and will continue to do so under existing laws that protect privacy, the environment, and worker safety, and that limit hate speech. Some of those regulations extend beyond national boundaries, like the GDPR in the EU. I think the fearmongering around AI may be overblown by its investors and promoters, but to the extent that some models may change what it means for a country to be militarily secure, there's no reason why diplomacy, negotiation, and de-escalation won't remain the powerful tools they have often been in the very human drive to mitigate the risk of conflict ... |