AlanYx 3 days ago

The most important thing here IMHO is the strong stance taken in favor of open source and open weight AI models. This stance puts the US government at odds with some other regulatory initiatives like the EU AI Act (which doesn't outlaw open weight models and does have some exemptions below 10²⁵ FLOPS, but still places a fairly daunting regulatory burden on decentralized open projects).

rs186 2 days ago | parent | next [-]

If you go through the "Recommended Policy Actions" section in the document, you'll realize it's mostly just empty talk.

AlanYx 2 days ago | parent [-]

IMHO it's not empty talk; a lot of the elements of the plan reinforce each other. For example, it's pretty clear that state initiatives aiming to impose regulatory thresholds, like the 10^26 FLOPS limit in California's SB 1047, are going to be targets under this plan, and US diplomatic participation in initiatives like the Council of Europe AI treaty is now on the chopping block. There are obviously competing perspectives emerging globally on the regulation of AI, and this plan quite clearly aims to foster one particular side. It doesn't appear to be hot air.

For open source/open weight models it's particularly important, because until now there wasn't a strong government-level voice countering calls like Geoff Hinton's to ban open source/open weight AI, as he articulates here: https://thelogic.co/news/ai-cant-be-slowed-down-hinton-says-...

wredcoll 3 days ago | parent | prev [-]

I don't know whether this counts as amazing optimism or just straight-up blinders if that's your takeaway, given the emphasis placed on non-renewable energy and government-enforced ideology.

MrBuddyCasino 2 days ago | parent | next [-]

Current AIs are anything but politically neutral.

saubeidl 2 days ago | parent | next [-]

There is no such thing as politically neutral. Whatever you perceive as such is just a reflection of your own ideology.

logicchains 2 days ago | parent | prev [-]

There absolutely is; formally speaking, statements can be categorised as normative (saying how things should be) or positive (saying how things are). A politically neutral AI would avoid making any explicit or implicit normative statements.

palmfacehn 2 days ago | parent | next [-]

>...and positive, saying how things are

This presumes that the AI has access to objective reality. Instead, the AI has access to subjective reports about the state of the world, filed by fallible humans. Even if we could concede that an AI might observe the world on its own terms, the language it would use to describe the world as it perceives it would still be subjectively defined by humans.

saubeidl 2 days ago | parent [-]

That is exactly it. Humans are inherently subjective beings, seeing everything through their ideology, and as a result LLMs are, too.

They will always be a computer representation of the ideology that trained them.

DonHopkins 2 days ago | parent | prev [-]

AI simply not openly and proudly declaring itself MechaHitler while spreading White Supremacist lies and Racist ideology would be one small step in the right direction.

sbelskie 2 days ago | parent | prev | next [-]

So the government should step in to dictate what neutrality means?

troyvit 2 days ago | parent | next [-]

Seriously. Is _that_ what it means to have a conservative government? Because I thought it meant they would keep their hands off the market. This is straight from the PDF though:

"Led by the Department of Commerce (DOC) through the National Institute of Standards and Technology (NIST), revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change."

AdamN 2 days ago | parent | prev [-]

Isn't that why we have government? We have judges to make the final call on right and wrong and on the punishments for transgressions, legislatures to make laws and allocate money based on the needs of their constituents, and an executive function to carry out the will of the stakeholders.

Clearly there are terrible governments, but if it's not government tackling these issues, then there will be limited control by the people, and it will simply be those with the most money who define the landscape.

palmfacehn 2 days ago | parent [-]

Does the individual consumer have any agency over which AI services he chooses to consume?

As I understood the original premise of the US government, it was to be constitutionally limited in scope. I know that ship sailed a long time ago, but it doesn't follow that we need a government to centrally plan which AI content counts as right or wrong.

DonHopkins 2 days ago | parent | prev [-]

That's right, all AIs are just the same, both sides do it, it's a true equivalence. Claude just declared itself MechaObama, and OpenAI is now calling itself MechaJimmyCarter, and Gemini is now calling itself MechaRosieODonnell.
