LorenDB 3 days ago

> Encourage Open-Source and Open-Weight AI

It's good to see this, especially since they acknowledge that open weights is not equal to open source.

rs186 2 days ago | parent | next [-]

Without providing actual support like money, the government saying it encourages open-* AI is no more meaningful than me saying the same thing.

In fact, if you open the PDF file and navigate to that section, the content is barely relevant at all.

SkyMarshal 2 days ago | parent | next [-]

We're clearly in an era where the US Govt simply doesn't have enough money to throw at everything it wants to encourage, and needs to develop alternate means of incentivizing (or de-disincentivizing) those things. Sensible minimal regulation is one, there may be others. Time to get creative and resourceful.

AvAn12 2 days ago | parent | next [-]

The budget is the policy, stripped of rhetoric. What any government spends money on IS a full and complete expression of its priorities. The rest is circus.

What increased and decreased in the most recent budget bill? That is the full and complete story.

If no $$ for open source or open weight model development, then that is not a policy priority, despite any nice words to the contrary.

berbec 2 days ago | parent | prev | next [-]

The US has been continuously running a budget deficit for decades (brief blip at the end of Clinton/beginning of W Bush). This is more of an "epoch" than "era". I love the idea of incentives that aren't tax breaks!

mdhb 2 days ago | parent | prev | next [-]

It’s genuinely bizarre to read a comment like this which seems to imply there is some kind of grand strategy behind this when the reality is and always has been “own the libs”.

They very clearly have no idea what the fuck they are doing; they just know what other people say they should do, and their toddler reaction is to do the opposite.

_DeadFred_ 2 days ago | parent | prev [-]

AI, which they are hoping takes over EVERYTHING, is probably one of the worthwhile ones for government to be involved in. If it has the chance to be this revolutionary, which would be better:

The government owning the machine that does everything.

Tech bros, with their recent love of guruship and their willingness to do any dark pattern if it means bigger boats for them, owning the entire labor supply in order to improve the lives of 8 Bay Area families.

throw14082020 2 days ago | parent | prev [-]

Even if they did provide more money, it doesn't mean it'll go to the right place. Government money is not the solution here. Money is already being spent.

jonplackett 2 days ago | parent | prev | next [-]

How can this work with their main goal of assuring American superiority? If it’s open weights anyone else can use it too.

alganet 2 days ago | parent | next [-]

It doesn't say anything about an open training corpus.

The USA supposedly has the most data in the world. Companies cannot (in theory) train on integrated sets of information, but the USA, and China to some extent, can train on large amounts of information that is not public. The USA in particular is known for keeping a vast repository of metadata (data about data) about all sorts of things, and this data is very refined and organized (PRISM, etc.).

This allows training for purposes that might not be obvious when observing the open weights or the source of the inference engine.

It is a double-edged sword though. If anyone is able to identify such non-obvious training inserts and extract information about them or prove they were maliciously placed, it could backfire tremendously.

vharuck 2 days ago | parent [-]

So DOGE might not be consolidating and linking data just for ICE, but for providing to companies as a training corpus? In normal times, I'd laugh that off as a paranoiac fever dream.

dudeinjapan 2 days ago | parent | next [-]

If AI were trained on troves of personal info like SSNs, emails, and phone numbers, then the leakage would be easily discovered and the model would be worthless for any commercial/mass-consumption purpose. (This doesn't rule out a PRISM-AI for NSA purposes, of course.)
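
Leakage like that tends to be easy to spot precisely because the strings have such recognizable shapes. A minimal sketch of the idea in Python (the patterns, helper, and sample string are all invented for illustration; real audits use membership inference, canary extraction, and so on):

    import re

    # Toy scan of model output for strings shaped like SSNs, emails,
    # or US phone numbers. Purely illustrative; the sample text below
    # is made up.
    PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\b\(?\d{3}\)?[ .-]\d{3}[ .-]\d{4}\b"),
    }

    def find_pii(text):
        hits = {}
        for kind, pattern in PATTERNS.items():
            matches = pattern.findall(text)
            if matches:
                hits[kind] = matches
        return hits

    sample = "Sure! John's SSN is 123-45-6789, email john@example.com."
    print(find_pii(sample))
    # {'ssn': ['123-45-6789'], 'email': ['john@example.com']}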

alganet a day ago | parent [-]

The way you describe it makes PRISM sound like a contact book. I think of it more like an unwilling Facebook.

alganet a day ago | parent | prev [-]

Companies can change hands easier than governments. I would assume the US isn't sharing anything exclusive with private commercial entities. Doing so would be a mistake in my opinion.

sunaookami 2 days ago | parent | prev | next [-]

That's exactly what the goal is: that everyone uses American models, which will "promote democratic values", over Chinese models.

mdhb 2 days ago | parent | next [-]

From a government that has made it extremely fucking clear that they aren’t ACTUALLY interested in the concept of democracy even in the most basic sense.

saubeidl 2 days ago | parent | prev [-]

The ultimate propaganda machine.

somenameforme 2 days ago | parent | prev | next [-]

The idea is to dominate AI in the same way that China dominates manufacturing. Even if things are open source, that creates a major dependency, especially when the secret sauce is the training content, which is irreversibly hashed away into the weights.

guappa 2 days ago | parent [-]

I think the only way to dominate AI is to ban the use of any other AI…

kevindamm 2 days ago | parent [-]

There can be infrastructure dominance, too. It's difficult to get accurate figures for data center size across FAANG because each company considers those figures to be business secrets, but even a rough estimate puts US data centers ahead of other countries or even whole regions: the US has almost half of the world's data centers by count.

Transoceanic fiber runs become a very interesting resource, then.

somenameforme a day ago | parent [-]

In every domain that uses neural networks, there always comes a point of sharply diminishing returns. You 100x the compute and get a 5% performance boost. And then at some point you 1000x the compute and your performance actually declines due to overfitting.

And I think we can already see this. The gains in LLMs are increasingly marginal. There was a huge jump going from glorified Markov chains to something able to consistently produce viable output, but since then each generation of updates has been less and less noticeable, to the point that if somebody had to use an LLM for an hour and guess its 'recency'/version, I suspect the results would be scarcely better than random. That's not to say that newer systems are not improving - they obviously are - but it's harder and harder to recognize those changes without having the immediate predecessor to compare against.
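
To make the diminishing-returns intuition concrete, here's a minimal sketch assuming loss follows a simple power law in compute, with a made-up exponent chosen to roughly match the "100x for ~5%" figure above (it doesn't model the overfitting decline at all):

    # Illustrative only: assume loss(C) = a * C**(-b) with an invented
    # exponent b. Real scaling-law exponents vary by domain and setup.
    a, b = 10.0, 0.01

    def loss(compute):
        return a * compute ** -b

    base = loss(1.0)
    for factor in (10, 100, 1000):
        gain = (base - loss(factor)) / base * 100
        print(f"{factor:>5}x compute -> {gain:.1f}% lower loss")
    # prints roughly 2.3%, 4.5%, and 6.7% - each 10x buys less than the last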

HPsquared 2 days ago | parent | prev | next [-]

They see people using DeepSeek open weights and are like "huh, that could encode the model creators' values in everything they do".

somenameforme 2 days ago | parent [-]

I doubt this has anything to do with 'values' one way or the other. It's just about trying to create dependencies, which can then be exploited by threatening their removal or restriction.

It's also doomed to failure because of how transparent this is, and how abused previous dependencies (like the USD) have been. Every major country will likely slowly move to restrict other major powers' AI systems while implicitly mandating their own.

nicce 2 days ago | parent | prev [-]

Can a model produce propaganda or manipulation so sophisticated that most won't notice it?

pydry 2 days ago | parent | next [-]

Most Western news propaganda isn't especially sophisticated, and even the internally inconsistent narratives it pushes still end up finding an echo on Hacker News.

ChrisRR 2 days ago | parent | prev [-]

Well just look at the existing propaganda machines online and how annoyingly effective they are

WHA8m 2 days ago | parent [-]

Be specific. "Well just look at <general direction>" is a horrible way of discussing.

And about your thought: I disagree. When I look at those online places, I see echo chambers, trolls, and a lack of critical thinking (on how to properly discuss a topic). Some parts might be artificially accelerated, but I don't see why propaganda couldn't be fought. People are just coasting: lazy, groupthinking, entertained, and angry.

ted_dunning 2 days ago | parent [-]

Lack of critical thinking is a key success indicator for propaganda.

Propaganda is less about what it says and more about how it makes people _feel_.

If you get the feels strong enough, it doesn't matter what you say. The game is over before you start.

cardamomo 2 days ago | parent | prev | next [-]

I wonder how this intersects with their interest in "unbiased" models. Scare quotes because their concept of unbiased is scary.

rtkwe 2 days ago | parent | next [-]

Elon gives an unvarnished look at what they mean by 'unbiased' with respect to models. It's rewriting the training material or adding tool use (searching for Musk's tweets about a topic before deciding on an answer) to twist the output into ideological alignment.

rayval 2 days ago | parent | prev [-]

"unbiased", in the world of realpolitik, means "biased in a manner to further my agenda and not yours".

HPsquared 2 days ago | parent [-]

See also "fair".

ActorNightly 2 days ago | parent | prev | next [-]

It's all meaningless though.

bigyabai 2 days ago | parent | prev | next [-]

Good to see what? "Encourage" means nothing, every example listed in the document is more exploitative than supportive.

Today, Google and Apple both already sell AI products that technically fall under this definition, and did so without government "encouragement" in the mix. There isn't a single actionable thing mentioned that would promote further development of such models.

artninja1988 2 days ago | parent [-]

It's certainly more encouraging than the tone from a few months/years ago, when there was talk of outright banning open-source / open-weight foundational models.

bigyabai 2 days ago | parent [-]

You literally cannot ban weights. You can try, but you can't. Anyone threatening to do so wasn't doing it credibly.

hopelite 2 days ago | parent | prev | next [-]

It's primarily motivated by control, just as all narcissistic, abusive, controlling, murderous, "dominating" (as the document itself proclaims) people and systems are. It is not motivated by magnanimity, genuine shared interest, or a focus on precision and accuracy.

The controllers of the whole system want open weights and source to make sure models aren't going to expose the population to unapproved ideas, spread unapproved thoughts, make unapproved connections, or raise unapproved questions without them being suitably countered, keeping everyone in line with the system.

jsnider3 2 days ago | parent | prev | next [-]

No, it's bad, since we will soon reach a point where AI models are major security risks and we can't get rid of an AI after we open-source it.

rwmj 2 days ago | parent [-]

"major security risks" as in Terminator style robot overlords, or (to me more likely) they enable people to develop exploits more easily? Anyway I fail to see how it makes much difference if the models are open or closed, since the barrier to entry to creating new models is not that large (as in, any competent large company or nation state can do it easily), and even if they were all closed source, anyone who has the weights can run up as many copies as they want.

shortrounddev2 2 days ago | parent [-]

The risk of AI is that they are used for industrial scale misinformation

rwmj 2 days ago | parent | next [-]

Definitely a risk, and already happening, but I presume mostly closed source AIs are used for this? Like, people using the ChatGPT APIs to generate spam; or Grok just doing its normal thing. Don't see how the open vs closed debate has much to do with it.

patcon 2 days ago | parent | next [-]

You can't see how a hosted private model (that can monitor usage and adapt mechanisms to that) has a different risk profile than an open weight model (that is unmonitorable and becomes more and more runnable on more and more hardware every month)?

One can become more controlled, reining in the edge cases, while the other has exploding edges.

You can have your politics around the value of open source models, but I find it hard to argue that there aren't MUCH higher risks with the lack of containment of open weights models

rwmj 2 days ago | parent [-]

You're making several optimistic assumptions. The first is that closed source companies are interested in controlling the risk of using their technology. This is obviously wrong: Facebook didn't care that its main platform enabled literal genocide. xAI doesn't care about the outputs of their model being truthful.

The other assumption is that nefarious actors will care about any of this. They'll use what's available, or make their own models, or maybe even steal models (if China had an incredible AI, don't you think other countries would be trying to steal the weights?). Bad actors don't care about moral positions, strangely enough.

shortrounddev2 2 days ago | parent | prev [-]

Governments are able to regulate companies like OpenAI and impose penalties for allowing their customers to abuse their APIs, but are unable to do so if Russia's Internet Research Agency is running the exact same models on domestic Russian servers to interfere in US elections.

Of course, the US is a captured state now and so the current US Government has no problem with Russian election interference so long as it benefits them

BeFlatXIII 2 days ago | parent | prev [-]

You don't need frontier models to do that. GPT-3 was already good enough.

belter 2 days ago | parent | prev [-]

Only weights that are not Woke according to what was stated. And reduce those weights on the neural net path to the Epstein files please.