| ▲ | jonplackett 2 days ago |
| How can this work with their main goal of assuring American superiority? If it’s open weights anyone else can use it too. |
|
| ▲ | alganet 2 days ago | parent | next [-] |
| It doesn't say anything about an open training corpus. The USA supposedly has the most data in the world. Companies cannot (in theory) train on integrated sets of information. The USA, and China to some extent, can train on large amounts of information that is not public. The USA in particular has been known for keeping a vast repository of metadata (data about data) about all sorts of things. This data is very refined and organized (PRISM, etc). This allows training for purposes that might not be obvious from observing the open weights or the source of the inference engine. It is a double-edged sword, though. If anyone is able to identify such non-obvious training inserts and extract information from them, or prove they were maliciously placed, it could backfire tremendously. |
| |
| ▲ | vharuck 2 days ago | parent [-] | | So DOGE might not be consolidating and linking data just for ICE, but for providing to companies as a training corpus? In normal times, I'd laugh that off as a paranoiac fever dream. | | |
| ▲ | dudeinjapan 2 days ago | parent | next [-] | | If AI were trained on troves of personal info like SSNs, emails, and phone numbers, then the leakage would be easily discovered and the model would be worthless for any commercial/mass-consumption purpose. (This doesn't rule out a PRISM-AI for NSA purposes, of course.) | | |
| ▲ | alganet a day ago | parent [-] | | The way you describe it makes PRISM sound like a contact book. I think of it more like an unwilling Facebook. |
| |
| ▲ | alganet a day ago | parent | prev [-] | | Companies can change hands more easily than governments. I would assume the US isn't sharing anything exclusive with private commercial entities. Doing so would be a mistake, in my opinion. |
|
|
|
| ▲ | sunaookami 2 days ago | parent | prev | next [-] |
| That's exactly what the goal is: that everyone uses American models, which will "promote democratic values", over Chinese models. |
| |
| ▲ | mdhb 2 days ago | parent | next [-] | | From a government that has made it extremely fucking clear that they aren’t ACTUALLY interested in the concept of democracy even in the most basic sense. | |
| ▲ | saubeidl 2 days ago | parent | prev [-] | | The ultimate propaganda machine. |
|
|
| ▲ | somenameforme 2 days ago | parent | prev | next [-] |
| The idea is to dominate AI in the same way that China dominates manufacturing. Even if things are open source, that creates a major dependency, especially when the secret sauce is the training content, which is irreversibly hashed away into the weights. |
| |
| ▲ | guappa 2 days ago | parent [-] | | I think the only way to dominate AI is to ban the use of any other AI… | | |
| ▲ | kevindamm 2 days ago | parent [-] | | There can be infrastructure dominance, too. It's difficult to get accurate figures for data center size across FAANG, because each considers those figures to be business secrets, but even a rough estimate puts US data centers ahead of other countries or even regions: the US has almost half of the world's data centers by count. Transoceanic fiber runs then become a very interesting resource. | | |
| ▲ | somenameforme a day ago | parent [-] | | In every domain that uses neural networks, you eventually hit a point of sharply diminishing returns. You 100x the compute and get a 5% performance boost. And then at some point you 1000x the compute and your performance actually declines due to overfitting. And I think we can already see this. The gains in LLMs are increasingly marginal. There was a huge jump going from glorified Markov chains to something able to consistently produce viable output, but since then each generation of updates has been less and less recognizable, to the point that if somebody had to use an LLM for an hour and guess its recency/version, I suspect the results would be scarcely better than random. That's not to say that newer systems aren't improving - they obviously are - but it's harder and harder to recognize those changes without the immediate predecessor to compare against. |
|
|
|
|
| ▲ | HPsquared 2 days ago | parent | prev | next [-] |
| They see people using DeepSeek open weights and are like "huh, that could encode the model creators' values in everything they do". |
| |
| ▲ | somenameforme 2 days ago | parent [-] | | I doubt this has anything to do with 'values' one way or the other. It's just about trying to create dependencies, which can then be exploited by threatening their removal or restriction. It's also doomed to failure because of how transparent this is, and how abused previous dependencies (like the USD) have been. Every major country will likely slowly move to restrict other major powers' AI systems while implicitly mandating their own. |
|
|
| ▲ | nicce 2 days ago | parent | prev [-] |
| Can a model produce propaganda or manipulation so sophisticated that most people won't notice it? |
| |
| ▲ | pydry 2 days ago | parent | next [-] | | Most western news propaganda isn't especially sophisticated, and even the internally inconsistent narratives it pushes still end up finding an echo on Hacker News. | |
| ▲ | ChrisRR 2 days ago | parent | prev [-] | | Well, just look at the existing propaganda machines online and how annoyingly effective they are. | | |
| ▲ | WHA8m 2 days ago | parent [-] | | Be specific. "Well just look at <general direction>" is a horrible way of discussing. And about your point: I disagree. When I look at those online places, I see echo chambers, trolls, and a lack of critical thinking (on how to properly discuss a topic). Some parts might be artificially accelerated, but I don't see propaganda that couldn't be fought. People are just coasting: lazy, group-thinking, entertained, and angry. | | |
| ▲ | ted_dunning 2 days ago | parent [-] | | Lack of critical thinking is a key success factor for propaganda. Propaganda is less about what it says and more about how it makes people _feel_. If you get the feels strong enough, it doesn't matter what you say. The game is over before you start. |
|
|
|