| ▲ | White House Considers Vetting A.I. Models Before They Are Released(nytimes.com) |
| 89 points by jbegley 9 hours ago | 95 comments |
| |
|
| ▲ | cozzyd 8 hours ago | parent | next [-] |
They will have to "correctly" answer who is the best president, whether the Strait of Hormuz is blocked, and how tall the ballroom should be.
| |
| ▲ | dragonwriter 8 hours ago | parent | next [-] | | More to the point, the vendor will have to make the correct deals with and contributions to firms and foundations owned and operated by the President’s friends and family members. | | | |
| ▲ | malshe 8 hours ago | parent | prev | next [-] | | Also answer “correctly” who won the 2020 presidential election | | |
| ▲ | dgellow 8 hours ago | parent [-] | | And Jan 6 revisionism, i.e. not mentioning that the sitting president attempted a coup to steal an election | | |
| |
| ▲ | kelseyfrog 7 hours ago | parent | prev | next [-] | | We won't even have the window dressing of being declined. “Sorry, that’s beyond my current scope. Let’s talk about something else.”[1] Instead we'll be actively lied to. American exceptionalism. 1. https://www.theguardian.com/technology/2025/jan/28/we-tried-... | |
| ▲ | whyenot 7 hours ago | parent | prev | next [-] | | Wishing a fond goodbye to the Gulf of Mexico. | | | |
| ▲ | tastyface 2 hours ago | parent | prev | next [-] | | 1) This is definitely the impetus 2) This will allow China to eat America's AI lunch, just as it's doing with renewables, automobiles, and manufacturing in general | |
| ▲ | hammock 7 hours ago | parent | prev [-] | | And how will the test change in 2029? |
|
|
| ▲ | changoplatanero 8 hours ago | parent | prev | next [-] |
| I have many questions. How would A/B testing work in the scenario where models need to be approved by the government before release? All the big providers commonly a/b test their unreleased models on production traffic. Would these need to be preapproved? Many models get tested on the public for every one that is officially "released". Will the government have the bandwidth to examine each of these? Does changing the system prompt count as a different model or only model weights? |
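For concreteness, the A/B testing described here usually amounts to weighted routing of live traffic between model variants. A minimal sketch; the model names and the 95/5 split are invented for illustration:

```python
import random

# Hypothetical sketch of how a provider might split production traffic
# between a released model and an unreleased candidate. Names and
# weights are made up for this example.
VARIANTS = [
    ("released-model-v1", 0.95),   # the officially "released" model
    ("candidate-model-v2", 0.05),  # unreleased model under live test
]

def pick_variant(rng=random.random):
    """Weighted random choice over (name, weight) pairs."""
    r = rng()
    cumulative = 0.0
    for name, weight in VARIANTS:
        cumulative += weight
        if r < cumulative:
            return name
    return VARIANTS[-1][0]  # guard against floating-point rounding

print(pick_variant())
```

Under a pre-approval regime, every entry in that list would presumably need sign-off before it could ever touch production traffic, which is the point of the question above.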
| |
| ▲ | aurareturn 8 hours ago | parent [-] | | You just perfectly highlighted why over-regulation in tech is troublesome. This is why I've always been against European tech regulations which gave us the cookie prompt. Politicians shouldn't be product designers. Edit: I'll take the downvotes. Every time I say this, I get downvoted. Weirdly, even EU politicians are beginning to see that they've over-regulated their tech industry so much that it can't compete, but HN just can't accept this opinion. | |
| ▲ | idle_zealot 7 hours ago | parent | next [-] | | The cookie prompt is a perfect example of under regulation. The law you're citing as over-regulation requires companies to get consent before tracking you. Companies across the board settled on "annoy users into consenting" as their compliance strategy. You want to revert to implied consent? Fuck that, layer on "also you can't pester users into agreeing to being tracked." Too vague? That's the point; anything else and you incentivize dancing on the line of exactly how close to non-compliance you can get away with. Politicians broadly shouldn't be product designers, but establishing broad no-go zones around anti-consumer behavior is foundational to modern society. Without that you get cartoon ads marketing menthol cigarettes to kids and commercials for casino apps for betting on drone strikes. | | |
| ▲ | aurareturn 7 hours ago | parent | next [-] | | This is over-regulation because regulation always has unintended consequences. The unintended consequence is the cookie prompt. And before someone comes out saying that only "bad" websites want to track you, the official European Union website has a cookie prompt. https://commission.europa.eu/index_en | | |
| ▲ | idle_zealot 7 hours ago | parent | next [-] | | > regulation always has unintended consequences An extremely strong claim. You're making a generalized argument against any attempt to influence market forces. I can maintain the position that regulations can sometimes succeed and sometimes fail to achieve their goals, whereas you have to prove that, say, banning mining companies from hiring child coal miners has caused more harm than good in the form of unintended consequences. | | | |
| ▲ | dgellow 7 hours ago | parent | prev [-] | | It’s about consent, that has nothing to do with good or bad | | |
| ▲ | aurareturn 7 hours ago | parent [-] | | 99.999% of people don't care and don't even know what it's supposed to do. Yet, the cookie prompt has collectively wasted how many millions or billions of hours of people's time? How many freaking times has a website fully loaded, shown you a cookie prompt, where clicking the wrong option reloads the entire website? The web has gotten worse since cookie prompts, and websites lost a bit of competitiveness to mobile apps because of these annoying prompts. Load a website on a phone screen and 30% of the screen is covered by an intrusive cookie prompt. As an industry, we learned a long time ago that people hate popups. The European Union decided to make a law that causes most websites to show a popup or face potentially bankruptcy-level financial penalties. | |
| ▲ | idle_zealot 7 hours ago | parent | next [-] | | Or they could not collect unnecessary user data. They chose to waste users' time. If you don't like that we can always punish them for those billions of wasted hours. | | |
| ▲ | aurareturn 7 hours ago | parent [-] | | Collecting user usage data is the basic step to actually understanding how your users use your website so you can improve it. It's so standard that even the official EU government website collects this kind of data and has a cookie prompt. And you know what irks me the most? These politicians weren't smart enough to write a law that does this to all digital places. Yes, they only wrote this law for websites, while apps can basically collect the same data on users without any prompting. | |
| ▲ | dgellow 5 hours ago | parent | next [-] | | > And you know what irks me the most? These politicians weren't smart enough to write a law that does this to all digital places. Yes, they only wrote this law for websites, while apps can basically collect the same data on users without any prompting. That’s wrong; you can check the EDPB guidelines[0]: consent is required for mobile apps, desktop programs, SDKs, etc. The collection isn’t the issue, as long as it is done with consent, which is why the EU website is showing you the banner. You also do not need to prompt for what is considered essential for your service to function, or if you do not share data with third parties. 0: https://www.edpb.europa.eu/our-work-tools/our-documents/guid... | |
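The distinction in those guidelines (essential cookies allowed without a prompt, everything else gated on explicit opt-in) reduces to a simple filter. A toy sketch, with invented cookie names:

```python
# Toy sketch of the consent rule described above. The cookie names and
# the "essential" set are invented for illustration only.
ESSENTIAL = {"session", "csrf"}  # needed for the site to function at all

def allowed_cookies(consent_given, requested):
    """Essential cookies are always allowed; the rest require opt-in."""
    return [c for c in requested if c in ESSENTIAL or consent_given]

print(allowed_cookies(False, ["session", "analytics", "ads"]))  # ['session']
print(allowed_cookies(True, ["session", "analytics", "ads"]))
```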
| ▲ | ceejayoz 7 hours ago | parent | prev [-] | | > Collecting user usage data is the basic step to actually understanding how your users use your website so you can improve it. Sure. To a point! But then you go to, say, the Daily Mail, and its cookie banner tells you they'd like to share with their 1,300 ad/tracking partners, and that you can turn them off only individually. |
|
| |
| ▲ | dgellow 7 hours ago | parent | prev [-] | | Yes, those cookie banners are annoying, I’m not sure what you want me to say. Companies can decide other approaches to track you with your consent, most decided to go with the frustrating UX. Having an annoying banner and explicit tracking consent is still an improvement over just collecting and sharing your data with 3rd parties without your knowledge and consent |
|
|
| |
| ▲ | ceejayoz 7 hours ago | parent | prev [-] | | Yup. It was pure malicious compliance by the tracking industry with the hopes of killing the regulation. |
| |
| ▲ | orwin 7 hours ago | parent | prev | next [-] | | This is because the law should say "The only circumstances in which you can get your users' PII is when they willingly give it to you, as clients/subscribers. The only circumstances in which you can sell that data or track your users is never". Instead we tried something that looks like a punt, and even then tracking/adtech ghouls aren't happy. I say we should lobby hard to get my version at least examined in the EU parliament (or in any parliament in an EU country, really); that will probably scare them into removing the cookie banners. | |
| ▲ | ofrzeta 7 hours ago | parent | prev | next [-] | | The regulations also gave us "USB-C everywhere" and the possibility to use a different map app than Apple maps on iOS. More to come. | | |
| ▲ | aurareturn 7 hours ago | parent | next [-] | | And possibly hindered innovation toward a newer, better port, because USB-C everywhere is required. I remember Google Maps existing on iOS before Apple Maps was ever released. | |
| ▲ | ofrzeta an hour ago | parent | next [-] | | Sure, you could install Google Maps, but you could not configure it as the default application for maps. |
| ▲ | dgellow 7 hours ago | parent | prev [-] | | ~That’s bullshit, the regulations do not mention USB–C, they mention it has to be a common standard, with evaluation every few years~ Edit: I was wrong > I remember Google maps existing on iOS before Apple Maps was ever released You couldn’t change the default map app on iOS before the EU forced Apple to allow default apps to be configured. That’s what the person you responded to was claiming, and they are correct | | |
| ▲ | aurareturn 7 hours ago | parent [-] | | It has to be a USB-C physical receptacle and must use USB protocols: > be equipped with the USB Type-C receptacle, as described in the standard EN IEC 62680-1-3:2021 “Universal serial bus interfaces for data and power – Part 1-3: Common components – USB Type-C® Cable and Connector Specification”, and that receptacle shall remain accessible and operational at all times;
> incorporate the USB Power Delivery, as described in the standard EN IEC 62680-1-2:2021 “Universal serial bus interfaces for data and power – Part 1-2: Common components – USB Power Delivery specification”;
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%... | |
| ▲ | dgellow 5 hours ago | parent [-] | | Ok, I see what I got wrong, I misunderstood the role of the delegation responsible to evaluate technological progress. The standard is indeed in the text and the delegation is allowed to update it to ensure it stays relevant. Sorry for my aggressive tone, I shouldn’t have done that |
|
|
| |
| ▲ | reaperducer 6 hours ago | parent | prev [-] | | > and the possibility to use a different map app than Apple maps on iOS iPhones have had Google Maps since day one. No regulation or EU needed. | |
| ▲ | ofrzeta 2 hours ago | parent [-] | | You could not configure it as a "default application" for maps. Obviously you could always install Google Maps but it was a second-rate citizen. |
|
| |
| ▲ | subhobroto 7 hours ago | parent | prev | next [-] | | > even EU politicians are beginning to see that they've over-regulated their tech industry so much that it can't compete Yes, it feels a bit weird to me that the HN crowd is a fan of regulation, although much of the crowd works in the least regulated profession. Maybe we need regulation that puts an automatic expiration on regulation, with no way to bypass it. Existing regulation nearing expiration could only be extended by a democratic voting process. Just the burden of handling this should naturally filter out regulation that's unpopular or no longer relevant. |
| ▲ | 2ndorderthought 7 hours ago | parent | prev | next [-] | | It's because the actual goals have nothing to do with what they say they are. | |
| ▲ | surgical_fire 7 hours ago | parent | prev | next [-] | | > European tech regulations which gave us the cookie prompt. What gave you cookie prompt is malicious compliance. | | | |
| ▲ | guesswho_ 7 hours ago | parent | prev [-] | | [dead] |
|
|
|
| ▲ | aurareturn 8 hours ago | parent | prev | next [-] |
* Maybe Anthropic's call for regulation has backfired. Now it's going to be overregulation. They might regret it now. * This might be regulatory capture for OpenAI, Google, and Anthropic. Any new entrant will have a harder time getting approval. * This is going to be terrible for the industry in general because this administration will not hesitate to demand bribes and force their propaganda into the models. * This might cause the US to ban the use of Chinese models for US businesses and governments. After all, Chinese models won't need White House approval to release. So the only way to "control" them is to simply make them illegal.
| |
| ▲ | segmondy 7 hours ago | parent [-] | | Nah, Anthropic would love this. They definitely don't want you using KimiK2.6, DeepSeekV4Pro, GLM5.1, MiniMax2.7, MimoV2.5-Pro, Qwen3.5-397B, Step3.5Flash, because, truth be told, you can survive fine without Claude. |
|
|
| ▲ | JLO64 8 hours ago | parent | prev | next [-] |
A worst-case scenario, I feel, is that the government could restrict inference providers within the US to run only approved/American LLMs, which would be a huge deal since the only recent American OSS model is Gemma. I could see OpenAI/Anthropic/Google lobbying for that though…
| |
| ▲ | aurareturn 8 hours ago | parent | next [-] | | My thought as well. They will approve every new American trained LLM but they can't control the release of free Chinese LLMs. Therefore, the only card they can play is to simply make Chinese LLMs illegal to use for American companies and Americans. Ultimately, this will grant more power to OpenAI, Anthropic, and Google due to regulatory capture but it hurts the AI industry overall. | | |
| ▲ | 2ndorderthought 7 hours ago | parent [-] | | Only the US AI industry. It will put the US in a ditch, with its only asset being the ability to surveil its citizens, for negative money and negative innovation. The rest of the world will keep spinning just fine |
| |
| ▲ | victorbjorklund 8 hours ago | parent | prev | next [-] | | Let’s hope. It would be great for Europe and the rest of the world. | | |
| ▲ | dgellow 8 hours ago | parent [-] | | Unless we (Europe) start to do the same… | | |
| ▲ | 2ndorderthought 7 hours ago | parent [-] | | Nah, there's literally no geopolitical or economic reason for Europe to do that. The US has more or less entirely botched everything except its military applications for AI. It's endangering the whole economy in the process too. | |
| ▲ | dgellow 7 hours ago | parent | next [-] | | Not for the same reasons (such as the direct propaganda I expect from the Trump government), but if there is a precedent I can definitely see propositions to have some sort of regulator vet models before release to protect young people or similar | |
| ▲ | hactually 7 hours ago | parent | prev [-] | | Uhm, Europe is ID-gating its entire internet. They're not bothered about foreign powers, just control. | |
| ▲ | 2ndorderthought 7 hours ago | parent [-] | | Chat control also failed. There's a lot of hope still. The thing goes both ways. They have to secure their people from Russian and American propaganda that will be coming by the petabyte once a few more US data centers go online. The US is trying to elect fascists in Europe. At the same time it's a terrible practice for privacy and human rights. Especially in the wrong hands. |
|
|
|
| |
| ▲ | 2ndorderthought 8 hours ago | parent | prev [-] | | They know very well that China is going to keep releasing world-class models at 1/20th the price and 5-300x smaller in size. They also know they screwed up by going full technofascism, and there's no way back because of the trillions invested in oligarchs, and it endangers the entire economy. | |
| ▲ | dyauspitr 7 hours ago | parent [-] | | China can’t keep doing that. This is essentially a capture-the-market ploy that is government-backed at this point; at current DeepSeek prices they would have to be taking a massive loss. | |
| ▲ | 2ndorderthought 7 hours ago | parent | next [-] | | I think you misunderstood what China has been doing to make the progress it has. They've invested far less and have actual applications because they are the world's largest manufacturer. In the US we have products we sell to China to automate their factories. China soon won't need those. The US goal of laying off anyone who thinks for a living is really different from China's goal of automating product manufacturing. DeepSeek costs less because it actually costs less. China's electrical infrastructure is so much better than the US's. Meanwhile the US has AI data centers running on effing gas. On literal gas generators. The only budget discussions for infrastructure in the US are basically for the DHS too. It's not sustainable. |
| ▲ | CamperBob2 7 hours ago | parent | prev [-] | | China can’t keep doing that. Who or what will stop them? | | |
| ▲ | aurareturn 7 hours ago | parent [-] | | Probably economics eventually. I also don't think we're going to have top tier free models forever. It's going to end within a few years. | | |
| ▲ | 2ndorderthought 6 hours ago | parent [-] | | They aren't spending money like we are. They've been able to do what they have even with an embargo on Nvidia cards. I think we will see model sizes shrink more and more and become more efficient. Ideally to the point where they run on high-end computers and not data centers. That's the future in my opinion. At that point you could run them on your phone or Chromebook for free, or with ads like Google search. Or pay for privacy | |
| ▲ | 2ndorderthought 6 hours ago | parent | next [-] | | Aurareturn, I don't think they care. Geopolitically China stands to gain a lot by commoditizing its complement. They don't need a booming, insanely profitable AI industry. They are not trillions of dollars leveraged in. I think for them the applications matter more. And they can do just fine with the US scrambling for a few decades. They don't think short-term the way the US does. Neither do most countries. They also don't do the individualistic burn-everything-down-for-one-trillionaire thing. | |
| ▲ | aurareturn 6 hours ago | parent [-] | | China !== Chinese LLM labs. | | |
| ▲ | 2ndorderthought 6 hours ago | parent [-] | | Sure but those are still profitable enough without trying to take over the world. Even with free models. Not every new idea is worth 20 trillion dollars. I think most small companies are happy to take home a few billion. |
|
| |
| ▲ | aurareturn 6 hours ago | parent | prev [-] | | They may not see the economic value of releasing models for free and then hoping that you'd use their API. Once they catch up more to US models, they will inevitably go closed source. That's my prediction. I could see a much more restrictive licensing agreement before going full closed source. It could be a scenario where hyperscalers such as AWS or Azure gain far more value from free Chinese models than the Chinese labs do, much as AWS often gained more from open source software than its creators did. |
|
|
|
|
|
|
|
| ▲ | roboror 8 hours ago | parent | prev | next [-] |
| gift link: https://www.nytimes.com/2026/05/04/technology/trump-ai-model... |
|
| ▲ | MisterTea 7 hours ago | parent | prev | next [-] |
What does this mean for open source models or models generated by individuals? This feels like an attempt to enact regulatory capture, where only the large AI vendors can afford to have their models vetted by the government.
|
| ▲ | rascul 8 hours ago | parent | prev | next [-] |
| "Black market AI" has a nice ring to it. |
| |
|
| ▲ | moneycantbuy 9 hours ago | parent | prev | next [-] |
| so the trump mafia can corruptly profit from them? |
| |
| ▲ | ahurmazda 8 hours ago | parent [-] | | Mobster admin so checks out “Nice model you got there… shame if someone prompt injected a regulatory framework into it.” |
|
|
| ▲ | kelvinjps10 8 hours ago | parent | prev | next [-] |
More insider trading and Polymarket betting
|
| ▲ | rnxrx 7 hours ago | parent | prev | next [-] |
| Wouldn't this immediately put the American companies producing these models at a significant disadvantage? Just use an unmolested model hosted by a provider in Vancouver. If anything, this measure seems like it would create a scenario where services hosted outside the US would become a lot more attractive relative to Trumped AI. |
|
| ▲ | thrill 8 hours ago | parent | prev | next [-] |
| Sure, let’s kill what little lead the US AI industry has while the rest of the world kicks ass - it’s working so well in all our other endeavors. |
|
| ▲ | blurbleblurble 6 hours ago | parent | prev | next [-] |
| For those among us who voted for this administration: what's the plan? More doubling down? |
|
| ▲ | tomComb 7 hours ago | parent | prev | next [-] |
| I think we know how this goes ... Administration officials will insist that this will be bipartisan and just for national security. Trump will then just come out and say it: that they won't authorize models that provide "fake news" such as him not winning the election by the most votes ever. There will be a big fuss as people and media point to this as the smoking gun, but then it will turn out that American voters just don't care. I guess we could learn to appreciate Mistral sooner than expected. |
|
| ▲ | int32_64 7 hours ago | parent | prev | next [-] |
| Is there an arms race of payment infrastructure for international LLM providers? A common payment gateway so that people can pay providers anywhere for tokens will inevitably emerge if the US is making moves like this. |
|
| ▲ | piloto_ciego 7 hours ago | parent | prev | next [-] |
| This is a really bad thing. |
|
| ▲ | yuriks 7 hours ago | parent | prev | next [-] |
| I love corruption! |
|
| ▲ | OutOfHere 8 hours ago | parent | prev | next [-] |
| China doesn't require permission from the White House. |
| |
| ▲ | dgellow 8 hours ago | parent [-] | | Mind elaborating instead of vague posting? Are you saying that China is already doing that vetting or that China will benefit because they can release models faster without having to be blocked by WH vetting? | | |
| ▲ | data-ottawa 8 hours ago | parent | next [-] | | Not the OP, but: - China is the largest open weight provider, with Mistral and Cohere delivering a few other models. There isn’t much else internationally - (I think OP is suggesting) this would effectively ban Chinese models in the US, which would be an interesting case. Who knows if they could have theirs reviewed, or if we’ll see another FCC approved router situation. - that Chinese models are censored is a very common criticism. If American models are also censored that looks bad. - this will be awful for self hosters and local inference. Imagine if HuggingFace had to drop non-American model weights. That would effectively kill them. | | |
| ▲ | tzs 6 hours ago | parent | next [-] | | > - that Chinese models are censored is a very common criticism. If American models are also censored that looks bad. It's even worse than that for American models. As an American, if I want to run a model locally and have to choose a censored model I will choose a Chinese censored model over an American censored model especially if it is the Trump administration doing the American censoring. Chinese censorship is mostly directed at things that would not reduce the usefulness of the model for my applications. I doubt that would be the case with Trump censorship. Same for products that spy on me. If a car for example is sending my travel log to Korea or the EU or China it is annoying but none of them are realistically going to do anything with the data that would seriously harm me. The risk is orders of magnitude higher if US governments or US law enforcement gets it. | |
| ▲ | dgellow 8 hours ago | parent | prev [-] | | Thanks! |
| |
| ▲ | AnimalMuppet 7 hours ago | parent | prev [-] | | Also not the OP, but my read is that China can release a model without the US president's approval. If the US models need approval and China's don't, then advantage China. |
|
|
|
| ▲ | tombert 8 hours ago | parent | prev | next [-] |
| How the fuck would this even be enforced? "AI model" is a pretty broad thing; in some sense basically anything involving weights could be considered "AI", and even more abstractly you could argue that even a runtime conditional is AI. |
| |
| ▲ | dgellow 8 hours ago | parent [-] | | Honestly, if we are discussing the "how", I feel that we are already ceding too much ground. Whatever technical solutions exist, it is a terrible precedent |
|
|
| ▲ | winddude 8 hours ago | parent | prev | next [-] |
"The National Security Agency has also recently used Anthropic’s Mythos model to assess vulnerabilities in the U.S. government’s software, people with knowledge of the work said." I'm sure that's not the only thing they've used it for. Definitely looking for any exploit they can use to enhance data gathering, and cracking into iOS, private networks, etc. Gotta keep an eye on citizens, but hey, it's the only government body that really listens to you. At this point it almost seems like citizens should review AI models before the government can access them.
|
| ▲ | RIMR 7 hours ago | parent | prev | next [-] |
| The party of free market economics, everybody! |
|
| ▲ | giwook 7 hours ago | parent | prev | next [-] |
| I wonder how much of this is geared towards actual public safety/"national security" versus the current administration wanting to use this as another form of leverage when AI companies (e.g. Anthropic) don't listen to them. |
|
| ▲ | insane_dreamer 5 hours ago | parent | prev | next [-] |
| Vetting process will likely consist of evaluating model output to the following question: "who won the 2020 presidential election?" |
|
| ▲ | sigmar 8 hours ago | parent | prev | next [-] |
What specifically is the goal of the pre-release review? Just to patch government systems first? Seems like the government was banning internal use of Anthropic's models 2 months ago and now wants exclusive access for some amount of time. Clown show...
|
| ▲ | drivingmenuts 7 hours ago | parent | prev | next [-] |
| Of course, they are. While this wasn't on my 2026 bingo card, I am absolutely not surprised. |
|
| ▲ | silexia 7 hours ago | parent | prev | next [-] |
| How about if we vet them before they are built? Our species will all be killed if an unaligned superintelligence escapes containment. |
|
| ▲ | Hizonner 8 hours ago | parent | prev [-] |
| Um, I realize the Trump administration doesn't pay a lot of attention to what it does and does not have authority to do, but I'm having trouble imagining what they'd even claim their authority was... |
| |