epicprogrammer 6 hours ago

It's easy to frame this purely as an ethical battle, but there's a massive financial reality here. Training frontier models requires astronomical amounts of capital, and the DOD is one of the few entities with deep enough pockets to fund the next generation of compute. Anthropic turning down this Pentagon contract over safety disagreements is a huge gamble. They are essentially betting that the enterprise market will reward their 'Constitutional AI' approach enough to offset the billions OpenAI will now make from government defense contracts. OpenAI wants the DOD money while maintaining a consumer-friendly PR sheen; Amodei is just pointing out that they can't have it both ways.

aardvarkr 6 hours ago | parent | next [-]

It’s a $200M contract. That’s not nothing, but it’s hardly a huge sum for companies that are spending billions on infrastructure.

I’m sure Anthropic has signed up more than that in new revenue this week in response to this debacle. Where they’re actually screwed is if the government follows through and declares Anthropic a supply-chain risk.

DesaiAshu 6 hours ago | parent | next [-]

It's not "just" a $200M contract; it's the start of a lucrative relationship.

1. Stargate seemed to require a dedicated press conference by the President to hit its funding targets. Why risk that level of politicization if the government relationship didn't matter?

2. Greg Brockman donated $25M to a Trump MAGA Super PAC last year. Why risk so much political backlash for a low-leverage return of $200M on $25M spent?

3. During WW2, military spend shot from 2% to 40% of GDP. The administration is requesting a $1.5T military budget for FY2027, up from $0.8T for FY2025. They have made clear in the past two months that they plan to use it and are not stopping anytime soon.

If you believe "software eats the world," it is reasonable to expect the share of total military spend captured by software companies to increase dramatically over the next decade. $100B (a ~10% capture) is a reasonable possibility for the domestic military AI TAM in FY2027 if the spending increase is approved (so far, Republicans have not broken ranks with the administration on any meaningful policy).
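For what it's worth, the back-of-envelope behind that figure looks like this. A rough sketch only: all inputs are the numbers quoted in this comment, not verified data, and the implied capture rate is my own derived quantity.

```python
# Back-of-envelope on the military-AI TAM claim above.
# All dollar figures are the commenter's assumptions, not verified data.
fy2025_budget = 0.8e12   # requested FY2025 US military budget (per comment)
fy2027_budget = 1.5e12   # requested FY2027 US military budget (per comment)
ai_tam = 100e9           # the $100B military-AI TAM claimed above

budget_growth = fy2027_budget / fy2025_budget   # ~1.88x in two fiscal years
implied_share = ai_tam / fy2027_budget          # ~6.7% of the total budget

print(f"budget growth: {budget_growth:.2f}x")
print(f"implied AI share of FY2027 budget: {implied_share:.1%}")
```

Note that $100B is only ~6.7% of the full $1.5T request, so the "10% capture" presumably refers to a narrower software-addressable slice of the budget rather than the whole thing.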

If US military actions continue to accelerate, other countries will also ratchet up military spend, largely on nuclear arsenals and AI drones (France has already announced an increase in its arsenal). This further grows the TAM.

Given the competition and lack of moat in the consumer/enterprise markets, I am not sure there is a viable path for OpenAI to cover its losses and fund its infrastructure ambitions without becoming the preferred AI vendor for a rapidly increasing military budget. The devices bet seems to be the most practical alternative, but there is far more competition there, both domestically (Apple, Google, Motorola) and globally (Xiaomi, Samsung, Huawei), than there is for military AI.

Having run an unprofitable P&L for a decade, I can confidently state that a healthy balance sheet is the only way to maintain and defend one's core values and principles. As the "alignment" folks in the AI industry are likely to learn, the road to hell (aka a heavily militarized world) is oft paved with the best intentions.

solenoid0937 5 hours ago | parent [-]

First, I have to say I loved your thoughtful & detailed comment. You have clearly considered this from the financial side; let me add some color from the perspective of someone working with frontier researchers.

> As the "alignment" folks on the AI industry are likely to learn

I will push back here. Dario & co. are not starry-eyed naive idealists, as implied. This is a calculated decision to maximize their goal (safe AGI/ASI).

You have the right philosophy on the balance sheet side of things, but what you're missing is that researchers are more valuable than any military spend or any datacenter.

It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.

There is no substitute for sheer IQ:

- You can't buy it (god knows Zuck has tried, and failed, to earn their respect).

- You can't build it (yet.)

- And collaboration amongst less intelligent people does not reliably achieve the requisite "Eureka" realizations.

Had Anthropic gone forth with the DoD contract, they would have lost this top crowd, crippling the firm. On the other hand, by rejecting the contract, Anthropic's recruiting just got much easier (and OAI's much harder).

Generally, the defense crowd have a somewhat inflated sense of self worth. Yes, there's a lot of money, but very few highly intelligent people want to work for them. (Almost no top talent wants to work for Palantir, despite the pay.) So, naturally:

- If OpenAI becomes a glorified military contractor, they will bleed talent.

- Top talent's low trust in the government means Manhattan Project-style collaborations are dead in the water.

As such, AGI will likely emerge from a private enterprise effort that is not heavily militarized.

Finally, the Anthropic restrictions will last, what, 2.5 more years? They are being locked out of a narrow subset of use cases (DoD contract work only; vendors can still use it for all other work; Hegseth's reading of the SCR is incorrect) and have farmed massive reputation gains with both top talent and the next administration.

vhiremath4 3 hours ago | parent | next [-]

This is an interesting perspective. What happens if there is a large global war? Do researchers who were previously against working with the DoD end up flipping out of duty? Does the war budget go up? Does the DoD lift any ban on Anthropic for the sake of getting the best model, and does Anthropic soften its stance against working on autonomous weapons systems?

I don’t know the answers to these questions, but if the answer is “yes” to at least 1 or 2, then I think the equation flips quite a bit. This is what I’m seeing in the world right now, and it’s disconcerting:

1. Ukraine and Russia have been in a war that has dragged on much longer than most people would have guessed. This has created a divide in political allegiance within the United States and Europe.

2. We captured the leader of Venezuela. Cuba is now scared they are next.

3. We just bombed Iran and killed their supreme leader.

4. China and the US are, of course, in a massive economic race for world power supremacy. The tensions have been steadily rising, and they are now feeling the pressure of oil exports from Iran grinding to a halt.

5. The past couple days Macron has been trying to quell tension between Israel and Lebanon.

I really hope we are not headed into war. I hope the fact that we all have nukes and rely on each other's supply chains deters one. But man does it feel like the odds of one are increasing, and man does that seem to throw a wrench into this whole thing with Anthropic vs. OpenAI.

guitheengineer 2 hours ago | parent | prev | next [-]

that is assuming there will be elections, which many people don't believe will be the case.

reminder that trump has been flirting with simply staying in power (2028 hats and talk of a third term) and is responsible for attempting a coup the last time he lost.

personally I think there's a possibility where he'll just declare martial law and stay in power at the end of his term.

nickysielicki 2 hours ago | parent | prev [-]

> researchers are more valuable than any military spend or any datacenter. It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.

This is a massive cope imo. The reason that the AI industry is so incestuous is just because there are only a handful of frontier labs with the compute/capital to run large training clusters.

Most of the improvements that we’ve seen in the past 3 years are due to significantly better hardware and software, just boring and straightforward engineering work, not brilliant model architecture improvements. We are running transformers from 2017. The brilliant researchers at the frontier labs have not produced a successor architecture in nearly a decade of trying. That’s not what winning on research looks like.

Have there been some step-change improvements? Sure. But by far the biggest improvement can be attributed to training bigger models on more badass hardware, and to the hardware availability to serve them cheaply. To act like the DoD isn’t going to be able to stand up PyTorch or vLLM and get a decent result is hilarious: the reason you use Slurm and MPI and OpenSHMEM is because national labs and the DoD were using them first. NCCL is just GPU-accelerated, scope-reduced MPI. NVSHMEM is just GPU-accelerated, scope-reduced OpenSHMEM.

If anything, the DoD doesn’t have the inference-throughput requirements that the unicorns have, and might be able to immediately outperform them by training a massive dense model without optimizing for time-to-first-token or throughput. They don’t have to worry about whether the $/1M tokens makes it economically feasible to serve, which is a primary consideration for the unicorns today when they’re choosing their parameter counts. They can just rate-limit the endpoint and share it, with a 2-hour queue time.
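To make the serving-economics point concrete: a dense transformer needs roughly 2·N FLOPs per generated token, so serving cost scales linearly with parameter count. Here's a rough sketch of that back-of-envelope; the GPU throughput, utilization, and hourly price are illustrative assumptions of mine, not real vendor numbers.

```python
def cost_per_million_tokens(params: float,
                            gpu_flops: float = 1e15,        # assumed peak GPU FLOP/s
                            utilization: float = 0.4,       # assumed effective utilization
                            gpu_dollars_per_hour: float = 2.0) -> float:
    """Rough $ cost to generate 1M tokens from a dense model with `params`
    weights, using the ~2*N FLOPs-per-token estimate. Illustrative only."""
    flops_per_token = 2 * params
    tokens_per_second = gpu_flops * utilization / flops_per_token
    hours_per_million = 1e6 / tokens_per_second / 3600
    return hours_per_million * gpu_dollars_per_hour

# Cost per token grows linearly with parameter count, so a 10x bigger
# dense model is ~10x more expensive to serve per token:
print(cost_per_million_tokens(70e9))
print(cost_per_million_tokens(700e9))
```

That linear relationship is exactly why the consumer-facing labs cap parameter counts for served models, and why a rate-limited internal endpoint with a queue doesn't face the same constraint.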

The government invented HPC, it’s their world and you’re just playing in it.

> Generally, the defense crowd have a somewhat inflated sense of self worth.

/eyeroll but nobody can do what you do!

ExoticPearTree 6 hours ago | parent | prev | next [-]

That is only the direct Pentagon contract. They will lose much more, because no defense contractor, subcontractor, and so on can use them for anything defense-related (even if the model is only used to invent a new type of screw, if that screw is going to end up in anything military).

So yeah, they bet a whole lot on “look at us, we have morals”.

hedora 5 hours ago | parent | next [-]

There's no legal basis for blocking defense contractors from using them. Trump's claiming he can do so, but the law doesn't back him up. He'll lose in any fair court, or any corrupt court that values billionaire interests over virtue signaling to the orange one (like the Supreme Court).

Also, they got a huge PR win, and jumped to #1 on the Apple App Store. Consumer market share is going to decide which of the AI companies is the market leader, not fickle government contracts.

b112 4 hours ago | parent | next [-]

Consumer market share? Absolutely not.

If you look at what generates cash, it's corp-to-corp, and that's true across most industries. While some markets are mostly consumer-facing, LLMs have enormous business-facing revenue potential. The consumer market is a gnat in comparison.

ExoticPearTree 4 hours ago | parent | prev [-]

There are always Executive Orders that can enforce that. It is not like in the movies where they will sort stuff out in 2 weeks in a single trial. It is going to take years, and we'll see if Anthropic survives that.

sixothree 4 hours ago | parent [-]

I'm guessing they believe they will be around longer than this administration.

jitl 4 hours ago | parent | prev [-]

Their revenue went up $4 billion in the week since this story started.

3 hours ago | parent [-]
[deleted]
fwipsy 6 hours ago | parent | prev [-]

I think the point is that there's potentially a lot more than $200m in defense dollars at stake here, in the future.

tdeck 5 hours ago | parent | prev | next [-]

> It's easy to frame this purely as an ethical battle, but there's a massive financial reality here.

As opposed to all those famous ethical battles where there's nothing in it for you to do the wrong thing?

toraway 3 hours ago | parent [-]

Based on OP's comment history, 50/50 chance AI wrote that...

dev_l1x_be 3 hours ago | parent | prev | next [-]

Are you arguing against free-market capitalism in favor of fascism? If OpenAI needs billions in taxpayer money to survive, should that project exist? Why?

Paddyz 4 hours ago | parent | prev [-]

[dead]