cube00 4 hours ago

From that same X thread: Our agreement with the Department of War upholds our redlines [1]

OpenAI has the same redlines as Anthropic based on Altman's statements [2]. Yet somehow Anthropic gets banished for upholding its redlines while OpenAI ends up with the cash?

[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m

[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...

Nevermark 3 hours ago | parent | next [-]

> more stringent safeguards than previous agreements, including Anthropic's.

Except they are not "more stringent".

Sam Altman is being brazen to say that.

In their own agreement as Altman relays:

> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control

> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing

> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives

> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not sticking their necks out to hold back any abuse, despite many of their employees requesting a joint stand with Anthropic.

Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.

In other words, no OpenAI restriction at all.

That is not at all comparable to a requirement that the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent", and a rare and significant pushback against governmental AI abuse.

(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)

clhodapp 2 hours ago | parent | next [-]

Yep. It's the difference between "Don't do these things, regardless of what the law says." and "Do whatever you want, but please follow your own laws while you do it".

As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"

qmarchi 2 hours ago | parent | prev | next [-]

Easy way to summarize it: "You're not allowed to do these things, except for all of the laws that allow you to do these things."

dwallin 2 hours ago | parent | next [-]

It’s a non-clause that is written to sound like they are doing something to prevent these uses when they aren’t. “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things. Plus the administration itself gets to decide if it meets legal use.

hn_throwaway_99 an hour ago | parent | prev | next [-]

> except for all of the laws that allow you to do these things.

It's even worse than that, because this administration has made it clear it will push as hard as possible to have the law mean whatever it says it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - and "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that). There are numerous cases where they have taken existing law and stretched it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot once someone has already been killed or, you know, the planet gets destroyed because the AI system decides to go WarGames on us.

EGreg 2 hours ago | parent | prev [-]

Let me clear it up

The Trump administration acts cartoonish and fickle. It can easily punish one group, then agree to work with another group on the same terms to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly what they have done with tariffs, for example.

Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal; the rest is just lawyering for the public. Internally at the DoD, I'm sure it really came down to that phrase, "all lawful uses". So the lawyers were able to agree to it, and the public gets this mumbo-jumbo.

I thought mass surveillance of Americans was unlawful by the DoD, CIA and NSA? We have the FBI for that, right? :)

vlovich123 2 hours ago | parent [-]

Sure, but OpenAI is also being disingenuous here, pretending they're operating under the same principles Anthropic is. They're not, and the things OpenAI is comfortable doing are exactly the things Anthropic said it wouldn't do.

pear01 an hour ago | parent | prev | next [-]

Brings to mind the infamous line from Nixon:

"When the president does it, that means it is not illegal".

This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent many of the people pushing the excesses going on today in government share a shameless adherence to this creed.

aardvarkr 39 minutes ago | parent | prev | next [-]

This is the same government caught spying on its citizens by Snowden so I don’t trust them at all.

spiderice 38 minutes ago | parent | prev | next [-]

That seems exactly what it should be. The United States military should be able to do what the law allows. If we don't think they should be allowed to do something, we should pass laws. Not rely on the goodness of Sam Altman.

stingraycharles 2 hours ago | parent | prev [-]

This implies that OpenAI must build, release, and maintain a model without any safeguards, which is probably the big win, and maybe something Anthropic never wants to do.

jacquesm 2 hours ago | parent [-]

I don't think that is the correct conclusion.

But they won't be releasing it; they will be leasing it to the DoD, and all their other customers will get the safeguarded model.

AlexVranas 3 hours ago | parent | prev | next [-]

OpenAI is playing games.

When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."

When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."

That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.

nkassis 3 hours ago | parent | prev | next [-]

OpenAI's post about their contract has the "redlines" described and they don't match what Anthropic wanted. (even if the text tries to imply they do)

https://openai.com/index/our-agreement-with-the-department-o...

sowbug 3 hours ago | parent [-]

This is a good comment detailing the differences: https://news.ycombinator.com/item?id=47200771

Wowfunhappy 3 hours ago | parent | prev | next [-]

> However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?

The current administration is so incompetent that I find this perfectly believable.

I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.

I don't know if that's actually what happened here, I just find it plausible.

el_benhameen 30 minutes ago | parent | next [-]

Absolutely incompetent, but I don’t think that’s the cause here. I think Anthropic’s sin was publicly challenging the administration. They’re huge on optics. You can get away with anything as long as you praise and bow in public.

randall 3 hours ago | parent | prev [-]

same. this is about losing a negotiation and saving face / exacting revenge.

jellyroll42 2 hours ago | parent | prev | next [-]

Sam Altman has no scruples. Dark Triad personality. No reason to believe anything he says.

jacquesm 2 hours ago | parent [-]

The same goes for anybody still working at OpenAI past Monday morning 9 am.

Jeremy1026 2 hours ago | parent [-]

People's need for food and shelter doesn't go away because their employer is unethical.

jacquesm 2 hours ago | parent | next [-]

There are many employers. OpenAI employees that quit on account of this will be in high demand at the other AI companies, especially the ones that don't bend over in 30 seconds when Uncle Donald comes calling.

scottyah 7 minutes ago | parent | prev | next [-]

I don't think you could find a single person working for OpenAI that couldn't find employment elsewhere within a month that pays more than enough for food and shelter. This is a ridiculous statement.

pibaker 31 minutes ago | parent | prev | next [-]

Per levels.fyi, the median salaries of most OpenAI positions are above $300k. Even "technical writers" have a median pay of $197k. I searched around the internet, and it seems like even entry-level positions receive well above $150k. Apart from people with severe lifestyle bloat or an unholy number of dependents, I doubt many people working there would face immediate financial difficulties if they quit.

Anyway, it is also amusing to hear tech people defend their right to earn some of the fattest salaries on this planet using the smol bean technique after a decade of "why wouldn't the West Virginian coal miner just learn to code." It was always about maintaining the lifestyle of yearly Japan vacations and MacBook upgrades and never about subsistence.

_heimdall an hour ago | parent | prev | next [-]

Anthropic demanded to define the redlines itself. OpenAI and others are hiding behind the veil of what is "lawful use" today; they aren't defining their own redlines, and they are ignoring the executive branch's power to change what is "lawful" tomorrow.

827a 3 hours ago | parent | prev | next [-]

My understanding of the difference (influenced mostly by consuming too many anonymous tweets on the matter over the past day, so it could be entirely incorrect) is this: Anthropic wanted a kill switch actively in the loop to stop usage that went against the terms of use (maybe a system-prompt-level thing, maybe monitoring systems, humans with that authority, etc.). OpenAI's position was more like "if you break the contract, the contract is over", without going so far as to say they'd immediately stop service (maybe there's an offboarding period, transition of service, etc.).

bastawhiz an hour ago | parent | prev | next [-]

Altman donated a million to the Trump inauguration fund. Brockman is the largest private maga donor. You don't have to be a rocket scientist to understand what's going on here.

rootusrootus 3 hours ago | parent | prev | next [-]

Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.

moogly 3 hours ago | parent | next [-]

Turns out both companies ran the agreement through their legal departments (Claude and GPT), and one of them did a poor summary. I (think I) jest, but this is probably going to be a thing as more and more companies use LLMs for legal work.

snickerbockers 2 hours ago | parent | prev | next [-]

One nuance I've noticed: Anthropic's statement specifically said that the use of their products for these purposes was not included in the contract with the DoD, but it stops short of saying it was prohibited by the contract.

Maybe it's just a weak choice of words in Anthropic's statement, but the way I read it, I get the impression that Anthropic assumes it retains discretion over how its products are used for any purpose not outlined in the contract, while the DoD sees it more along the lines of a traditional sale, in which the seller relinquishes all rights over the product by default and has to enumerate in the contract any rights it will retain.

generic92034 3 hours ago | parent | prev | next [-]

Punish one, teach a hundred (companies).

micromacrofoot 2 hours ago | parent | prev | next [-]

The president of OpenAI donated $25 million to Trump last month, OpenAI uses Oracle services (Larry Ellison), the Kushners have lots invested in OpenAI, and Altman is pals with Peter Thiel.

yoyohello13 3 hours ago | parent | prev | next [-]

The reasoning is one company is ‘left and woke’ the other gives money to Trump.

Analemma_ 3 hours ago | parent [-]

$25 million to be exact, one of Trump's largest individual donors. From a guy who "doesn't consider himself political", lol. [0]

[0]: https://www.wired.com/story/openai-president-greg-brockman-p...


softwaredoug 2 hours ago | parent | prev | next [-]

The difference is Anthropic wants contractual limitations on usage, explicitly spelling out cases of Mass Surveillance.

OpenAI has more of an understanding that the technology will follow the law.

There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open for judicial interpretation.

The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.

scarmig 2 hours ago | parent [-]

Between Anthropic, the military, and Congress, I have the least faith in Congress to make knowledgeable policy around tech.

amelius 2 hours ago | parent | prev | next [-]

There will be a lawsuit about this.

Analemma_ 3 hours ago | parent | prev | next [-]

It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.

xeonmc 3 hours ago | parent [-]

    > 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor
Ah, so they’ll be applying the good ol’ Three-Fifths Rule[0], a classic.

[0] https://en.wikipedia.org/wiki/Three-fifths_Compromise

slibhb an hour ago | parent | prev [-]

It's almost like the Trump administration wanted to switch providers, and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump or Trump super PACs.