Imnimo 10 hours ago

I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.

tedsanders 7 hours ago | parent | next [-]

I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.

baconner 6 hours ago | parent | next [-]

Respectfully, it's very hard to see how anyone could look at what just happened and conclude that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.

Rebuff5007 4 hours ago | parent | next [-]

> it's very hard to see how anyone could look at what just happened

I think what you are missing is their annual comp with two commas in it.

the_real_cher 3 hours ago | parent | next [-]

This, for that check they'll be building the autonomous robots themselves, saying "they're food delivery robots, that's not a gun, that's a drink dispenser!"

cheonn638 2 hours ago | parent [-]

> they're food delivery robots, that's not a gun, that's a drink dispenser!

You underestimate how many top AI scientists are perfectly okay with building autonomous weapons systems and are not ashamed of it.

Me, and 99% of HN readers, would gladly pull the trigger to release a missile from a drone if we were paid even just US$1,000,000/year.

Now note that many L7+ at OpenAI are making $10 million+ per year.

the_real_cher 31 minutes ago | parent | next [-]

The world needs a nuclear war to just eliminate 99% of human life and just start over.

mr_mitm 42 minutes ago | parent | prev | next [-]

How many?

tibbydudeza an hour ago | parent | prev [-]

True that - everybody has a price.

lazide an hour ago | parent | prev [-]

Hey, with expected stock payout - tres commas!

Shit, I wonder if I still have any of those ‘tres commas club’ t-shirts lying around?

readitalready 2 hours ago | parent | prev | next [-]

As an OpenAI employee, quitting wouldn't be a problem, as you have a much higher chance of being successful after quitting than anyone else. You could go to any VC and they would fund you.

skepticATX 5 hours ago | parent | prev | next [-]

One explanation is that this is effectively a quid pro quo, given Brockman’s enormous financial support of the current president.

ZeroGravitas 3 hours ago | parent [-]

Yep, theoretically it could just be oligarchic corruption and not institutional insanity at the highest levels of the government. What a reassuring relief it would be to believe that.

monooso 5 hours ago | parent | prev | next [-]

I agree with your assessment, but given the past behaviour of this administration I wouldn't be shocked to discover that the real reason is "petulance".

khazhoux 3 hours ago | parent [-]

It’s obvious retaliation, and will be struck down by the courts.

tedsanders 6 hours ago | parent | prev | next [-]

I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.

az226 4 hours ago | parent | next [-]

Your ballooned unvested equity package is preventing you from seeing the difference between "our offering/deal is better" and "designated supply chain risk, plus threatening all companies who do business with the government to stop using Anthropic or be similarly dropped" (which goes well past what the designation itself requires). It's easier being honest.

tedsanders 3 hours ago | parent [-]

The supply chain risk stuff is bogus. Anthropic is a great, trustworthy company, and no enemy of America. I genuinely root for Anthropic, because its success benefits consumers and all the charities that Anthropic employees have pledged equity toward.

Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides and I acknowledge it’s probably impossible to eliminate all possible bias within myself.

One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.

slg 3 hours ago | parent | next [-]

>Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.

That isn't what many of us are challenging here. We're not concerned about OpenAI's ethics because they agreed to work with the government after Anthropic was mistreated.

We're skeptical because it seems unlikely that those restrictions were such a third rail for the government that Anthropic got sanctioned for asking for them, but then the government immediately turned around and voluntarily gave those same restrictions to OpenAI. It's just tough to believe the government would concede so much ground on this deal so quickly. It's easier to believe that one company was willing to agree to a deal that the other company wasn't.

throw0101c 38 minutes ago | parent | next [-]

> It's just tough to believe the government would concede so much ground on this deal so quickly.

Well… TACO.

lsaferite an hour ago | parent | prev [-]

Not "asking for them": insisting that the already agreed-to terms be respected.

intothemild 3 hours ago | parent | prev [-]

We all know who's lying... the guy whose track record is constant lying: your boss.

tibbydudeza an hour ago | parent [-]

Ouch but true - he is the Elon of AI.

edoceo 4 hours ago | parent | prev | next [-]

Friend, this reads like that situation where your paycheck prevents you from seeing clearly - I forget the exact quote. Sam doesn't play a straight game and neither does the administration - there are more than a few examples.

komali2 4 hours ago | parent | next [-]

Never try to convince someone of something they're paid to not believe.

davidmr 3 hours ago | parent | prev [-]

Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it”

DavidSJ 4 hours ago | parent | prev [-]

OpenAI should not be agreeing to any contract with DOD under these circumstances of Anthropic being falsely labeled a supply chain risk.

chrisfosterelli 6 hours ago | parent | prev | next [-]

I agree with what you're saying, but given the egos involved in the current admin there's a practical interpretation:

1. Department of War broadly uses Anthropic for general purposes

2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons

3. Anthropic disagrees and it escalates

4. Anthropic goes public criticizing the whole Department of War

5. Trump sees a political reason to make an example of Anthropic and bans them

6. The entirety of the Department of War now has no AI for anything

7. Department of War makes agreement with another organization

If there was only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or it was seen as an unproven use case of unknown value compared to the more proven value from the rest of their organizational use, it would make sense that they'd be 1) in practice willing to compromise on this, and 2) now unable to do so with Anthropic specifically because of the political kerfuffle.

I imagine they'd rather not compromise, but if none of the AI companies are going to offer them it then there's only so much you can do as a short term strategy.

juggle-anyhow 5 hours ago | parent | next [-]

Well at least we know now that the department of war is less capable than before. All because the big man shit his pants while Anthropic was in view.

pbhjpbhj 41 minutes ago | parent | prev [-]

>5. Trump sees a political reason

Like, they haven't paid me a bribe? That seems to be the only "politics" at play in Trump's head.

DennisP 42 minutes ago | parent | prev | next [-]

And unless GP has a security clearance, they can't know for sure what OpenAI is allowing on classified networks.

JumpCrisscross an hour ago | parent | prev | next [-]

> while another agrees the the same terms that led to that

One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.

spongebobstoes 6 hours ago | parent | prev | next [-]

anthropic has nothing but a contract to enforce what is appropriate usage of their models. there are no safety rails, they disabled their standard safety systems

openai can deploy safety systems of their own making

from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model the military needs a legal opinion before they can use the tool, or they might misuse it by accident

this is also preferable if you think the government is untrustworthy. an untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that openai builds or trains into the model

nawgz 6 hours ago | parent [-]

Source?

manmal 4 hours ago | parent | prev | next [-]

Are you saying that everything so far in this administration has been 100% rational?

willis936 2 hours ago | parent | prev | next [-]

>or there's another reason for the loud attempt to blacklist Anthropic

This one is very easy. Trump has a well established pattern of making a loud statement to make it appear he didn't lose, even when he did.

cowsandmilk 3 hours ago | parent | prev | next [-]

> one company ends up classed a "supply chain risk" while another agrees the the same terms that led to that

Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.

az226 4 hours ago | parent | prev | next [-]

And Sam is a habitual liar.

jdiaz97 3 hours ago | parent | next [-]

He literally just got community noted for lying. So much for a non-profit CEO or whatever it is now.

kotaKat 2 hours ago | parent | prev [-]

And an abuser, but they keep covering that one up.

ukblewis 30 minutes ago | parent | prev [-]

They aren’t the same terms. You are clearly an enemy bot or an uneducated fool. OpenAI has agreed to mass surveillance of those who are not Americans; Anthropic refused. OpenAI’s term only restricts surveillance from targeting Americans.

tfehring 5 hours ago | parent | prev | next [-]

(Disclosure, I'm a former OpenAI employee and current shareholder.)

I have two qualms with this deal.

First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.

Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.

Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.

[0] https://x.com/sama/status/2027578652477821175

[1] https://x.com/UnderSecretaryF/status/2027594072811098230

syllogism an hour ago | parent | next [-]

I don't understand how any sort of deal is defensible in the circumstances.

Government: "Anthropic, let us do whatever we want"

Anthropic: "We have some minimal conditions."

Government: "OpenAI, if we blast Anthropic into the sun, what sort of deal can we get?"

OpenAI: "Uh well I guess I should ask for those conditions"

Government: blasts Anthropic into the sun "Sure whatever, those conditions are okay...for now."

By taking the deal with the DoW, OpenAI accepts that they can be treated the same way the government just treated Anthropic. Does it really matter what they've agreed?

spondyl 4 hours ago | parent | prev [-]

Jeremy Lewin's tweet referenced "all lawful use" as the particular term that seems to be the sticking point.

While I don't live in the US, I could imagine the US government arguing that third party doctrine[0] means that aggregation and bulk-analysis of say; phone record metadata is "lawful use" in that it isn't /technically/ unlawful, although it would be unethical.

Another avenue might also be purchasing data from ad brokers for mass-analysis with LLMs which was written about in Byron Tau's Means of Control[1]

[0] https://en.wikipedia.org/wiki/Third-party_doctrine

[1] https://www.penguinrandomhouse.com/books/706321/means-of-con...

az226 4 hours ago | parent [-]

The term "lawful use" is a joke to the current administration, which goes after senators for sedition for reminding government employees not to carry out unlawful orders. It’s all so twisted.

ChadNauseam 6 hours ago | parent | prev | next [-]

Did Sam Altman say that he wouldn't allow ChatGPT to be used for fully autonomous weapons? (Not quite the same as "human responsibility for use of force".)

I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.

But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or any point in the future.

scarmig 6 hours ago | parent | next [-]

> you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons"

To be fair, Anthropic didn't say that either. Merely that autonomous weapons without a HITL aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)

ChadNauseam 3 hours ago | parent | next [-]

They specifically said they never agreed to let the DoD use anthropic for fully autonomous weapons. They said "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance [...] Fully autonomous weapons"

Their rationale was pragmatic. But they specifically said that they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.

gizzlon 2 hours ago | parent | prev [-]

> it isn't a moral stance so much as a pragmatic one

Agreed, the moral stance is saying no to DoJ and the US government

khalic 4 hours ago | parent | prev | next [-]

You're not overanalyzing anything, you're using critical thinking dissecting company communications. Kudos

Barbing 6 hours ago | parent | prev [-]

Does he do employee town halls where they could ask?

throwawaywd89e 6 hours ago | parent | prev | next [-]

"AI shouldn't be used for mass surveillance or autonomous weapons". The statement from OpenAI virtually guarantees that the intention is to use it for mass surveillance and autonomous weapons. If this wasn't the intention, then the qualifier "domestic" wouldn't be used, and they would be talking about "human in the loop" control of autonomous weapons, not "human responsibility", which just means there's someone willing to stand up and say, "yep, I take responsibility for the autonomous weapon system's actions", which, let's be honest, is the thinnest of thin safety guarantees.

4b11b4 17 minutes ago | parent | prev | next [-]

lol, naive as hell. Why would your company's agreement be the same as the one the other company just refused? My question doesn't even make sense; it's a contradiction, therefore your statement must be false. There, it's proven.

pear01 7 hours ago | parent | prev | next [-]

Why would you believe that? If that were the case what was the issue with Anthropic even about?

You, and your colleagues, should resign.

thunky 29 minutes ago | parent | next [-]

> You, and your colleagues, should resign.

It would be better if everyone stopped doing business with OpenAI so these employees lose their stock value.

But of course neither of these things will happen.

permo-w 5 hours ago | parent | prev | next [-]

You tell me why an employee would believe something convenient to them continuing to receive their paycheck

gizzlon 2 hours ago | parent [-]

Life is more than a paycheck. We should raise the bar a little IMO. Turning down money for good reasons is not something extreme we should only expect from saints.

komali2 4 hours ago | parent | prev | next [-]

Imo the more ethical thing is obstructionism. Twitter's takeover showed it's pretty easy to find True Believer sycophants to hire. Better to play the part while secretly finding ways to sabotage.

booleandilemma 2 hours ago | parent | prev [-]

That quote comes to mind... It is difficult to get a man to understand something when his salary depends upon his not understanding it.

Obviously nothing is going to make Teddy quit his cushy OpenAI job.

mattalex 5 hours ago | parent | prev | next [-]

Assuming this is real: why do you think Anthropic was put on what is essentially an "enemy of the state" list and OpenAI wasn't?

The two things Anthropic refused to do are mass surveillance and autonomous weapons, so why do _you_ think OpenAI refused the same things and still did not get placed on the exact same list?

It's fine to say "I'm not going to resign. I didn't even sign that letter", but thinking that OpenAI can get away with not developing autonomous weapons or mass surveillance is naive at the very best.

_heimdall an hour ago | parent | prev | next [-]

My understanding is that OpenAI's deal, and the deal others are signing, implicitly prevents the use of LLMs for mass domestic surveillance and fully autonomous weapons, because today one can argue those aren't legal and the deal is a blanket allowing all lawful use.

Today it can't be used for mass surveillance, but the executive branch has all the authority it needs to later deem that lawful if it wishes to, the Patriot Act and others see to that.

Anthropic was making the limits contractually explicit, meaning the executive branch could change the line of lawfulness and still couldn't use Anthropic models for mass surveillance. That is where they got into a fight and that is where OpenAI and others can claim today that they still got the same agreement Anthropic wanted.

assimpleaspossi an hour ago | parent | prev | next [-]

How would OpenAI respond to China or Russia using OpenAI--or any AI--for mass surveillance or autonomous weapons?

exizt88 37 minutes ago | parent | prev | next [-]

https://en.wikipedia.org/wiki/Bad_faith_(existentialism)

scarmig 6 hours ago | parent | prev | next [-]

Why do you suppose OpenAI's deal led to a contract, while Anthropic's deal (ostensibly containing identical terms) gets it not only booted but declared a supply chain risk?

ryan_n an hour ago | parent | prev | next [-]

For the record I don’t care if you quit or not. Cash rules after all… However, you are incredibly naive if you think the current admin will follow through on those terms.

phs318u 6 hours ago | parent | prev | next [-]

Thank you for responding. Everyone wants to think they will “do the right thing” when their own personal Rubicon is challenged. In practice, so many factors are at play, not least of which are the other people you may be responsible for. The calculus of balancing those differing imperatives is only straightforward for those that have never faced this squarely. I’ve been marched out of jobs twice for standing up for what I believed to be right at the time. Am still literally blacklisted (much to the surprise of various recruiters) at a major bank here 8 years after the fact. I can’t imagine that the threat of being blacklisted from a whole raft of companies contracting with a known vindictive regime would make the decision easier.

latexr 5 hours ago | parent | prev | next [-]

> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons

And you believe the US government, let alone the current one, will respect that? Why? Is it naïveté, or do you support the current regime?

> If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit.

So your logic is your company is selling harmful technology to a bunch of known liars who are threatening to invade democratic countries, but because they haven’t lied yet in this case (for lack of opportunity), you’ll wait until the harm is done and then maybe quit?

I’ll go out on a limb and say you won’t. You seem to be trying really hard to justify to yourself what’s happening so you can sleep at night.

Know that when things go wrong (not if, when), the blood will be on your hands too.

virtualritz 3 hours ago | parent | prev | next [-]

Giving you the benefit of the doubt and assuming [1] does not play a role in your thinking:

I don't mean this in any way rude, and I apologize if it comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.

[1] https://news.ycombinator.com/item?id=47189650#47189970

syllogism 3 hours ago | parent | prev | next [-]

You should quit because the only reasonable thing for your leadership to have done is to refuse to sign any agreement with DoW whatsoever while it's attempting to strongarm Anthropic in this fashion.

It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.

If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.

It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?

What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?

segmondy 5 hours ago | parent | prev | next [-]

You can't be this naive?

Griffinsauce 5 hours ago | parent | prev | next [-]

Aside from that unlikely read, this deal was still used as a pressure point on Anthropic; there's absolutely no way OpenAI was not used as a stick to hit with during negotiations.

What is your red line?

mda 3 hours ago | parent | prev | next [-]

I can totally see why you should quit, but we see different things apparently.

trvz 6 hours ago | parent | prev | next [-]

You may have missed that no single word said or written by any of the current US government’s members can be believed.

curiousgal 4 hours ago | parent | prev | next [-]

This is not meant as a personal attack but this has got to be the most naive thing I've read.

nullocator 5 hours ago | parent | prev | next [-]

I don't know you, so maybe you're actually for real and speaking on good faith here but honestly this and your other responses in this thread read exactly like "...salary depends on not understanding"

Nekorosu 3 hours ago | parent | prev | next [-]

I won't trust a word coming from Sam Altman's mouth until I see official signed documents (which I won't).

johnbellone 2 hours ago | parent [-]

You should’ve stopped at don’t trust a word out of his mouth.

bambax 4 hours ago | parent | prev | next [-]

"It is difficult to get a man to understand something, when his salary depends on his not understanding it."

sensanaty 2 hours ago | parent | prev | next [-]

Assuming this isn't a troll and you really think this, you should at least have the cojones to admit you're taking the blood money instead of trying to pretzel the truth so hard that you just look like a moron.

kaashif 6 hours ago | parent | prev | next [-]

Anthropic is deemed a betrayer and a supply chain risk for actually enforcing their principles.

OpenAI agrees to be put in the same position as Anthropic.

It seems like you must actually somehow believe that history will repeat itself, Hegseth will deem OpenAI a supply chain risk too, then move to Grok or something?

There's surely no way that's actually what you believe...

vimda 4 hours ago | parent | prev | next [-]

"domestic" "mass" surveillance, two words that can be stretched so thin they basically invalidate the whole term. Mass surveillance on other countries? Guess that's fine. Surveillance on just a couple of cities that happen to be resisting the regime? Well, it's not _mass_ surveillance, just a couple of cities!

q3k 4 hours ago | parent | prev | next [-]

Coward.

jakeydus 4 hours ago | parent [-]

Sometimes brevity is the heart of wit or whatever the line is.

tibbydudeza an hour ago | parent | prev | next [-]

At the next town hall, ask them directly - you're making assumptions here.

mmanfrin 5 hours ago | parent | prev | next [-]

You can make blood money but you have to be aware it's blood money. Don't delude yourself in to thinking you work for an ethical or moral company.

mathisfun123 7 hours ago | parent | prev | next [-]

> Given this understanding, I don't see why I should quit.

https://en.wikipedia.org/wiki/Motivated_reasoning

retornam 6 hours ago | parent | prev | next [-]

I have a bridge to Brooklyn to sell you if you believe this.

Standing up for what's right is often not easy and involves hard choices and consequences. Your leader has shown you and the world that he is not to be trusted.

I can't tell you what to do but I hope you make the right decision.

cyanydeez 3 hours ago | parent | prev | next [-]

Right, beautifying lies are always going to head in the direction of doing what's self-interested.

leptons 3 hours ago | parent | prev | next [-]

>OpenAI deal disallows domestic mass surveillance

And the US Military is forbidden from operating on US soil, but that didn't stop this administration from deploying US Marines to California recently.

You're fooling yourself if you think this administration is following any kind of rule.

matkoniecz an hour ago | parent | prev | next [-]

Can you at least stop lying to yourself? Given what they did with Anthropic for not supporting domestic mass surveillance and autonomous weapons...

> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons

Your understanding is entirely wrong. At least stop lying to yourself and admit that you are entirely fine with working on evil things if you are paid enough.

wanderlust123 an hour ago | parent | prev | next [-]

So it's ok as long as it's not domestic. Got it.

wjekkekene 2 hours ago | parent | prev | next [-]

What a joke

make3 4 hours ago | parent | prev | next [-]

insane cope

popalchemist 5 hours ago | parent | prev | next [-]

Why would you trust anything out of Sam's mouth? He's a sociopath. Is that lost on you?

vultour 2 hours ago | parent [-]

The comment perfectly exemplifies the kind of person that would work at OpenAI. Government AI drones could be executing citizens in the streets but they’d still find some sort of cope why it’s not a problem. They’ll keep moving the goalposts as long as the money keeps coming.

jdiaz97 3 hours ago | parent | prev [-]

Scam Altman already got community noted btw

tempaccount420 9 hours ago | parent | prev | next [-]

Didn't the safety-conscious employees already leave when OpenAI fired Sam Altman and then re-hired him?

In my mind the only people left are those who are there for the stocks.

AbstractH24 9 hours ago | parent | next [-]

In all seriousness, what’s the average tenure at OpenAI and how much of the company in March 2026 was even around for that?

lioeters 7 hours ago | parent [-]

It's comforting to know that some of the brightest minds of our generation are going to work at OpenAI, then quitting a few months later horrified, only to post a short mysterious tweet warning everyone of the dangers ahead. So much for alignment and serving humanity.

stingraycharles 7 hours ago | parent [-]

And they will continue to work for Google / Meta et al to use novel AI techniques to sell us more and better ads, only to quit a few years later to do more soul searching where everything went wrong /s

bobanrocky 7 hours ago | parent | prev | next [-]

And H-1B slaves.

DANmode 8 hours ago | parent | prev [-]

Review the signers https://notdivided.org

pluc 23 minutes ago | parent [-]

They've been deleted. For obvious reasons. You want to take a stand, but you don't want to stop working for the people who do the things you object to. It's all so very American. I'll put my name on, but if it doesn't work, remove my name so I don't get into trouble, ok? Home of the brave.

arugulum 9 hours ago | parent | prev | next [-]

> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.

But they did.

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

layer8 8 hours ago | parent | next [-]

The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that. In both cases, the two parties can claim to agree on the principles, but when push comes to shove, who decides on whether the principles are violated differs.

remarkEon 7 hours ago | parent | next [-]

Seems Anthropic did not understand the questions they were asked. From the WaPo:

>A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

>It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.

>An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.

I have a hunch that Anthropic interpreted this question as being about authority, when the Pentagon was very likely asking about capability, and they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism people have about these tools and this administration, but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.

https://web.archive.org/web/20260227182412/https://www.washi...

retsibsi 8 minutes ago | parent | next [-]

Is there any reason at all to believe the account of the unnamed "defence official"? Whatever your position on this administration, you know that it lies like the rest of us breathe. With a denial from the other side and a lack of any actual evidence, why should I give it non-negligible credence?

lukan 7 hours ago | parent | prev | next [-]

"It’s the kind of situation where technological might and speed could be critical to detection and counterstrike"

Missile detection and the decision to launch a (nuclear) counterstrike are two different things to me, but apparently the Department of War wants both, so it seems this is not "just" about missile detection.

wraptile 24 minutes ago | parent | prev | next [-]

> If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

I'm sorry but lol

quaunaut 7 hours ago | parent | prev [-]

Are you serious? This is the kind of thing you'd ask a clarifying question on and get information back immediately. Further, the huge overreaction from Hegseth shows this is a fundamental disagreement.

SpicyLemonZest 6 hours ago | parent [-]

The flip side of "Hegseth is an unqualified drunk", a position which I've always held and still maintain, is that he very well might crash out over nothing instead of asking clarifying questions or suggesting obvious compromises. This is the same guy who recalled the entire general staff to yell at them about the warrior mindset. Not an excuse for any of this, but I do think the precise nature of the badness matters.

pseudalopex 8 hours ago | parent | prev | next [-]

> The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that.

You learned this where?

layer8 7 hours ago | parent [-]

I’m reading between the lines of the involved parties’ various statements, but there’s also this: https://x.com/UnderSecretaryF/status/2027594072811098230

pseudalopex 7 hours ago | parent | next [-]

> I’m reading between the lines of the involved parties’ various statements

You should have said this.

> https://x.com/UnderSecretaryF/status/2027594072811098230

Thank you.

layer8 7 hours ago | parent [-]

It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those. And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.

nandomrumber 7 hours ago | parent | next [-]

From the referenced tweet;

who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.

Amodei is the type of person who thinks he can tell the US government what they can and can’t do.

And the US government should have precisely none of that, regardless of whether they’re red or blue.

eecc 5 hours ago | parent | next [-]

And that’s where the authoritarian in you is shining through.

You see, Obama droned more combatants than anyone before or after him, but he always maintained a legal paper trail and went by the book (except perhaps in some cases; search for Anwar al-Awlaki).

One can argue whether the rules and laws (secret courts, secret proceedings, asymmetries in court processes that severely compress civil liberties… to the point they might violate other constitutional rights) are legitimate, but he operated within the limits of the law.

You folks just blurt “me ne frego” like a random Mussolini and think you’re being patriotic.

SMH

nullocator 6 hours ago | parent | prev [-]

> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.

> And the US government should have precisely none of that, regardless of whether they’re red or blue.

This is a pretty hot take. Anthropic says "You can't break the law and kill people or do mass surveillance with our technology," and your response is: fuck that, the government should break whatever laws and kill whoever they please?

I hope you A: aren't a U.S. citizen, and B: don't vote.

If I'm selling widgets to the government and come to find out they are using those widgets unconstitutionally and to violate my neighbors' rights, you can be damn sure I'm going to stop selling the gov my widgets. Amodei said that Anthropic was willing to step away if they and the government couldn't come to terms, and instead of the government acting like adults and letting them, they decided to double down on being the dumbest people in the room and act like toddlers, throwing a massive fit about the whole thing.

pseudalopex 7 hours ago | parent | prev [-]

> It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those.

No. Altman said human responsibility. Anthropic said human in the loop.

> And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.

All but confirmed was not confirmed.

layer8 6 hours ago | parent [-]

I don’t understand your first comment. At that point, Altman’s tweet didn’t exist yet, and is immaterial to the reading of Anthropic’s and Hegseth’s statements.

To your second comment, it was clear enough to me to be the most plausible reading of the situation by far.

We state what we think the situation is all the time, without explicitly writing “I think the situation is…”.

intermerda 6 hours ago | parent | prev [-]

[dead]

outside1234 8 hours ago | parent | prev [-]

This. Sam is going to pretend they aren’t going to use it for that because his company is collapsing in losses. He will never audit.

Probably also got assurances about a bailout when OpenAI collapses.

WD-42 8 hours ago | parent | prev | next [-]

I'm sure it's a matter of interpretation. Anthropic thinks the DoW's demands will lead to mass surveillance and auto-kill bots. The DoW probably disagrees with that interpretation, and all OpenAI needs to do is agree with the DoW.

My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.

PaulDavisThe1st 7 hours ago | parent | next [-]

Why do you choose to call it the "DoW"? Its official name is the Department of Defense; it was titled that way by Congress, and only Congress can change it. What is your motivation in using a term that the current administration has started to use? Do you also say "Gulf of America" when referring to the body of water that defines the southern edge of the USA?

thejazzman 6 hours ago | parent | next [-]

Don't you think it is more to-the-point to call it what it is, and what the people running it (with, I'll bet everything I have, absolute immunity) are doing and intend to do with it?

It's like the one honest thing they've done

matsemann 6 hours ago | parent | prev | next [-]

It's the term used by Sam Altman in the announcement. Maybe aim your anger there, to someone knowingly helping them in their attempt to turn the department into one of aggression.

charcircuit 4 hours ago | parent | prev | next [-]

The president changed it back to its original name with an executive order. The administration did not just start spontaneously using it.

IsTom 3 hours ago | parent | prev | next [-]

The only more fitting name currently would be Department of Peace

j_maffe 4 hours ago | parent | prev | next [-]

If someone is calling themselves a warmonger, they should be called a warmonger.

calgoo 3 hours ago | parent | prev [-]

Exactly this! Just like the Gulf of Mexico is still called the Gulf of Mexico. If we just ignore his ramblings and continue calling it the Department of Defense, we undermine his whole point. If we fall for all their crap and just accept it, then we lose in the end. Any resistance to a fascist government is good resistance. Anything that makes their lives a little shittier is good. Better that they go around having tantrums about how they renamed it while no one pays attention.

IsTom 3 hours ago | parent | prev | next [-]

> The DoW probably disagrees with that interpretation

Or perhaps, maybe, just a little maybe, DoW is getting absolutely excited about mass surveillance and kill-bots?

tombert 7 hours ago | parent | prev [-]

Not that this will matter on any individual level, but I canceled my ChatGPT subscription after this.

I didn't have much of an opinion of Altman before but now I think he's a grifting douche.

khalic 4 hours ago | parent | prev | next [-]

Anthropic has safeguards baked into the model; that's the only way to make sure it's harder for the DoD to misuse it. A pinky swear from the DoD means nothing

propagandist 8 hours ago | parent | prev | next [-]

Human responsibility is not the same as human decision making.

And they are crossing the picket line, which honestly I was sure they would do, though I did expect it to take a bit longer.

This is too transparent even for sama.

nick486 7 hours ago | parent [-]

>Human responsibility is not the same as human decision making.

this is going to end up being interpreted as "well, the president signed off on the operation. see - there's a human in the loop!" - is it?

propagandist 7 hours ago | parent [-]

That's precisely how I read it. They're weasel words delivered by the master weasel himself.

7 hours ago | parent | prev | next [-]
[deleted]
newguytony 8 hours ago | parent | prev | next [-]

Good ole Sammy has never lied

arugulum 8 hours ago | parent [-]

If your starting position is already that Sam Altman lies about everything that doesn't fit your preconceived positions, that doesn't seem like a very meaningful position to update from.

lioeters 7 hours ago | parent | next [-]

The company started with a lie, it's in the name.

johnbellone 2 hours ago | parent | prev [-]

Yes.

7 hours ago | parent | prev | next [-]
[deleted]
fooker 8 hours ago | parent | prev | next [-]

Unrelated, but want to buy a bridge?

You could recoup your investment in a year by collecting toll. Expedited financing available on good credit!

tomhow 8 hours ago | parent [-]

Please don’t do this here.

adampunk 8 hours ago | parent | prev [-]

[flagged]

pseudalopex 8 hours ago | parent [-]

https://news.ycombinator.com/item?id=47190644

2snakes 7 hours ago | parent | prev | next [-]

I think it is like a loyalty test to an authority above the law (executive immunity) in order to do business. “If we tell you to do so, you may do something you thought was right or wrong.” It is like an induction into a faction and the way the decisions could be made. Doesn’t necessarily mean anything about “in practice in the future”, just that the cybernetic override is there tacitly. If the authority thinks they can get away with something, they will provide protection for consequences too. Some people more equal than others when it comes to justice for all, etc. There are probably alternative styles for group decision making…

6 hours ago | parent [-]
[deleted]
pluc 23 minutes ago | parent | prev | next [-]

Easy: have no principles that money can't buy. That's the American Dream!

weatherlite 6 hours ago | parent | prev | next [-]

> I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this

Well some may voluntarily leave, some will be actively poached by Anthropic perhaps and some I suppose will stay in their jobs because leaving isn't an easy decision to make.

latexr 4 hours ago | parent [-]

> some I suppose will stay in their jobs because leaving isn't an easy decision to make.

Anyone who chooses to stay shouldn’t have signed the letter. What’s the point of doing it if you’re not going to follow through? If you signed the letter and don’t leave after the demands aren’t met, you’re a liar and a coward and are actively harming every signatory of every future letter.

cheonn638 3 hours ago | parent [-]

[dead]

miohtama 4 hours ago | parent | prev | next [-]

OpenAI is already doing mass surveillance, so nothing changes

https://www.theguardian.com/world/2026/feb/21/tumbler-ridge-...

4ndrewl 2 hours ago | parent | prev | next [-]

This is not a turning point. This is the destination. Were you onboard the wrong train?

vander_elst 2 hours ago | parent | prev | next [-]

> I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment [...]

Sometimes money is more attractive than morality. So I guess money is the answer here.

ivan_gammel 5 hours ago | parent | prev | next [-]

Another plausible explanation, familiar to a lot of people in other countries, is banal corruption. Kick out one competitor on bogus allegations, then the next day invite another one in… what else could it be?

coliveira 9 hours ago | parent | prev | next [-]

Yes, what is implied in this episode is that all big companies that do AI development or provide computing for AI are now signing up for these very shady uses of their technologies.

KellyCriterion 2 hours ago | parent | prev | next [-]

The ones who signed are not the same as the ones who didn't sign and continue to work there, I'd guess?

granzymes 9 hours ago | parent | prev | next [-]

>Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.

Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing all their political capital.

unethical_ban 7 hours ago | parent | next [-]

While that thought crossed my mind, someone in a subthread of the parent comment made a point: OpenAI made a statement along the lines of "We insisted this not be used in those ways, and the DoD totally says they won't." Which sounds to me like they ceded any hard terms and conditions and are letting the DoD use it for "any lawful means", which is exactly what Anthropic wouldn't stand for.

davidw 9 hours ago | parent | prev [-]

They seem moderately competent at doing blatant corruption ( https://coinmarketcap.com/currencies/official-trump/ , Qatari jet, etc...). See jeffbee's comment below.

garyclarke27 an hour ago | parent | prev | next [-]

I would not discount how much of a factor irrational human emotions play in negotiations. Dario is arrogant and pompous, so he probably wound Hegseth up the wrong way. Sam is much more charming and amenable, so he was more able to get his way despite similar terms.

chpatrick 3 hours ago | parent | prev | next [-]

"It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

hirvi74 7 hours ago | parent | prev | next [-]

> The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.

Do you mean the same OpenAI that has a retired U.S. Army General & former director of the NSA (Gen. Nakasone) serving on its board of directors?

shevy-java 2 hours ago | parent | prev | next [-]

Makes sense.

lazide an hour ago | parent | prev | next [-]

A few will leave. Most will look nervously at their (non-public) stock and their bank accounts, and keep on keeping on.

the_real_cher 3 hours ago | parent | prev | next [-]

Have you seen the size of OpenAI employees' comp?

They'll create the autonomous military robots themselves for that check.

no_wizard 3 hours ago | parent | prev | next [-]

For all I know, Sam Altman orchestrated this via well-timed donations and whatever the hell contacts he has in government; Trump specifically seems to have taken to the man.

So: using Anthropic's own words to cover a power play, or pulling relationships to see if they could get Anthropic to balk at it.

outside1234 8 hours ago | parent | prev | next [-]

All of us can act too. Stop using the OpenAI models. Stop using the app. Design in other models no matter what. Screw these guys.

foo12bar 7 hours ago | parent [-]

Do you expect that to work?

calgoo 3 hours ago | parent | next [-]

It's about network effect. The biggest issue is that ChatGPT is a household name like Google at this point. Everyone and their grandma knows it or is learning about it, while Claude is mostly well known in tech circles. Getting tech people to switch is relatively easy (ignoring enterprise contracts), but getting everyone else to switch is going to be very slow.

Honestly, the best thing that could happen is someone coming up with a new UI (think claw...like) that everyone starts using instead. A very cute, well-integrated system that just works for everyone, has a free tier, and has something the others don't have.

throw0101c 25 minutes ago | parent | prev | next [-]

>> All of us can act too. Stop using the OpenAI models. Stop using the app. Design in other models no matter what. Screw these guys.

> Do you expect that to work?

Many years ago, Tim O'Reilly (of book publishing fame) knew Apple would one day become really big, even though they were a small, niche player in the "PC" space at the time (2000s). How did he know? By watching what the 'alpha geeks' were doing: the folks who didn't just use tech, but worked at companies inventing the future. They were the ones whose friends and families asked them for advice. And the alpha geeks (at the time) were switching to Mac OS X and telling their friends and family about it.

* https://www.oreilly.com/tim/archives/rationaledge_interview....

* https://www.wired.com/2006/05/tim-says-watch-alpha-geeks/

There's a good chance that if you're on HN, you're the person in your non-techie social group that many others ask for advice. You can potentially sway many people by your example and your advice.

PaulDavisThe1st 7 hours ago | parent | prev | next [-]

No, I expect you to die, Mr. Bond.

komali2 4 hours ago | parent | prev [-]

It's a commoditized market so it doesn't hurt to try.

vineyardmike 9 hours ago | parent | prev [-]

Nah. It's possible that the agreement still supports the required terms.

There is more to this story behind the scenes. The government wanted to show power and control over our companies and industries. They didn't need those terms for any specific utility; they wanted to fight "woke" businesses that stood up to them.

Supposedly OpenAI got the same terms as Anthropic (according to SamA). Maybe they offered it cheaper, and that's why the government agreed. Maybe it's all the lobbying money from OpenAI that let the government look the other way. Maybe it's all the PR announcements SamA and Trump do together.

sigmar 9 hours ago | parent | next [-]

>Supposedly OpenAI had the same terms

"we put them into our agreement." is strange framing in Altman's tweet. Makes me think the agreement does mention the principles, but doesn't state them as binding rules the DoD must follow.

Imnimo 9 hours ago | parent | prev | next [-]

None of those explanations are compatible with the pledge of solidarity in the We Will Not Be Divided letter.

harmonic18374 9 hours ago | parent | prev | next [-]

I ascribe literally zero truth value to what Sam says. He will say whatever he needs to get ahead. It is honestly irritating to me that you and many others here seem to implicitly assume his messages are correlated with truth, doing his social engineering work for him, as if his word should adjust your priors even slightly.

I don't necessarily think he's lying, but there's so much obvious incentive for him to lie here (if only because his employees can save face).

chamomeal 9 hours ago | parent | next [-]

Your comment reminded me of a blog post, by the same guy who wrote "Programming Sucks". I've been sharing it a lot recently lol

https://www.stilldrinking.org/stop-talking-to-technology-exe...

dataflow 9 hours ago | parent | prev | next [-]

> I don't necessarily think he's lying

He doesn't even need to be lying, the comment is vague and contains enough loopholes that it could be true yet meaningless. I explained some that I noticed here: https://news.ycombinator.com/item?id=47190163

sesqu 7 hours ago | parent | prev [-]

I'm not sure if I'd go down to zero, but he did get fired from OpenAI for lying.

harmonic18374 7 hours ago | parent [-]

And fired from YC for lying. And lied to investors about how many Loopt employees he had. And lied about having 100x the actual number of users when he sold it. And lied to employees about the Microsoft deal. And lied to his safety team.

pseudalopex 7 hours ago | parent | prev | next [-]

> Supposedly OpenAI had the same terms as Anthropic (according to SamA).

He said human responsibility. Anthropic said human in the loop.

And Anthropic reportedly refused to say that any lawful purpose would be allowed.

jeffbee 9 hours ago | parent | prev | next [-]

It's this simple: Trump is a criminal. Larry Ellison is his pal. Sam Altman has a huge deal for cloud services from Oracle. Trump is using the DoD budget to backstop Ellison's business.

coliveira 9 hours ago | parent | next [-]

This is pretty much the right take on it, although there's much more to it than that. It's very clear at this point, especially the first conclusion, but people insist on looking the other way.

drivebyhooting 9 hours ago | parent | prev [-]

Interesting thesis.

But regardless of the moral implications, will this improve America’s position on the global stage or further undermine it?

coliveira 9 hours ago | parent | next [-]

Only if you think that crime will somehow improve America. My opinion is that this is leading to its collapse, no matter how "powerful" they look.

MaxfordAndSons 9 hours ago | parent | prev | next [-]

Attempting to kneecap the breakout front runner of the major American AI companies to ensure the shittier, politically compliant one wins in the short term? Gee I wonder.

drivebyhooting 9 hours ago | parent [-]

Anthropic is great but not the undisputed front runner.

I can also interpret this as Sam and the administration supporting accelerationism while Dario is more measured and wishes to slow things down.

SpicyLemonZest 9 hours ago | parent | prev | next [-]

For better or worse, outright nationalization of military related companies is common on a global scale. I plan to do my best to ensure this is a domestic catastrophe, and I hope we'll succeed, but I don't expect other countries to care much about varying levels of regime alignment between two billionaire American defense contractors.

intermerda 6 hours ago | parent | prev [-]

[dead]

SpicyLemonZest 9 hours ago | parent | prev [-]

Maybe Sam Altman said nicer things about Donald Trump. Maybe he promised that he would not revoke their API keys when Hegseth directs the military to seize ballots. Maybe he's jockeying for position to take over the government when AGI hits.

Ultimately, I don't know how much the specific reasons matter. Pete Hegseth must be removed from office, OpenAI must be destroyed for their betrayal of the US public, that's all there is to it.

toufka 9 hours ago | parent [-]

1) Another OpenAI cofounder (Brockman) gave Trump’s superPAC the largest ever individual donation of $25m.

2) Trump’s son in law (Kushner) has most of his net worth wrapped up in OpenAI.

m_ke 9 hours ago | parent | next [-]

don't forget that Sama is a Thiel protege

paganel 4 hours ago | parent | prev [-]

> Trump’s son in law (Kushner) has most of his net worth wrapped up in OpenAI.

If true (too lazy to check but I honestly take your word for it), this should probably be bigger news. Not that the outright corruption when it comes to the highest position in the US Government constitutes news anymore, but because it puts the Government’s fight against Anthropic (and supposedly other potential OpenAI competitors) in a new light.