baconner 6 hours ago

Respectfully, it's very hard to see how anyone could look at what just happened and come to the conclusion that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to see how you could take this at face value in any case. If it is loose terms or a wink agreement to not check in on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.

Rebuff5007 4 hours ago | parent | next [-]

> it's very hard to see how anyone could look at what just happened

I think what you are missing is their annual comp with two commas in it.

the_real_cher 3 hours ago | parent | next [-]

This. For that check they'll be building the autonomous robots themselves, saying "they're food delivery robots, that's not a gun, that's a drink dispenser!"

cheonn638 2 hours ago | parent [-]

> they're food delivery robots, that's not a gun, that's a drink dispenser!

You underestimate how many top AI scientists are perfectly okay with building autonomous weapons systems and are not ashamed of it.

I, and 99% of HN readers, would gladly pull the trigger to release a missile from a drone if we were paid even just US$1,000,000/year.

Now note that many L7+ at OpenAI are making $10 million+ per year.

the_real_cher 31 minutes ago | parent | next [-]

The world needs a nuclear war to just eliminate 99% of human life and just start over.

mr_mitm 42 minutes ago | parent | prev | next [-]

How many?

tibbydudeza an hour ago | parent | prev [-]

True that - everybody has a price.

lazide an hour ago | parent | prev [-]

Hey, with expected stock payout - tres commas!

Shit, I wonder if I still have any of those ‘tres commas club’ t-shirts lying around?

readitalready 2 hours ago | parent | prev | next [-]

For an OpenAI employee, quitting wouldn't be a problem: you'd have a much higher chance of success after quitting than almost anyone else. You could go to any VC and they would fund you.

skepticATX 5 hours ago | parent | prev | next [-]

One explanation is that this is effectively a quid pro quo, given Brockman’s enormous financial support of the current president.

ZeroGravitas 3 hours ago | parent [-]

Yep, theoretically it could just be oligarchic corruption and not institutional insanity at the highest levels of the government. What a reassuring relief it would be to believe that.

monooso 5 hours ago | parent | prev | next [-]

I agree with your assessment, but given the past behaviour of this administration I wouldn't be shocked to discover that the real reason is "petulance".

khazhoux 3 hours ago | parent [-]

It’s obvious retaliation, and will be struck down by the courts.

tedsanders 6 hours ago | parent | prev | next [-]

I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.

az226 4 hours ago | parent | next [-]

Your ballooned unvested equity package is preventing you from seeing the difference between "our offering/deal is better" and "designated a supply chain risk, with every company that does business with the government threatened with being similarly dropped if it keeps using Anthropic" (which goes well past what the designation itself requires). It's easier being honest.

tedsanders 3 hours ago | parent [-]

The supply chain risk stuff is bogus. Anthropic is a great, trustworthy company, and no enemy of America. I genuinely root for Anthropic, because its success benefits consumers and all the charities that Anthropic employees have pledged equity toward.

Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides and I acknowledge it’s probably impossible to eliminate all possible bias within myself.

One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.

slg 3 hours ago | parent | next [-]

>Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.

That isn't what many of us are challenging here. We're not concerned about OpenAI's ethics because they agreed to work with the government after Anthropic was mistreated.

We're skeptical because it seems unlikely that those restrictions were such a third rail for the government that Anthropic got sanctioned for asking for them, but then the government immediately turned around and voluntarily gave those same restrictions to OpenAI. It's just tough to believe the government would concede so much ground on this deal so quickly. It's easier to believe that one company was willing to agree to a deal that the other company wasn't.

throw0101c 38 minutes ago | parent | next [-]

> It's just tough to believe the government would concede so much ground on this deal so quickly.

Well… TACO.

lsaferite an hour ago | parent | prev [-]

Not "asking for them", insisting the already agreed to terms be respected.

intothemild 3 hours ago | parent | prev [-]

We all know who's lying... the guy whose track record is constant lying: your boss.

tibbydudeza an hour ago | parent [-]

Ouch but true - he is the Elon of AI.

edoceo 4 hours ago | parent | prev | next [-]

Friend, this reads like that situation where your paycheck prevents you from seeing clearly - I forget the exact quote. Sam doesn't play a straight game and neither does the administration - there are more than a few examples.

komali2 4 hours ago | parent | next [-]

Never try to convince someone of something they're paid to not believe.

davidmr 3 hours ago | parent | prev [-]

Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it”

DavidSJ 4 hours ago | parent | prev [-]

OpenAI should not be agreeing to any contract with DOD under these circumstances of Anthropic being falsely labeled a supply chain risk.

chrisfosterelli 6 hours ago | parent | prev | next [-]

I agree with what you're saying, but given the egos involved in the current admin there's a practical interpretation:

1. Department of War broadly uses Anthropic for general purposes

2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons

3. Anthropic disagrees and it escalates

4. Anthropic goes public criticizing the whole Department of War

5. Trump sees a political reason to make an example of Anthropic and bans them

6. The entirety of the Department of War now has no AI for anything

7. Department of War makes agreement with another organization

If there were only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or if it was seen as an unproven use case of unknown value compared to the more proven value of the rest of their organizational use, it would make sense that they'd be 1) in practice willing to compromise on this, and 2) now unable to do so with Anthropic specifically because of the political kerfuffle.

I imagine they'd rather not compromise, but if none of the AI companies are going to offer them it then there's only so much you can do as a short term strategy.

juggle-anyhow 5 hours ago | parent | next [-]

Well at least we know now that the department of war is less capable than before. All because the big man shit his pants while Anthropic was in view.

pbhjpbhj 41 minutes ago | parent | prev [-]

>5. Trump sees a political reason

Like, they haven't paid me a bribe? That seems to be the only "politics" at play in Trump's head.

DennisP 42 minutes ago | parent | prev | next [-]

And unless GP has a security clearance, they can't know for sure what OpenAI is allowing on classified networks.

JumpCrisscross an hour ago | parent | prev | next [-]

> while another agrees the the same terms that led to that

One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.

spongebobstoes 6 hours ago | parent | prev | next [-]

anthropic has nothing but a contract to enforce what is appropriate usage of their models. there are no safety rails, they disabled their standard safety systems

openai can deploy safety systems of their own making

from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model the military needs a legal opinion before they can use the tool, or they might misuse it by accident

this is also preferable if you think the government is untrustworthy. an untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that openai builds or trains into the model

nawgz 6 hours ago | parent [-]

Source?

manmal 4 hours ago | parent | prev | next [-]

Are you saying that everything so far in this administration has been 100% rational?

willis936 2 hours ago | parent | prev | next [-]

>or there's another reason for the loud attempt to blacklist Anthropic

This one is very easy. Trump has a well established pattern of making a loud statement to make it appear he didn't lose, even when he did.

cowsandmilk 3 hours ago | parent | prev | next [-]

> one company ends up classed a "supply chain risk" while another agrees the the same terms that led to that

Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.

az226 4 hours ago | parent | prev | next [-]

And Sam is a habitual liar.

jdiaz97 3 hours ago | parent | next [-]

He literally just got community noted for lying. So much for a non-profit CEO or whatever it is now.

kotaKat 2 hours ago | parent | prev [-]

And an abuser, but they keep covering that one up.

ukblewis 30 minutes ago | parent | prev [-]

They aren’t the same terms. You are clearly an enemy bot or an uneducated fool. OpenAI has agreed to mass surveillance of those who are not Americans; Anthropic refused. OpenAI’s only restriction was that the surveillance not target Americans.