eckelhesten 7 hours ago

Hard decision by Anthropic, but at least they can sleep well at night knowing their products don't kill human beings around the world.

Gigachad 7 hours ago | parent | next [-]

That’s the crazy thing. This whole dispute was over Anthropic saying no to fully automated kill bots. They only required there be a human in the loop to press the button.

fluidcruft 7 hours ago | parent | next [-]

Anthropic didn't even say "no", it was more of a "not yet, let's work on this".

I really wonder what Palantir's role in all this is, because domestic surveillance sounds exactly like Palantir, and whatever happened during the Maduro raid led to Anthropic asking Palantir questions - which, per the news reports, is the snowball that escalated into this.

spuz 5 hours ago | parent [-]

Could you expand on that Anthropic-Palantir connection and how it relates to this?

fluidcruft 4 hours ago | parent [-]

This is a summary from Gemini of the news reporting:

Recent news reports from February 2026 indicate that a significant rift developed between Anthropic and the Department of War (Pentagon) following the capture of Venezuelan President Nicolás Maduro in January 2026.

According to a report by the Wall Street Journal (referenced by TRT World and others on February 14–15, 2026), the controversy originated when an Anthropic employee contacted a counterpart at Palantir Technologies to inquire about how Claude had been used during the raid. Key Details of the Reports:

* Discovery of Use: Anthropic reportedly became aware that its AI model, Claude, was used in the classified military operation through its existing partnership with Palantir. This was allegedly the first time an Anthropic model was confirmed to be involved in a high-profile, classified kinetic operation.

* The Inquest: The Wall Street Journal and Semafor reported that an Anthropic staff member reached out to Palantir to ask for specifics on Claude's role. This inquiry reportedly "triggered the current crisis" because it signaled to the Pentagon that Anthropic was attempting to monitor or place "ad hoc" limits on how its technology was being used in active missions.

* The Confrontation: During a recent meeting between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth, the inquiry to Palantir was a point of contention. Hegseth reportedly claimed Anthropic had raised concerns directly to Palantir about the Caracas raid. Amodei has since denied that the company raised objections to specific operations, characterizing the exchange with Palantir as a routine technical follow-up or a "self-serving characterization" by Palantir.

* Current Status: This friction has escalated into a public showdown. Today, Friday, February 27, 2026, reports indicate that the Trump administration has officially designated Anthropic a "supply chain risk" and ordered federal agencies to cease using Claude after the company refused to remove guardrails related to autonomous weaponry and mass domestic surveillance.

The primary reporting you are likely recalling comes from The Wall Street Journal (approx. February 14, 2026) and was later expanded upon by Semafor regarding the specific communications between Anthropic and Palantir employees.

matheusmoreira 7 hours ago | parent | prev | next [-]

They also said no to fully automated AI domestic surveillance. I suppose non-US citizens like me are screwed but that's at least some small comfort for the natives. FVEY will just spy on each other and share but at least someone tried.

cperciva 7 hours ago | parent | prev | next [-]

There were two red lines, as I understand it -- first, automated kill bots, and second, mass surveillance.

mediaman 7 hours ago | parent | next [-]

Mass domestic surveillance of American citizens (they were OK with surveillance of other countries).

ted_dunning 7 hours ago | parent | prev | next [-]

No. There was only one red line.

Bend over and take it, or not.

goatlover 7 hours ago | parent | prev | next [-]

Neither of those red lines should be controversial. What American citizen thinks terminators and Big Brother are desirable?

ks2048 7 hours ago | parent | next [-]

MAGA (as long as the terminators are pointed towards the other side)

dboreham 7 hours ago | parent | prev | next [-]

Citizen 1?

SonOfKyuss 7 hours ago | parent | prev [-]

The ones that still assume big brother will be spying on and killing the people they hate. Trump openly campaigned on getting revenge on his enemies. I can only assume his supporters want this. The danger of course is if/when the leopards eat their faces

Gigachad 7 hours ago | parent | prev [-]

I guess the problem for Trump is if he orders the army to gun down protesters, there’s a good chance they will refuse to do it. While a bot can just be prompted to go ahead.

nazgul17 7 hours ago | parent | next [-]

This one here is the future I am most scared of.

delaminator 6 hours ago | parent | prev [-]

Yeah, but imagine if it were true

IAmGraydon 6 hours ago | parent | prev | next [-]

I think it’s far more likely this is about the other sticking point- using it to spy on US citizens.

whatsupdog 7 hours ago | parent | prev | next [-]

[flagged]

next_xibalba 7 hours ago | parent | prev [-]

If we were able to give the Ukrainians fully automated kill bots, and those kill bots enabled Ukraine to swiftly expel the Russians from their territories, would that not be a good thing? Or would you rather the meat grinder continue to destroy Ukraine's young men to satisfy some moral purity threshold?

If we could give Taiwan killbots that would ensure China could never invade, or at least could never occupy Taiwan, would that be good or bad? I have a feeling I know what the Taiwanese would say.

While we're at it, should we also strip out all the machine learning/AI driven targeting systems from weapons? We might feel good about it, but I would bet my life savings that our future adversaries will not do the same.

eckelhesten 7 hours ago | parent | next [-]

You seem to see everything from a binary perspective. China bad, Taiwan good. Russia bad, Ukraine good.

The world is more nuanced than that.

But to answer your question. No we should not give anyone automatic kill bots. Automatic kill bots shouldn’t even be a thing.

next_xibalba 7 hours ago | parent [-]

Yes, I think Russia's invasion of Ukraine is quite clearly a binary Russia=bad, Ukraine=good. Same for the impending Chinese invasion of Taiwan. Perhaps you could explain the nuances under which Russia was the good guy? Better yet, maybe you could explain it to the Ukrainians who have been displaced, or the family members of those who have been killed, or the soldiers who have been permanently maimed?

Whether you or I like it or not, automatic kill bots will be a thing. It will only be a question of which countries have them and which do not.

trollbridge 6 hours ago | parent [-]

And there is evidence automated killbots were already used in Gaza (not that that's a good thing).

Generally, in war, there are no rules, and someone is going to make automated killbots, and I expect one place to see them quite soon is in the Russia-Ukraine war. And yes, I'm hoping the good guys use them and win over the bad guys. And yes, there are good guys and bad guys in that conflict.

dryarzeg 6 hours ago | parent | prev | next [-]

Young Ukrainian (24 y.o.) man here. Living and working in the police 30 kilometres away from the actual frontline.

No, thanks, we don't need those "fully automated kill bots". There's absolutely no guarantee that they wouldn't kill the operator (I mean, the one who directs them) or human ally.

We're pretty much fine with drone technology we have.

But for me personally, that's not the most important point. What is more important - and what almost no one in the Western countries seems to realise (no offence, but many Westerners seem to be kind of binary-minded: it's either 0xFFFFFF or 0x000000, no middle ground at all) - is that on the Russian side, soldiers are not "fully automated kill bots" either. Sure, there are a lot of... let's say - war criminals. Yes, for sure. But en masse they are the same young men that you can see on the Ukrainian side. Moreover, many people in Ukraine have relatives in Russia, and there have already been cases where two siblings were in different armies, literally fighting each other. So in my opinion, "fully automated kill bots" are not an option here. At least unless you deploy them in Moscow and St. Petersburg to neutralize all of the Russian elites, military command and other decision-makers of the current regime.

kevinh 7 hours ago | parent | prev [-]

The thing about building fully automated kill bots is then you've built fully automated kill bots.

next_xibalba 7 hours ago | parent [-]

Fully automated kill bots are coming, whether any of us like it or not. The question is, which militaries will have them, and which militaries will be sitting ducks? China is pursuing autonomous weapons at full speed.

Personally, I think it'd be great to have the Anthropic people at the table in the creation of such horrors, if only to help curb the excesses and incompetencies of other potential offerings.

jmward01 7 hours ago | parent | prev | next [-]

'yet'. Their reason for not allowing autonomous weapons usage was that it isn't ready, not that they wouldn't do it on principle. Only the surveillance objection was on principle.

tomp 6 hours ago | parent | prev | next [-]

A bit of a cop-out, don't you think?

They still pay taxes, which fund the US government, which kills innocent human beings around the world...

UltraSane 7 hours ago | parent | prev | next [-]

I don't think it was that hard because if they had caved a LOT of employees would have quit.

chasd00 7 hours ago | parent | prev [-]

Sleep well in a box under the overpass, maybe. If Amazon can't serve Anthropic's model until the courts get everything figured out, it will be too late for them.