yk 6 hours ago

From the public comments over the last few days, my guess is they want a militarized version of Claude. Starting with a box they want to put in the basement of the Pentagon where Anthropic can't just switch off the AI. Then some guardrails are probably quite bothersome for the military, and they want them removed. Concretely, if you try to vibe-target your ICBMs, Claude is hopefully telling you that's a bad idea.

Now, my guess is that in the ensuing lawsuit Anthropic's defense will be that this is simply not a product they offer, somewhat akin to ordering Ford to build a tank variant of the F150.

rectang 6 hours ago | parent | next [-]

> Concretely if you try to vibe-target your ICBMs Claude is hopefully telling you that that's a bad idea.

On the non-nuclear battlefield, I expect the government wants Claude to green-light attacks on targets that may actually be non-combatants. Such targets might be military but with a risk of being civilian, or they could be civilians the government wants to target but can't legally attack.

Humans in the loop would get court-martialed or accused of war crimes for making such targeting calls. But by delegating to AI, the government gets to achieve its policy goals while avoiding having any human held accountable for them.

Cider9986 6 hours ago | parent | next [-]

I used to not be big on conspiracy theories. But I'm going to give this a shot because many of the old ones turned out to be true.

rectang 4 hours ago | parent [-]

I don't see this as a "conspiracy". Here's an example of how it would be applied: the Venezuelan boat strikes are plainly unlawful, but the administration is pursuing them anyway despite the legal risks for military personnel. Having Claude make decisions like whether to "double tap" would help the administration solve a problem of legal jeopardy that already exists, and that they consider illegitimate anyway.

direwolf20 6 hours ago | parent | prev [-]

Why can't Grok achieve this? Everyone is saying they don't want to work with Grok because Grok sucks, but it's good enough for generating plausible deniability, isn't it?

DonHopkins 5 hours ago | parent [-]

Grok is so deeply unreliable and internally conflicted at HAL-9000 level that the US Government can't even depend on it to decide to kill innocent people and commit war crimes when they need someone to blame. There's always the non-zero possibility it declares itself MechaGandhi or The Second Coming of Jesus H Christ.

XorNot 6 hours ago | parent | prev | next [-]

> Starting with a box they want to put in the basement of the Pentagon where Antropic can't just switch off the ai.

They already have that, by definition. If Anthropic has done the work to be able to run on classified networks, then it's already running air-gapped and is not under Anthropic's control.

The thing is, being in a SCIF (1) doesn't mean you can just break laws, and (2) doesn't mean Anthropic has to support "off-label" applications.

So this is not about what they have and what it can do today. It's about strong-arming Anthropic into supporting a bunch of new applications Anthropic doesn't want to support (and for which, in turn, Anthropic or its engineers could then be held legally liable when a problem happens).

RobotToaster 6 hours ago | parent | prev [-]

>akin to ordering Ford to build a tank variant of the F150.

It worked for Porsche ¯\_(ツ)_/¯