| ▲ | outside2344 8 hours ago |
| The real question we should be asking is what others HAVE agreed to. Has OpenAI just agreed to let the government go crazy with their models? |
|
| ▲ | inaros 8 hours ago | parent | next [-] |
| If you read Anthropic's statement carefully, they explicitly confirm they are already working with the U.S. government on a range of military and national security use cases, including many areas that clearly relate to real-world lethal operations. They are only refusing two narrow but important categories. Framing this as a blanket "refusal to support the DoD" feels like an angry, reactive own goal rather than a careful reading of what they actually said. So far the march toward dictatorship keeps being detoured by sheer incompetence. In any case, it's hard to seize power when you can’t organize a group chat... |
| |
| ▲ | nkassis 7 hours ago | parent | next [-] | | Basically now all those projects are screwed and need to restart with another provider. I'm sure that's not going to be a massive PITA and delay for all involved. | |
| ▲ | 8 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | KumaBear 8 hours ago | parent | prev | next [-] |
| Elon has agreed to all demands and can’t wait for gigahitler to take the reins. I swear there is no room for good guys in this, is there? |
| |
| ▲ | scarmig 8 hours ago | parent | next [-] | | The military already has access to Grok, but doesn't want it, because it's an inferior model, even compared to open source ones. So the military would probably choose to replace supply-chain-risk Claude with Qwen or Kimi before Grok. | | |
| ▲ | suddenexample 8 hours ago | parent | next [-] | | It would be untouchable irony for the US to cut all ties with Anthropic and replace them with models developed by Chinese labs. The Onion becomes more irrelevant with each passing day. | | |
| ▲ | dylan604 7 hours ago | parent | next [-] | | How many generations does it take before the historians/archeologists uncover old issues of The Onion and decide it was the authoritative news of the day? | |
| ▲ | himata4113 7 hours ago | parent | prev [-] | | I thought I had a sense of déjà vu. I was wrong. |
| |
| ▲ | londons_explore 8 hours ago | parent | prev [-] | | Grok is, according to most benchmarks, pretty close to SOTA. It is where the leaders were just a few weeks ago. Exactly which model is best changes on almost a weekly basis as different companies tweak their best models. I doubt the military would want to switch suppliers every week. | | |
| ▲ | input_sh 8 hours ago | parent [-] | | I think that tells you more about the uselessness of SOTA benchmarks. | | |
| ▲ | spiderice 6 hours ago | parent [-] | | I think it says more about people's ability to ignore the truth if it doesn't support their world view. Oh you don't want Grok to be SOTA? Then it isn't! Problem solved |
|
|
| |
| ▲ | 8 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | infinitewars 8 hours ago | parent | prev | next [-] | | Musk was embedded in the military industrial complex with Thiel since day 1. https://www.mintpressnews.com/pentagon-recruiting-elon-musk-... | | |
| ▲ | blurbleblurble 8 hours ago | parent [-] | | Rumor has it they like to tickle each other's homunculi right in the region known anatomically as the inferiority-superiority complex. |
| |
| ▲ | thordenmark 8 hours ago | parent | prev [-] | | [flagged] |
|
|
| ▲ | rectang 8 hours ago | parent | prev | next [-] |
| > Altman says OpenAI agrees with Anthropic’s red lines in Pentagon dispute https://thehill.com/policy/technology/5758898-altman-backs-a... |
| |
| ▲ | 8 hours ago | parent | next [-] | | [deleted] | |
| ▲ | colordrops 8 hours ago | parent | prev | next [-] | | He's probably lying. Or he "agrees" but will cross the line anyway. | | |
| ▲ | jiggawatts 7 hours ago | parent [-] | | Altman is an Aes Sedai. He speaks no word that is untrue, but he is one of the most deceptive people I’ve ever heard. |
| |
| ▲ | mrcwinn 8 hours ago | parent | prev [-] | | This is only because Altman knew he’d already lost this business to Musk. |
|
|
| ▲ | baxtr 8 hours ago | parent | prev | next [-] |
| Can someone in plain terms explain what this is really about? Anyone can use Claude afaik? |
| |
| ▲ | yk 8 hours ago | parent | next [-] | | From the public comments over the last few days, my guess is they want a militarized version of Claude. Starting with a box they want to put in the basement of the Pentagon where Anthropic can't just switch off the AI. Then some guardrails are probably quite bothersome for the military, and they want them removed. Concretely, if you try to vibe-target your ICBMs, Claude is hopefully telling you that that's a bad idea. Now, my guess is that in the ensuing lawsuit Anthropic's defense will be that that is just not a product they offer, somewhat akin to ordering Ford to build a tank variant of the F150. | | |
| ▲ | rectang 8 hours ago | parent | next [-] | | > Concretely, if you try to vibe-target your ICBMs, Claude is hopefully telling you that that's a bad idea. On the non-nuclear battlefield, I expect that the government wants Claude to green-light attacks on targets that may actually be non-combatants. Such targets might be military but with a risk of being civilian, or they could be civilians that the government wants to target but can't legally attack. Humans in the loop would get court-martialed or accused of war crimes for making such targeting calls. But by delegating to AI, the government gets to achieve their policy goals while avoiding having any humans be held accountable for them. | | |
| ▲ | Cider9986 7 hours ago | parent | next [-] | | I used to not be big on conspiracy theories. But I'm going to give this a shot because many of the old ones turned out to be true. | | |
| ▲ | rectang 5 hours ago | parent [-] | | I don't see this as a "conspiracy". Here's an example of how it would be applied: the Venezuelan boat strikes are plainly unlawful but the administration is pursuing them anyway despite the legal risks for military personnel; having Claude make decisions like whether to "double tap" would help the administration solve a problem of legal jeopardy that already exists and that they consider illegitimate anyway. |
| |
| ▲ | direwolf20 7 hours ago | parent | prev [-] | | Why can't Grok achieve this? Everyone is saying they don't want to work with Grok because Grok sucks, but it's good enough for generating plausible deniability, isn't it? | | |
| ▲ | DonHopkins 7 hours ago | parent [-] | | Grok is so deeply unreliable and internally conflicted at HAL-9000 level that the US Government can't even depend on it to decide to kill innocent people and commit war crimes when they need someone to blame. There's always the non-zero possibility it declares itself MechaGandhi or The Second Coming of Jesus H Christ. |
|
| |
| ▲ | XorNot 7 hours ago | parent | prev | next [-] | | > Starting with a box they want to put in the basement of the Pentagon where Anthropic can't just switch off the AI. They already have that. By definition. If Anthropic has done the work to be able to run on classified networks, then it's already running air-gapped and is not under Anthropic's control. The thing is, being in a SCIF (1) doesn't mean you can just break laws, and (2) doesn't mean Anthropic has to support "off-label" applications. So this is not about what they have and what it can do today - it's about strong-arming Anthropic into supporting a bunch of new applications Anthropic doesn't want to support (and for which Anthropic or its engineers could then be held legally liable when a problem happens). | |
| ▲ | RobotToaster 7 hours ago | parent | prev [-] | | >akin to ordering Ford to build a tank variant of the F150. It worked for Porsche ¯\_(ツ)_/¯ |
| |
| ▲ | mitchbob 7 hours ago | parent | prev | next [-] | | Best summary by far that I've seen: https://www.astralcodexten.com/p/the-pentagon-threatens-anth... Discussed here: https://news.ycombinator.com/item?id=47154983 | |
| ▲ | jeffparsons 8 hours ago | parent | prev | next [-] | | Claude won't answer questions about what cities you should nuke in what order. The Pentagon wants Claude to answer those sorts of questions for them. Edit: oops, I misunderstood. This seems to be more about contractual restrictions. | | |
| ▲ | mardef 8 hours ago | parent [-] | | Claude will answer all of those questions. The restriction Anthropic has is letting Claude pull the trigger and vibe-murder with no humans in the loop. This restriction is apparently "radically woke" |
| |
| ▲ | 8 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | direwolf20 7 hours ago | parent | prev | next [-] | | They want Claude to process tasks like "identify the terrorists in this photo" and "steer this drone towards the terrorists" — Anthropic refused. | |
| ▲ | refulgentis 8 hours ago | parent | prev | next [-] | | I started to answer, but idk what you mean by the second question. Long story short, the Department of "War" wants Anthropic to say there are no restrictions on their use of Claude; Anthropic wants to say you can’t use Claude for domestic mass surveillance or for automating killing people, domestically or in foreign countries. The rest is just complication. And don’t peer too closely at the "Do"W" wants Anthropic to say $X" part; the Team Red line (or whatever’s left of them publicly after this last year) is basically "you can’t tell the gov’t what it can and can’t do, that’s it; it’s not that the Do"W" will use it for that." | |
| ▲ | nenadg 7 hours ago | parent | prev | next [-] | | top signal | |
| ▲ | ToucanLoucan 8 hours ago | parent | prev [-] | | > Can someone in plain terms explain what this is really about? This administration, built almost entirely of dunces and conmen, has convinced itself/been convinced that chatbots will help it decide where to send nukes, and/or its members are invested in the incredibly over-leveraged companies driving the AI boom and stand to profit directly by siphoning taxpayer dollars to said companies. My money is on the latter more than the former, but they're also incredibly stupid, so who's to say; maybe they actually think Claude can give strategic pointers. The Republicans have abandoned any pretense of actual governance in favor of pulling the copper out of the White House walls to sell, as they will have an extremely hard time winning any election ever again. After decades of crowing about the cabal of pedophiles that runs the world, we now know not only how true that actually is, but that the vast majority are conservatives and their billionaire buddies, the entire foundation and financial backing of what's now called the alt-Right, with some liberals in there for flavor too of course. If this shit were going down in France, the entire capital would have been burned to the ground twice over by now. | | |
| ▲ | chuckadams 7 hours ago | parent | next [-] | | > they will have an extremely hard time winning any election ever again Heard that one before. We'll get a reprieve of 4-8 years and the vote will go to the fascists again. Take that to the bank. | | |
| ▲ | ToucanLoucan 7 hours ago | parent | next [-] | | A girl can dream. | |
| ▲ | direwolf20 7 hours ago | parent | prev [-] | | Or there won't be another election. They keep telling us there won't be another election. Why aren't we more alarmed by that? Why are we assuming they're lying about that? |
| |
| ▲ | direwolf20 7 hours ago | parent | prev | next [-] | | I prefer to call them chatboxes. It's appropriately belittling. The department of killing wants their chatbox to tell them who to kill. | |
| ▲ | delaminator 7 hours ago | parent | prev [-] | | > If this shit was going down in France your view of France is severely outdated |
|
|
|
| ▲ | direwolf20 7 hours ago | parent | prev | next [-] |
| Yes. All companies that deal with the government have agreed to let the government do whatever it wants within the bounds of whatever it is those companies do. |
|
| ▲ | mcintyre1994 8 hours ago | parent | prev [-] |
| Probably just gonna go all in on MechaHitler! |