| ▲ | danbrooks 5 hours ago |
| Props to Dario and Anthropic for taking a moral stand. A rarity in tech these days. |
|
| ▲ | janalsncm 4 hours ago | parent | next [-] |
| Agreed. You don’t have to be an LLM maximalist or a doomer to see the opportunity for real, practical danger from ubiquitous surveillance and autonomous weapons. It would have been extremely easy for Dario to demonstrate the same level of backbone as Sam Altman or Sundar Pichai. |
|
| ▲ | Computer0 4 hours ago | parent | prev | next [-] |
| There is no moral leg to stand on here. He says in plain English that if they wanted to use Claude to perform mass surveillance on Canada, Mexico, the UK, or Germany, that would be perfectly fine. |
| |
| ▲ | sfink 3 hours ago | parent | next [-] |
| This is a public note, but directed at the current administration, so reading it as a description of what is or is not moral is completely missing the point. This note is saying (1) we refuse to be used in this way, and (2) we are going to use "mass surveillance of US citizens" as our defensive line because it is at least backed by Constitutional arguments. Those same arguments ought to apply more broadly, but attempts to use them that way have already been trampled on and so would only weaken the arguments as a defense. If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance. |
|
| ▲ | buzzerbetrayed 4 hours ago | parent | prev | next [-] |
| Perhaps you just have different moral values? I suspect each of the countries you mentioned spies on us. I also suspect we spy on them. I’m glad an American company wouldn’t be so foolish as to pretend otherwise. |
|
| ▲ | hungryhobbit 4 hours ago | parent | prev [-] |
| [flagged] |
|
|
| ▲ | Fricken 3 hours ago | parent | prev | next [-] |
| We knew long before AI was a twinkle in Amodei's eye that if it were to be built, then it would be co-opted by thugs. Anthropic's statement is little more than pageantry from the knowing and willing creators of a monster. |
|
| ▲ | rvz 5 hours ago | parent | prev | next [-] |
| [flagged] |
| |
| ▲ | ben_w 5 hours ago | parent | next [-] |
| For now is all we ever have, unfortunately. I miss the days when the mega-brands whose work I admired still did such work. |
|
| ▲ | Qem 4 hours ago | parent | prev | next [-] |
| > Anthropic will betray you for a multi-year government contract worth tens of billions of dollars. |
| What are the odds they will rebrand as Misanthropic by then? |
|
| ▲ | ternwer 4 hours ago | parent | prev | next [-] |
| So you think we should never support them doing something "positive"? What incentive does that give? |
|
| ▲ | astrange 4 hours ago | parent | prev [-] |
| Anthropic is a PBC, and if they violate the terms of that, the shareholders (you) can sue them for securities fraud. |
|
|
| ▲ | ekianjo 4 hours ago | parent | prev | next [-] |
| You know this is pure PR, right? |
| |
| ▲ | flawn 4 hours ago | parent [-] |
| What do you mean? You think Hegseth and Anthropic are doing this for PR reasons? |
|
|
| ▲ | dddgghhbbfblk 4 hours ago | parent | prev | next [-] |
| A moral stand? ... What? Did we read the same statement? It opens right out the gate with: |
| > I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. |
| > Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more. |
| which I find frankly disgusting. |
| |
| ▲ | adastra22 4 hours ago | parent | next [-] |
| Freedom isn’t free. Someone has to defend the democratic values that you and I take for granted. Dario’s statement is in support of the institution, not the current administration. |
|
| ▲ | cwillu 3 hours ago | parent | next [-] |
| The democratic values I take for granted are under direct threat from the US. Your government is literally funding separatist movements in my country. |
|
| ▲ | jackp96 3 hours ago | parent | prev | next [-] |
| I mean, obviously. But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending? 9/11? Pearl Harbor? Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries. You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment. |
|
| ▲ | adastra22 3 hours ago | parent [-] |
| You have the causality at least partially backwards. Why has it been so long and infrequent that the US has been in direct conflict with authoritarian adversaries? Because we have a giant military and a willingness to use it. Pacifism and isolationism do not work as defensive strategies. |
|
| ▲ | DiogenesKynikos 4 hours ago | parent | prev [-] |
| The last time the US defended freedom through military means was WWII. As Abraham Lincoln said, the greatest threat to freedom in America is a domestic tyrant, not a foreign army. |
|
| ▲ | adastra22 3 hours ago | parent [-] |
| Korea, Vietnam, Panama, Grenada, Libya, Lebanon, Iraq War I, Somalia, Haiti, Bosnia, Kosovo, Afghanistan, and Iraq War II were all fought for or over democratic ideals & the defense of democratic institutions. All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique. But it is absolutely not the case that the last time the US defended freedom through military means was WWII. |
|
| |
| ▲ | joemi 4 hours ago | parent | prev | next [-] |
| They are undeniably taking a moral stand. Among other things, the statement explains that there are two use cases that they refuse to do. This is a moral stand. It might not align with your morals, but it's still a moral stand. |
|
| ▲ | tylerchilds 4 hours ago | parent | prev [-] |
| I feel like the deepest technical definition of autocratic is “fully autonomous weapons”? |
|
|
| ▲ | bogzz 5 hours ago | parent | prev [-] |
| This is not how the word "moral" should be used in a sentence that also has the name Dario Amodei in it. |
| |
| ▲ | plaidthunder 5 hours ago | parent | next [-] |
| Words are cheap. Actions aren't. Dario Amodei is putting his company on the line for what he believes in. That's courage, character and... yes, morality. |
|
| ▲ | sheikhnbake 4 hours ago | parent | next [-] |
| I have a feeling this is just a negotiation tactic leveraging public sentiment rather than a stance based on morality. |
|
| ▲ | tfehring 4 hours ago | parent [-] |
| It's both - it's clearly at least partly for moral reasons that they're even in the negotiation that they need leverage for. |
| |
| ▲ | bogzz 5 hours ago | parent | prev | next [-] |
| I am convinced that Amodei's "morality" is purely performative, and cynically employed as a marketing tactic. Time will tell, but most people will forget his lies. |
|
| ▲ | jstanley 5 hours ago | parent | next [-] |
| How should he have acted instead? |
|
| ▲ | khazhoux 4 hours ago | parent | next [-] |
| Yeah. “Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.” |
|
| ▲ | bogzz 4 hours ago | parent | prev [-] |
| We don't know how the military intended to use Claude, and neither we nor the military knows whether Claude without RLHF-imposed safety would have been more useful to them. Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that Claude is being used in autonomous weapons, which I find almost amusing. He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while. Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible. Oh, also the stealing. All the stealing. But he is not alone there by any means. |
| edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did. |
|
| ▲ | astrange 4 hours ago | parent | next [-] |
| > to promote his product with the silent implication that LLMs actually ARE a path to AGI |
| That isn't implied. The thought process is (a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well, and (b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them. Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral? |
|
| ▲ | ternwer 4 hours ago | parent | prev [-] |
| His actions seem pretty consistent with a belief that AI will be significant and societally changing in the future. You can disagree with that belief, but it's different to him being a liar. The $200m is not the risk here. They threatened labelling Anthropic a supply chain risk, which would be genuinely damaging. |
| > The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it. |
| > All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers] |
| ... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand. |
|
| |
| ▲ | janalsncm 4 hours ago | parent | prev | next [-] |
| It’s possible Dario is a bad person pretending to be good and Sundar is a good person only pretending to be bad. People argue whether true selflessness exists at all or whether it’s all a charade. But if the “performance” involves doing good things, at the end of the day that’s good enough for me. |
|
| ▲ | signatoremo 4 hours ago | parent | prev | next [-] |
| Standing up to the US government has real and serious consequences. Pete Hegseth threatened to label Anthropic a supply chain risk, meaning not only is Anthropic likely dropped as a Pentagon supplier, but it also risks losing companies that do business with the military as customers, such as Boeing or Lockheed Martin. Whatever tactic you think he is employing, that’s potentially massive revenue lost, at a time when they need any business they can get. |
|
| ▲ | chasd00 3 hours ago | parent [-] |
| Amazon does business with the DOD/W. That’s a pretty dangerous game of brinkmanship Anthropic is playing. |
| |
| ▲ | startupsfail 4 hours ago | parent | prev [-] |
| Don't be evil. |
| |
| ▲ | mvkel 5 hours ago | parent | prev | next [-] |
| These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree." |
|
| ▲ | layer8 4 hours ago | parent | next [-] |
| The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, which the DoD wants removed. Anthropic refuses to remove them and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”. |
|
| ▲ | mvkel 3 hours ago | parent [-] |
| The "safeguards" you are referring to are contractual, i.e. words. There are no technical safeguards, per the article. The memo literally says that the reason they have these policies is -because- actual technical guardrails are not reliable enough. |
| |
| ▲ | janalsncm 4 hours ago | parent | prev [-] |
| It’s a contract dispute. Contracts are more than just talk. While it is true that the DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place. |
|
| ▲ | mvkel 4 hours ago | parent | next [-] |
| You should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching. The NSA and other three-letter agencies happily do it under cloak and dagger. |
|
| ▲ | mhitza 3 hours ago | parent | prev [-] |
| What's the US history around nationalization? Would "confiscation" ever be a likelihood on escalation? On a quick search I came up with an article that, at least thematically, proposes such ideas about the current administration: "Nationalization by Stealth: Trump’s New Industrial Playbook" https://thefulcrum.us/trump-state-control-capitalism |
|
| |
| ▲ | slg 4 hours ago | parent | prev | next [-] |
| Is it morality, or is it recognizing that providing the brain of autonomous weapons has a non-zero chance of ending up with him on trial in The Hague? |
|
| ▲ | sebzim4500 4 hours ago | parent | next [-] |
| This action is far more likely to land him in prison than complying with the Pentagon. |
|
| ▲ | slg 4 hours ago | parent [-] |
| I disagree. There is a class of leaders in this country that is complicit with the administration's use of violence on the tacit understanding that the violence not be directed at them. Arresting one of those people would be an act of desperation that would likely cause the rats to flee the sinking ship. And it isn't even clear if Trump could actually manufacture any charges here. Look at the dropped charges against Mark Kelly and those other politicians as an example. The administration might be able to make up stories to arrest random immigrants and college kids, but they clearly haven't been able to indiscriminately jail powerful political opponents. Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility. |
| |
| ▲ | inigyou 4 hours ago | parent | prev [-] |
| The chance is zero. This won't be deployed in the countries he'd want to visit anyway, the ones that would extradite him to The Hague. |
|
| ▲ | mobilefriendly 3 hours ago | parent [-] |
| In all seriousness, The Hague has no jurisdiction over Americans, and Congress has already authorized military use of force against Brussels should they ever attempt to prosecute Americans. |
|
| |
| ▲ | verdverm 4 hours ago | parent | prev [-] |
| It's not so clear the company is actually on the line. The government may yet compel Anthropic to do what it is not willing to do; this is not the final act. The government needs to respond, then Anthropic will need to respond, and courts may become involved at that point, depending on whether Anthropic acquiesces or not. Make a prominent statement against it while in the news cycle, then let the rest unfold under less media attention. |
| |
| ▲ | davidw 4 hours ago | parent | prev [-] |
| It's a little bit better than what so many sniveling, cowardly elites are doing right now. |
|