| ▲ | piker 6 hours ago |
| > The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. The emphasized language is the delta between what OpenAI agreed to and what Anthropic wanted. OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products. I personally can agree with both, and I do believe that the Administration's behavior towards Anthropic was abhorrent, bad-faith, and ultimately damaging to US interests. |
|
| ▲ | bertil 6 hours ago | parent | next [-] |
Can their solution recommend shooting at combatants lost at sea? This is key because it's the textbook example of a war crime. It's also something that the current administration has bragged about doing dozens of times. More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge? |
| |
| ▲ | godelski 3 hours ago | parent | next [-] | | > More succinctly: who decides what is legal here?
Why are people concentrating on legality? Look at the language: > The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.
It's not just "legal". Their usage just needs to be consistent with one of:
- legal
- operational requirements
- "well-established safety and oversight protocols"
Operational requirements might just be a free pass to do whatever they want. The well-established protocols seem like a distraction from the second condition. > who decides what is [consistent with operational requirements] here?
The Secretary of Defense. The same person who has directed people to do extrajudicial killings. Killings that would be war crimes even if those people were enemy combatants. There's also subtle language elsewhere. Notice the word "domestic" shows up between "mass" and "surveillance"? We already have another agency that's exploited that one... | |
| ▲ | fluidcruft 6 hours ago | parent | prev | next [-] | | The more relevant question is who is held accountable for the war crimes? OpenAI seem pretty confident it won't be OpenAI. I can see the logic if we were talking about dumb weapons--the old debate about guns don't kill people, people kill people. Except now we are in fact talking about guns that kill people. | |
| ▲ | saghm 5 hours ago | parent | prev [-] | | > This is key because it's the textbook example of a war crime. It's also something that the current administration has bragged doing dozens of times. > More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge? Yeah, there's a pretty strong case that anyone claiming to trust that the administration cares about operating in good faith with respect to the law is either delusional or lying. |
|
|
| ▲ | coffeefirst 6 hours ago | parent | prev | next [-] |
| Wait, one of those contracts says you may not build the Terminator. The other says you may build the Terminator if the DOD lawyers say it’s okay. This is a major distinction. |
| |
|
| ▲ | _alternator_ 5 hours ago | parent | prev | next [-] |
| The language allows for the DoD to use the model for anything that they deem legal. Read it carefully. It begins “The Department of War may use the AI System for all lawful purposes…” and at no point does it limit that. Rather, it describes what the DOW considers lawful today, and allows them to change the regulations. As Dario said, it’s weasel legal language, and this administration is the master of taking liberties with legalese, like killing civilians on boats, sending troops to cities, seizing state ballots, deporting immigrants for speech, etc etc etc. Sam Altman is either a fool, or he thinks the rest of us are. |
| |
|
| ▲ | NickNaraghi 6 hours ago | parent | prev | next [-] |
| That language is not consistent with: > No use of OpenAI technology to direct autonomous weapons systems |
| |
| ▲ | piker 6 hours ago | parent | next [-] | | That depends on whether you view the cited authorities as already prohibiting that usage. I don't have an opinion on that, but some folks on both sides of the aisle might have strong arguments that they do. | | |
| ▲ | tensor 6 hours ago | parent [-] | | It's still not consistent. OpenAI made a statement that simply isn't true. They agree to all lawful use, INCLUDING using it to deploy weapons as long as it's legal. It happens to not be legal at the moment, but that doesn't mean it can't be changed and authorized. | | |
| ▲ | piker 6 hours ago | parent [-] | | That's a fair point, and I'm not so much defending sama's statements after the fact but rather trying to rationalize the OpenAI position. | | |
| ▲ | pamcake 4 hours ago | parent | next [-] | | OpenAI and sama are literally saying they are fine with facilitating (and even performing) any scale of killing and surveillance as long as they're not held accountable. | |
| ▲ | miltonlost 6 hours ago | parent | prev [-] | | Rationalize the OpenAI position? Sam Altman gets money from DoD. He has no morals. He doesn't care if people die because of his product. It's not hard. |
|
|
| |
| ▲ | purple_ferret 6 hours ago | parent | prev | next [-] | | We live in a world of Trump-esque "truths" where if you claim something once, nothing subsequent matters. Not surprised to see a guy like Altman adopt the strategy | |
| ▲ | 6 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | avaer 5 hours ago | parent | prev | next [-] |
The word "legal" is doing all of the heavy lifting, considering the countless adjudicated-illegal things the government is doing publicly. What happens behind classified closed doors? I guess you can consider it a moral stance that if the government constantly does illegal things, you wouldn't trust them to follow the law. I know that's not what Anthropic said, but that's the gist I'm getting. |
| |
| ▲ | kivle 4 hours ago | parent [-] | | Does legal include international law, which the US has broken numerous times the last two days? | | |
| ▲ | soraminazuki 3 hours ago | parent [-] | | > This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding. https://constitution.congress.gov/constitution/article-6/ |
|
|
|
| ▲ | notepad0x90 6 hours ago | parent | prev | next [-] |
No, this is very devious and insidious. What the executive branch believes is legal is the real agreement here. Trump can say anything is legal and that's that. There is no judicial oversight, there are no lawyers defending the rights of those who are being harmed. Trump can tell the pentagon "everyone in minnesota is a potential insurrectionist, do mass surveillance on them under the patriot act and the insurrection act". Mass surveillance doesn't require a warrant; that's why they want it, that's why it's "mass". Warrants mean judicial oversight. Anthropic didn't disagree with surveillance where a court (even a FISA court!!) issued a warrant. Trump just doesn't want to go through even a FISA court. This is pure evil from Sam Altman. Is anyone listing these people's names somewhere for posterity's sake? I'd hate to think this would all be forgotten. From Altman to Zuckerberg, if justice prevails they'll be on the receiving end of retribution. |
| |
| ▲ | piker 6 hours ago | parent | next [-] | | That view does seem to be consistent with Anthropic's. It's sad if true, since it implies a belief that the system cannot be just in modern contexts. | | |
| ▲ | notepad0x90 6 hours ago | parent [-] | | Mass surveillance is explicitly unlawful in the US. It is in the bill of rights. By definition it is injustice under the law. Even for terrorists in the US they have to go through a FISA court and get warrants. Consider this: the bill of rights stipulates that a soldier cannot be stationed on your property in times of peace, but in times of war it will be allowed. It makes exceptions for times of war. But even in times of war, the 4th amendment's search and seizure protection doesn't have an exception. Even in times of insurrection and rebellion. To deliberately violate that for personal and political reasons, that in itself is treason. With that intent alone, even without action, it invalidates all legitimacy that government has. If a clause in a contract is broken, the contract is broken. The bill of rights is the contract between the people and their government that gives the government its powers to rule, in exchange for those rights. With the contract explicitly, deliberately, and with provable malicious intent broken, the whole agreement is invalidated. I'll even say this: the US military itself is on the hook if they stand by and let this happen. | |
| ▲ | kelseyfrog 6 hours ago | parent | next [-] | | On the hook for what? The current US government has a fundamentally different ontology for the derivation of human rights. Whereas you and I likely agree that human rights are inalienable, being derived from the universal nature of human experience, the administration believes that human rights begin and end with them, the state. When they're the one able to affect the world with violence, it doesn't matter who's on the hook. The US electorate thought they could heal a status wound by authoritarianism instead of therapy, and everyone else is paying the price. | |
| ▲ | notepad0x90 2 hours ago | parent [-] | | On the hook for whatever comes after. Best case scenario, democrats will peacefully take control again and pretend to forget about Sam's complicity. But he'll still face civil suits, I hope personally as well as the company itself. Worst case, the current admin will make Nazis look like cosplayers, and within a decade or so he'll be standing next to other CEOs facing a tribunal in front of whatever entity managed to topple the former regime, under war-crime terms that are yet to be defined, for atrocities which, if history teaches us anything, will be so horrific that our current ability to imagine atrocities is insufficient to let us speculate on their nature. In short, whatever Trump does with OpenAI, Sam Altman is in the "whatever Trump wants to do was lawful" camp. Even then, perhaps the next regime will fail to learn from history and focus on rebuilding, but if they do learn from history they'll understand that you really can't hold back when it comes to these things. We're in this mess because of the failure to sufficiently punish the Nazis and the Confederates in the US, both of which lasted only for about half a decade, by the way. It isn't enough to teach people how horrible Nazis and Confederates were; the German approach is sensible, but a more extreme approach might be required. Funny thing is, this might just save OpenAI from total collapse. But if this is the price to keeping the economy alive, even at my own personal cost I hope the economy collapses completely along with these companies and regime. | |
| ▲ | kelseyfrog 2 hours ago | parent [-] | | I'm so sorry, but the closure of justice will never occur. The United States is incompatible with its existence. As much as a third reconstruction is desperately needed, my desire for its existence is not materially tied to it being rendered into the world. |
|
| |
| ▲ | Nevermark 6 hours ago | parent | prev | next [-] | | > I'll even say this, the US military itself is on the hook if they stand by and let this happen. That would most definitely not be the Constitutional recourse. Or a sensible approach. If that happens, the Constitution is past tense. Congress and the Supreme Court are the recourse. If they don't hold up the Constitution, then violence or even a non-violent military coup, however well intended, is not going to put the splattered egg back together again. The last two and a half decades have seen all four presidents, Congress, the Supreme Court, and both parties allow blatantly unconstitutional surveillance to become the norm (evolving an adaptive fig leaf of intermediaries), and presidential military actions entirely blur out the required Congressional oversight. That the weakening of loyalty to the Constitution has been pervasive on those serious counts is one of the reasons it has been so easy to undermine further. When governing bodies become familiar with the convenient practice of "deciding" what the constitution means, without repercussions, that lost respect becomes very hard to reinstate. | |
| ▲ | notepad0x90 2 hours ago | parent [-] | | They swore an oath to defend the constitution of the US against enemies both foreign and domestic. It is entirely lawful for them to fulfill that duty. If the commander in chief and the civilian administration are clearly and unquestionably violating the constitution, they are no longer legitimate. If they are acting to harm the american people, acting as agents of a foreign enemy or as a domestic enemy to harm the american people, then they are not only illegitimate but the military is oath-bound to fight them with necessary force. > That the weakening of loyalty to the Constitution has been pervasive on those serious counts, is one of the reasons it has been so easy to undermine further. I can agree with that; that is because the people who swore an oath to defend it have not done so. They wave flags like it's a sports team they're cheering for. Ultimately, the design of the constitution is such that either the people taking up arms, or a patriotic military resisting the government, would serve as the ultimate recourse. The system of checks and balances works so long as consequences are still a thing. If in the 1800s a president decided to do half the things Trump did, anyone could shoot his face off and get away with it without consequence. These things aren't practical anymore. The military has the duty to resist unlawful orders. But if a russian agent usurped the US government and civilians are incapable of doing something about it, then that's what they're there for. The military doesn't exist to bomb foreign countries thousands of miles away; it is there to defend the homeland. The original idea was that if laws are no longer a thing (obeyed by the government), the lawlessness would be too terrifying for those in power, therefore lawfulness is in their interest. |
| |
| ▲ | piker 6 hours ago | parent | prev [-] | | Right, which is probably the point made by the negotiators on behalf of the US Government. "We don't want Anthropic's standard, we want the Constitution." | | |
| ▲ | notepad0x90 6 hours ago | parent [-] | | Maybe I'm misunderstanding, but are you taking the gov's side? Anthropic's standard was the constitution's. The executive branch has no authorization under US law to perform surveillance of any kind on its own. OpenAI will now be breaking US law; Anthropic simply decided to obey US law. The US government can update its laws and come back to Anthropic, or do what they just did. | |
| ▲ | piker 5 hours ago | parent [-] | | No, I'm not taking the government's side. I'm telling the government's side. That's probably true that the executive branch can't do those things, but it may be able to do so in the future. Thus, Anthropic's rule would then be inconsistent with the laws applying to the government. > The US government can update its laws and come back to Anthropic No, this I do take issue with. It's the people who update the U.S. government's laws. | | |
| ▲ | notepad0x90 3 hours ago | parent [-] | | The people via their elected reps, i.e. the government. The government is of the people and by the people. They're not different if democracy is truly working. > but it may be able to do so in the future. You don't obey laws in the future, you obey laws today. Companies have an obligation to follow the laws as written today. Not only that, as americans they and all americans have a patriotic and civic duty to resist attempts to bypass or undermine the constitution of their country. You literally can't be patriotic or loyal to your country without doing so; it is what constitutes the country. It's not like Anthropic can't update their guardrails and contracts once the laws of the land are updated. They simply resisted a criminal and treasonous abuse of power. |
|
|
|
|
| |
| ▲ | jstummbillig 6 hours ago | parent | prev | next [-] | | > Trump can tell the pentagon "everyone in minnesota is a potential insurrectionist, do mass surveillance on them under the patriot act and the insurrection act". This is just incoherent. You can't have US companies fix an unhinged US government. If the government runs wild, there are some serious questions to be asked at a state level, about how that could happen, how to fix it quickly, and how to prevent it in the future – but I should hope none of them concern themselves with the ideas of individual company owners, because if the government can de facto do what it wants regardless of legality, the next thing that this government does could simply be pointing increasingly non-metaphorical guns at individual AI company functionaries. | |
| ▲ | notepad0x90 3 hours ago | parent [-] | | > This is just incoherent. You can't have US companies fix an unhinged US government. Which part? No one expects them to fix the government; matter of fact, they should stay far away from it. However, they have a duty to obey the law and to be patriotic. All companies must resist attempts by the government to betray its people, because the government derives its authority from the people; therefore, in its betrayal it has become an illegitimate enemy of the people instead of their legitimate government. > because if the government can de fact do what it wants regardless of legality the next thing that this government does could simply be pointing increasingly non-metaphorical guns at individual AI company functionaries. It feels like you and half the country never even watched movies about Nazi Germany. The government can do whatever it wants, but whether it is companies, individuals working for it, or soldiers under orders, the government's authority does not excuse their participation. The government can't do anything at all on its own; it needs people to do it. If Obama wanted to get Anthropic to let their models aid al-qaeda with attacking America, should Anthropic say "oh well, since you're the government, go ahead"? This is the same thing. Ever heard of the phrase "enemies foreign or domestic" in the swearing of oaths? Company executives are beholden to the laws of the country they operate in. I mean, with the Nazis, at least their orders, and the orders of companies under their regime, were lawful; even then it was not an excuse, but they just changed the laws to make their orders lawful. Right now, we have laws and the government is breaking them; even "I followed lawful orders" isn't an excuse. Sam Altman is complicit in the violation of the American constitution and the betrayal of its people. If all else fails, I expect the government to just train their own models.
In which case, I'd say the engineers working in that effort should have resisted. |
| |
| ▲ | s5300 6 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | saghm 6 hours ago | parent | prev | next [-] |
| > OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products. What if Anthropic's morals are "we won't sell someone a product for something that it's not realistically capable of doing with a high degree of success"? The government can't do something if it's literally impossible (e.g. "safe" backdoors in encryption), but it's legal for them to attempt even when failure is predetermined. We don't know that's what's going on here, but you haven't provided any evidence that's sufficient to differentiate between those scenarios, so it's fairly misleading to phrase it as fact rather than conjecture. |
| |
| ▲ | pamcake 4 hours ago | parent [-] | | Isn't it more accurate here to consider OpenAI and Anthropic as service providers rather than a manufacturer of product? | | |
| ▲ | LoganDark 4 hours ago | parent [-] | | The service they provide is on-premises deployment, I guess. But what they are deploying is a product. | | |
| ▲ | pamcake 2 hours ago | parent [-] | | The relevant (unanswered?) question for this thread is who's operating and managing that deployment, and to what extent the provider (or subcontracted FDEs) is involved in integrations. I would be surprised to learn the deployment is actually independently operated. Sure, the machinery can be considered a product, but the associated service and support engagements are at least as relevant to take into account. |
|
|
|
|
| ▲ | donmcronald 6 hours ago | parent | prev | next [-] |
| Does the US have any laws that require human control of autonomous weapons? Isn’t that a contradiction? |
|
| ▲ | serial_dev 5 hours ago | parent | prev | next [-] |
Didn't fully follow the saga, but isn't their "imposing their own morals" just "we do not want to allow you to let our AI go on an unsupervised killing spree"? |
|
| ▲ | twobitshifter 6 hours ago | parent | prev | next [-] |
| Even if the autonomous weapon systems ‘perform as intended’, this does not in any way mean that they are not an enormous danger. Secondly, as that is department policy and not a law or regulation, they appear to be saying that the cited directive is presently the only thing standing between the DOD and the use of autonomous weapons. If that’s the case how hard is it to change or alter a directive? |
|
| ▲ | lkey 6 hours ago | parent | prev | next [-] |
| The United States Military, in its official capacity, has been performing illegal, extrajudicial assassinations of civilians in international waters for months now. We have been sharing technology and weapons with Israel while it prosecutes a genocide in contravention of both US and International law. We are currently prosecuting a war on Iran that is illegal under both US and International law. Any aid given to such a force is to underwrite that lawlessness and it shows a reckless disregard for the very notion of a 'nation of laws'. When OpenAI says, 'The Military can do what is legal', full in the knowledge that this military has no interest in even pretextual legality, one has to wonder why you hold that you 'agree with' both of these decisions. Do you believe the flimsiest of lies in other aspects of your life? |
|
| ▲ | Hamuko 6 hours ago | parent | prev | next [-] |
| And who decides what's legal? The US was collecting illegal tariff revenue for ten months. Does OpenAI need to wait for the Supreme Court to strike down autonomous killbots? |
| |
| ▲ | notepad0x90 6 hours ago | parent | next [-] | | That's the devil in the details. Sam Altman's insult upon injury: treating the public as idiots on top of being a collaborator. The answer to your question is that the government decides what is legal — as in the executive branch; in the pentagon, the commander in chief decides. So essentially, they can do whatever they want so long as they call it legal. As I said in a sibling comment, mass surveillance cannot be considered legal in the US under any context: not even war, emergency, terrorism, nuclear strike, national security reasons, imminent danger to the public, etc. Targeted surveillance can, scoped surveillance of a group of people can, but not mass surveillance. In other words, Sam Altman is saying "This thing can never be legal short of a constitutional amendment, but so long as Trump says it is, we'll look the other way". What a two-faced <things i can't say on HN> this guy is! I really hope Google poaches all his top engineers. If any of you are reading this, I ask you this: I get working for money, but will Google or Anthropic offer you all that much less? Consider the difference in pay when you put a price on your conscience. | |
| ▲ | soraminazuki 2 hours ago | parent [-] | | Google? They have a terrible track record on upholding moral principles. They helped Chinese censorship, wrote software for American killer drones, and offered their services to genocidal regimes. They fired dissenting employees. They are one of the worst companies to be rooting for. | | |
| ▲ | notepad0x90 2 hours ago | parent [-] | | This isn't about moral principles. In china, censorship is legal. In the US mass surveillance is not. Even for those "genocidal regimes", it was lawful use. even now, both anthropic and openai agree that their models can be used in war and censorship just like with china, since those things are lawful. Even with genocide, from what i understand, the safeguard is that humans have to be in the loop, not that it won't aid the efforts. I don't expect companies to be moral, but I do expect them to be patriotic, and to obey the law. And I also expect the government to punish them sufficiently when they fail to do so. The morality part is for the people to legislate or some other way enact laws to reflect their beliefs. Companies don't get a vote at the ballot box and they certainly are not agents for moral arbitrage between a government and its people. |
|
| |
| ▲ | piker 6 hours ago | parent | prev [-] | | Yes, I think that would be the idea. Again, not my view, but we give police officers license to use lethal force and often the victims of their abuse of that power have no recourse because they're already dead. |
|
|
| ▲ | rendx 6 hours ago | parent | prev | next [-] |
| > OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products. Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"? What happened to "We give each other the freedom to hold beliefs and act accordingly unless it does harm"? How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need? That sounds like you're buying into the reversed victim and offender narrative. And this is not about whether one agrees with their beliefs. It is about giving others the right to have their own. |
| |
| ▲ | coeneedell 6 hours ago | parent | next [-] | | I have the right not to sell poison to someone who I have reason to believe will use it to kill a third party. The idea of simply trusting the patron to be responsible makes sense when the patron is anonymous or a new contact. It's generally good to assume good intentions in the absence of evidence, I think. But the government is not anonymous enough to get this treatment. | |
| ▲ | jxf 5 hours ago | parent [-] | | Governments have a long, long history of using "poison to kill a third party", to use your analogy. |
| |
| ▲ | marcellus23 6 hours ago | parent | prev | next [-] | | The GP's use of the word "impose" didn't seem perjorative to me or suggest that Anthropic is the offender and the government is the victim. I think you're reading a lot into a simple word choice and this response seems way too hostile. | | |
| ▲ | jdgoesmarching 5 hours ago | parent | next [-] | | Are you really going to pretend that “impose their morals” is a completely value-neutral statement? | | |
| ▲ | piker 5 hours ago | parent | next [-] | | It certainly was intended as such. In a commercial transaction, that's what they're doing. They don't think it's moral to use their product in certain ways. They are thus prohibiting their customer from using it in such ways. But, as I've said, I tend to agree with both Anthropic and the Administration's positions. What was wrong here is that rather than just terminating the contract, the Administration went nuclear. | |
| ▲ | crazygringo 5 hours ago | parent | prev | next [-] | | It seems value-neutral to me. It's descriptive. Particularly for anyone who understands that different groups of people will legitimately disagree on many moral questions. | |
| ▲ | kcplate 5 hours ago | parent | prev [-] | | What would be the value neutral way to phrase it? | | |
| ▲ | AntiDyatlov 5 hours ago | parent [-] | | "Anthropic wanted its product to not be used in ways that contradict its ethics". "Impose" makes it sound like Anthropic is being hostile here. And also, I don't think this is a situation that calls for moral relativism. |
|
| |
| ▲ | hn_throwaway_99 5 hours ago | parent | prev [-] | | A "simple word choice"?? This isn't just about the single word "impose", read the whole post: > Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted. > OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products. So first off, regarding that first paragraph, didn't any of these idiots watch WarGames, or heck, Terminator? This is not just "oh, why are you quoting Hollywood hyperbole" - a hallmark of today's AI is we can't really control it except for some "pretty please we really really mean it be nice" in the system prompt, and even experts in the field have shown how that can fail miserably: https://www.tomshardware.com/tech-industry/artificial-intell... Second, yes, I am relieved Anthropic wanted to "impose" their morals because, if anything, the current administration has been loud and clear that the law basically means whatever they say it does and will absolutely push it to absurd limits, so I now consider "legal limits" absolutely meaningless - what is needed are hard, non-bullshit statements about red lines. Anthropic stood by those, and Altman showed what a weasel he is and acceded to their demands. |
| |
| ▲ | ApolloFortyNine 5 hours ago | parent | prev | next [-] | | >Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"? >How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need? The department of defense in particular has a law on the books allowing them to force a company to sell them something. They generally are more than willing to pay a pretty penny for something, so it hardly ever needs to be used, but I'd be shocked if any country with a serious military didn't have similar laws. So you're right when it comes to private citizens, but the DoD literally has a special carve-out on the books. A lawsuit challenging it would have actually been insane from Anthropic, because they would have had to argue "we're not that special, you can just use someone else" in court. A clearer example would be: what would you expect to happen if Intel and AMD said our chips can't be used in computers that are used in war? | |
| ▲ | convolvatron 4 hours ago | parent [-] | | But it's not a national emergency. It's not a time of war. And there is a difference between demanding to be a customer and demanding that you change your products because they would like them to be a different way. That is actual conscription. For many decades, the DoD has used a carrot to get what they want. This is a stick.
| |
| ▲ | 6 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | rozal 6 hours ago | parent | prev | next [-] | | [dead] | |
| ▲ | morkalork 6 hours ago | parent | prev | next [-] | | [flagged] | | |
| ▲ | lkey 6 hours ago | parent [-] | | I'd like to order one remedial first amendment education for this rage baiting user, who appeared fully formed from a conservative forum circa 2008. |
| |
| ▲ | nickysielicki 5 hours ago | parent | prev [-] | | Nobody is saying that Anthropic has to shut down. They’re just saying that nobody taking government money can pay Anthropic for their service as a part of that contract. Anthropic still has the right to exist on their own terms, but their business model is based on rapidly-increasing enterprise subscriptions, which included public sector spending. If Anthropic can survive on open source contributors shelling out $200/mo and private sector companies doing the same, the government wishes them well. But surely you agree the government has a right to determine how its budget is appropriated? | | |
| ▲ | specialp 5 hours ago | parent | next [-] | | Well, it depends. Given that the federal government constitutes 20% of the US economy, telling federal agencies you cannot contract with someone because they are adversarial to the USA is indeed pretty severe. When in reality they are not adversarial. We have no choice but to pay taxes and make the federal government 20% of our economy. There is no single company or any other entity that is close, and extending it to everyone who has a government contract probably makes it the majority of the economy. So it is not at all equivalent to a private company making a choice. | | |
| ▲ | nickysielicki 5 hours ago | parent [-] | | > When in reality they are not adversarial. This is obviously subjective, and the only subject that matters in this case is the leadership at the DoD. > We have no choice but to pay taxes and make the federal government 20 percent of our economy. There is no single company or any other entity that is close. And extending it to everyone who has a government contract probably makes it the majority of the economy. I, too, hate big government and the all-powerful executive branch. Welcome to my tent. Let’s invent a time machine together so we can elect Ron Paul in 2008 and nip this in the bud. Until then, this is what we’re stuck with. |
| |
| ▲ | rootusrootus 5 hours ago | parent | prev [-] | | > But surely you agree the government has a right to determine how its budget is appropriated I think the government doesn't have rights; it is my elected representative. And I do not agree with it trying to punish a company for not agreeing to contract terms. |
|
|
|
| ▲ | 827a 5 hours ago | parent | prev [-] |
My interpretation of the difference is more like: Anthropic wanted the synchronous real-time authority to say "No, we won't do that" (e.g. by modifying system prompts, training data, Anthropic people in the loop with shutdown authority). OpenAI instead asked for the asynchronous authority to re-evaluate the contract if it is breached (e.g. the DoD can use OpenAI tech for domestic surveillance, but there's a path to contract and service termination if it does). If my read is correct: I personally agree with the DoD that Anthropic's demands were not something any military should agree to. However, as you say, the DoD's reaction to Anthropic's terms is wildly inappropriate and materially harmed our military by forcing all private companies to re-evaluate whether selling to the military is a good idea going forward. The DoD likely spends somewhere on the order of ~$100M/year with Google; but Google owns a 14% stake in Anthropic, which spends at least that much if not more on training and inference. All in all, that relationship is worth on the order of ~$10B+. If Google is put into the position of having to decide between servicing DoD contracts or maintaining Anthropic as an investee and customer, it's not trivially obvious that it would pick the DoD unless forced to with behind-the-scenes threats and the DPA. Amazon is in a similar situation; it's only Microsoft whose contracts with the DoD are large enough that its decision is obvious. Hegseth's decision leaves the DoD, our military, and our defense materially weaker by both refusing federal access to state-of-the-art technology and creating a schism in the broader tech ecosystem where many players will now refuse to engage with the government. Either party could have walked away from negotiations if they were unhappy with the terms.
Alternatively: the DoD should have agreed to Anthropic's red lines, then constrained/compartmentalized its usage of Anthropic's technology to a clearly limited and non-combat capacity until re-negotiation and expansion of the deal could happen. Instead, we get where we're at, which is not good. IMO: I know a lot of people are scared of a fascist-like future for the US, but personally I'm more fearful of a different outcome. Our government and military have lost all capacity to manufacture and innovate. It's been conceded to private industry, and it's at the point where private industry has grown so large that companies can seriously say "ok, we won't work with you, bye" and it just be, like, fine for their bottom line. The US cannot grow federal spending and cannot find a reasonable path to taxing or otherwise slowing down the rise of private industry. We're not headed into fascism (though there are elements of that in the current admin); we're headed into Snow Crash. The military is just a thin coordination layer of operators piecing together technology from OpenAI, Boeing, Anduril, Raytheon. Public governments everywhere are being out-competed by private industry, and in some countries it feels like industry merely tolerates the government because it still has some decreasing semblance of authority, but especially in the US that semblance of authority has been on a downward trend for years. Google's revenue was 7% of the US Federal Government's revenue last year. That's fucking insane. What happens when we get to the point where federal debt becomes unserviceable? When Google or Apple or Microsoft hit 10%, or 15%? Our government loses its ability to actually function effectively, and private industry will be there to fill the void. |