skissane 12 hours ago
You have to distinguish between political rhetoric (“woke”) and the substance of the dispute.

The substance: traditionally, defense contracts don’t have clauses in them limiting what the military can do with the acquired technology. If Boeing or Lockheed Martin or Northrop Grumman sells a missile system to the Pentagon, they don’t try to impose contractual limits on who the Pentagon can fire the missiles at. Now, for some types of contracts (e.g. contracts to provide personnel) the Pentagon is used to contractual terms limiting uses, but not for hardware or software used in weapons systems, military planning, etc.

Along comes Anthropic, who argue AI is a fundamentally different technology to which the old rules shouldn’t apply: they want contractual terms prohibiting certain uses (autonomous weapon systems without a human in the loop; domestic mass surveillance). The Biden admin buys the argument and agrees to those novel contractual terms. The Trump admin takes over, objects to them, and demands they be renegotiated. I think it was primarily a matter of principle and power (“software vendors don’t get to tell us what we can and can’t do”) rather than some immediate plan to do things the contract prohibits.

OpenAI negotiated a contract which replicated those terms, but with the proviso that the terms only apply insofar as they reiterate existing legal limits. Anthropic objected to that as a meaningless fudge: “we promise not to do X if X is illegal” is very weak, especially when contracting with the government. Congress could change the law tomorrow, the government’s lawyers could change their interpretation of it, or an appellate court decision could impose a new understanding of it.
throw1234567891 11 hours ago | parent
> Congress could change the law tomorrow, or the government’s lawyers could change their interpretation of it, or an appellate court decision could impose a new understanding of it.

And then it becomes legal. It’s not an empty argument; it simply means “someone higher than you took the initiative.”
JumpCrisscross 12 hours ago | parent
> Anthropic, who argue AI is a fundamentally different technology

They’re arguing it’s a service. I think Aramark could refuse to contract to provide employees to the U.S. military for a campaign on Chicago.
jpadkins 11 hours ago | parent
Congress passing a changed law, and it holding up in court, is how it’s supposed to work. The people’s reps (with specifics interpreted by the courts) should be the ones who set the standard, as a country, on what types of weapons systems we want to deploy versus what is immoral. Precedents include nerve agent weapons, landmines, etc.

Honestly, Anthropic’s stance feels like an oligarch’s stance: “We have better morals than the American people; we will decide what weapons systems the military will or won’t use.” It’s perfectly understandable if they don’t want to sell weapons to the government. That is a noble thing. But Anthropic wanted that DoW money and also wanted to determine what is moral vs. not.
bigyabai 12 hours ago | parent
> rather than some immediate plan to do things the contract prohibits.

It's not like any legally questionable kidnappings or bombing campaigns were being planned at the time, right?