whatsupdog 8 hours ago [flagged]
jackp96 8 hours ago
Any documentation regarding the claim about breaking their contract? I haven't heard that. Regardless, as someone who works with these models daily (and with company leadership that loves AI more than it understands it), Anthropic is absolutely right to say that the military shouldn't be allowed to use it for lethal, autonomous force.
roxolotl 8 hours ago
The United States has freedom of speech, and the Supreme Court has ruled that money is speech. A company can always direct its money, i.e. its speech, however it likes with regard to the government. Can it be sued for breach of contract? Sure. Is it a supply chain risk? Absolutely not.
ImPostingOnHN 8 hours ago
> They are a "supply chain risk" if they can willy-nilly break their contract with US govt and enforce arbitrary rules to service.

It is the US govt that seeks to break its contract with Anthropic. The contract they signed included the safeguards, so they were mutually agreed upon. These safeguards against fully autonomous killbots and AI surveillance of US citizens were known before signing. The conflict now exists because the US govt regrets what it agreed to in the contract.