mitthrowaway2 · 2 days ago
The broader context of this is that Anthropic did put ethical restrictions into its contract. A number of AI employees industry-wide called for solidarity with Anthropic. But then OpenAI, and now Google, defected against this equilibrium and signed contracts agreeing to "any lawful use."

The GP argued, first, that it's not practically possible to put limitations on such a contract, because you can't audit everything the military does. But that argument is bunk: not only do you not have to audit everything the military does (only what you, as a contractor, are asked to do), but Anthropic signed exactly such a contract, and the DoW did indeed run into those restrictions and was frustrated by them.

Their second argument, that if Google didn't agree then someone less scrupulous would take its place and exert less pushback, is also bunk. Google's pushback is as low as it gets: you can't sign a contract to do something illegal, so agreeing to "any lawful use" is the loosest possible contract anybody can sign. And given that Google defected in this prisoner's dilemma, it is already the less scrupulous party doing the work that Anthropic would not.