sailfast 2 days ago

This all works only if you assume that any action the government takes will in fact be lawful. The assumption here is that the Pentagon is obeying the law, and that any unlawful use would surface through the normal channels: the same reporting, violation, and whistleblower processes as any other illegal order.

The Pentagon does not want Google or anyone else deciding what they can and cannot use their AI for. They’re saying we won’t break the law, and that should be enough for you - pinky swear!

And that seems to be enough for Google. Though I might request some agentic auditing capability to verify compliance rather than take them at their word.

Next step: is Google FedRAMP-authorized yet for this, and for classified enclaves? Or do they also go through Palantir's AI vehicle?

gwbas1c 2 days ago | parent

I look at this as a case of "pick your battles."

In war, civilians can't audit every move the military makes. (It's impractical, both for reacting in a timely way and for keeping secrets from the enemy.)

If the military doesn't work with Google, they will work with someone else who might not apply the same pressure about the practical limits of AI. Or, even worse, our enemy might use a significantly better AI than we do.

My hope is that "war" shifts to AI vs. AI, machine vs. machine. Calling people who work on AI for wartime purposes immoral is fundamentally immoral when AI in war replaces the need for human casualties.

mitthrowaway2 2 days ago | parent | next

As a private contractor, you can sign a contract to deliver pizza or bandages to US soldiers, but also put into the contract that you won't deliver lethal weapons, if that's your own ethical stance. You don't need to audit every move of the military, just the stuff you're doing at their request.

And sure, maybe that just means the military decides to take their business elsewhere. But if you have confidence that your service is the best, then you sell based on that.

eks391 2 days ago | parent

I think you and your parent both have great arguments. Your pizza deliverer chose his battle, which was to deliver only pizza, not materiel, and that's commendable. Your parent seems to want to delegate death from humans to AI, which strikes me as a simplification that won't play out exactly that way, but the premise of deciding whether that is a battle worth picking is valid. And if you want to blur the line between the analogy and the literal: if you choose to fight every battle, there isn't enough human bandwidth to do it all, and delegating to AI could help. That last sentence is looser, so I won't defend it, but I couldn't help tying "pick your battles" to literal battles. Perhaps a form of dark humor there.

mitthrowaway2 2 days ago | parent

The broader context of this is that Anthropic did put ethical restrictions into their contract. A bunch of AI employees industry-wide called for solidarity with Anthropic. But then OpenAI, and now Google, defected against this equilibrium and signed contracts agreeing to "any lawful use".

The GP was arguing, first, that it's not practically possible to put limitations in such a contract, because you can't audit everything the military does. But that argument is bunk: not only do you not have to audit everything the military does (only what you, as a contractor, are asked to do), Anthropic actually signed exactly such a contract, and the DoW did indeed run into those restrictions and get frustrated by them.

Their second argument, that if Google didn't agree then someone less scrupulous would take their place and exert less pushback, is also bunk. Google's pushback is as low as it gets: you can't sign a contract to do something illegal anyway, so agreeing to "any lawful use" is the loosest possible contract anybody can sign. And given that they defected in this prisoner's dilemma, they are already the less scrupulous party, doing the work that Anthropic would not.
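To make the prisoner's-dilemma framing concrete, here's a toy payoff table in Python. The numbers are made up purely to illustrate the structure; they aren't drawn from anything in the actual contracts:

    # Illustrative prisoner's-dilemma payoffs for two AI labs deciding whether
    # to hold the line on contract restrictions ("restrict") or drop them
    # ("defect"). Payoff values are invented to show the structure of the game.
    PAYOFFS = {
        # (lab_a_choice, lab_b_choice): (lab_a_payoff, lab_b_payoff)
        ("restrict", "restrict"): (3, 3),  # both keep ethical terms; contracts split on merit
        ("restrict", "defect"):   (0, 5),  # the defector wins the unrestricted contract
        ("defect",   "restrict"): (5, 0),
        ("defect",   "defect"):   (1, 1),  # race to the bottom: same terms, no leverage left
    }

    for (a, b), (pa, pb) in PAYOFFS.items():
        print(f"A={a:8s} B={b:8s} -> A gets {pa}, B gets {pb}")

    # Defecting strictly dominates for each lab (5 > 3 and 1 > 0), so the
    # restrictive equilibrium only holds if everyone cooperates -- which is
    # exactly the industry solidarity described above breaking down.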

ajam1507 2 days ago | parent | prev

It shouldn't be the role of a company to hold its nose and work with the government; it should be the government's role to inspire confidence that what it is doing with the technology is ethical.

> Calling people who work on AI for wartime purposes immoral is fundamentally immoral when AI in war replaces the need for human casualties.

This is naive. It will only reduce casualties for the side with the AI, and will very likely embolden countries to fight more wars.