snickerbockers 2 hours ago
It doesn't harm national security, but only so long as it's not in the supply chain. The government can't have Lockheed putting Anthropic's products into a fighter jet when Anthropic has already said its products can refuse to carry out certain orders on their own autonomous judgement.
praxulus an hour ago
The government can refuse to buy a fighter jet that runs software it doesn't want. But is it really reasonable to refuse to buy a fighter jet because somebody at Lockheed who works on a completely unrelated project uses Claude to write emails?
8n4vidtmkvmk 26 minutes ago
That's not what Anthropic said. They said their products won't fire autonomously, not that they will refuse when given an order by a human.
9dev 32 minutes ago
I’m not sure whether you’re deliberately choosing not to understand the problem. It’s not just that Lockheed can’t put Anthropic AI in a fighter jet cockpit; it’s that a random software engineer working at Lockheed on their internal accounting system is no longer allowed to use Claude Code, for no reason at all. A supply chain risk is using Huawei network equipment for military communications. This is just spiteful retaliation against a company that refuses to throw its values overboard when the government says so.