ern_ave 4 hours ago

> US Department of War wants unfettered access to AI models

I think the two of you might be using different meanings of the word "safety."

You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent.

The other meaning of "safety" is alignment: the AI does what you want it to do (subtly different from "does what it's told").

I don't think Anthropic or any corporation can keep us safe from governments using AI. Governments have the resources to create AIs that kill, no matter what Anthropic does with Claude.

So for me, the real safety issue is alignment. Even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live.
> US Department of War wants unfettered access to AI models I think the two of you might be using different meanings of the word "safety" You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent. The other meaning of "safety" is alignment - meaning, the AI does what you want it to do (subtly different than "does what it's told"). I don't think that Anthropic or any corporation can keep us safe from governments using AI. I think governments have the resources to create AIs that kill, no matter what Anthropic does with Claude. So for me, the real safety issue is alignment. And even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live. | ||