earthnail 14 hours ago
Well, they did stand up to the US administration and lost a lot of money in the process. That takes courage. They were clearly being bullied into compliance, and they stood their ground. You can see the significance of this if you look at German Nazi history: if more companies had stood up to the administration, the Nazi state would have been significantly harder to build. In my opinion, what Anthropic did is not a small thing at all.
rustyhancock 12 hours ago | parent
The comment I replied to said that they believed OpenAI would allow "AGI to be used for truly evil purposes". By contrast, Anthropic wouldn't? Yet Anthropic's stance amounts to only two narrow restrictions. As I said, are those two things the only evils possible? If not, why do people on HN think Anthropic would not allow evil usage? My hypothesis is a halo effect: we are so enthralled by Claude's performance that some struggle to rationally assess what Anthropic has actually done. Yes, it's no small thing to say no to the Trump administration, but that does not mean they haven't said yes to, or otherwise facilitated, other evils. In fact, to me the statements from Anthropic seem to make clear they are okay with many evils.