srvo 3 hours ago

Ethics often fold in the face of commercial pressure.

The Pentagon is thinking [1] about severing ties with Anthropic because of its terms of use, and in every prior case we've reviewed (I'm the Chief Investment Officer of Ethical Capital), this kind of pressure has ended with the ethics policy being deleted or rolled back.

Corporate strategy is (by definition) a set of tradeoffs: things you do, and things you don't do. When Google (or Microsoft, or whoever) rolls back an ethics policy under pressure like this, what it reveals is that ethical governance was a nice-to-have, not a core part of their strategy.

We're happy users of Claude for similar reasons (the perception that Anthropic has a better handle on ethics), but companies always find new and exciting ways to disappoint you. I really hope that Anthropic holds fast, and can serve in future as a case in point that the Public Benefit Corporation is not a purely aesthetic form.

But you know, we'll see.

[1] https://thehill.com/policy/defense/5740369-pentagon-anthropi...

DaKevK 3 hours ago | parent | next [-]

The Pentagon situation is the real test. Most ethics policies hold until there's actual money on the table. PBC structure helps at the margins but boards still feel fiduciary pressure. Hoping Anthropic handles it differently but the track record for this kind of thing is not encouraging.

Willish42 2 hours ago | parent | prev [-]

I think many used to feel that Google was the standout ethical player in big tech, much like we currently view Anthropic in the AI space. I also hope Anthropic does a better job, but seeing how quickly Google folded on its ethics, despite strong prior commitments against using AI for weapons and surveillance [1], I do not have a lot of hope, particularly given the current geopolitical situation the US is in. Corporations tend to support authoritarian regimes during weak economies, because authoritarianism can be really great for profits in the short term [2].

Edit: the true "test" will really be whether Anthropic can maintain its AI lead _while_ holding to ethical restrictions on its usage. If Google and OpenAI can surpass them, or stay closely behind without the same ethical restrictions, the outcome for humanity will still be very bad. Employees at these places can also vote with their feet, and it does seem like a lot of folks want to work at Anthropic over the alternatives.

[1] https://www.wired.com/story/google-responsible-ai-principles... [2] https://classroom.ricksteves.com/videos/fascism-and-the-econ...