rustyhancock 12 hours ago

The comment I replied to said that they believed OpenAI would allow "AGI to be used for truly evil purposes".

By contrast, Anthropic wouldn't? Yet Anthropic's stance amounts to only two narrow restrictions. As I said, are those two things the only evil things possible?

If not, why is it that people on HN think Anthropic would not allow evil usage?

My hypothesis is a halo effect: we are so enthralled by Claude's performance that some struggle to rationally assess what Anthropic has actually done.

Yes, it's no small thing to say no to the Trump administration, but that does not mean they haven't said yes to, or otherwise facilitated, other evils.

In fact, to me the statements from Anthropic seem to make clear they are okay with many evils.

thunky 10 hours ago

> Yet Anthropic's stance is only two narrow restrictions.

Really, I think Anthropic should have a single restriction: do not assist with illegal or unconstitutional activities. If automated killing etc. is illegal, then it would be covered by that one rule.

I don't think Anthropic should be in the business of deciding what is "evil".

toss1 8 hours ago

If each of us, individually or as corporations, should not be in the business of deciding what is "evil", who should be in that business?

Everyone SHOULD continuously consider, decide, and live by moral judgements and codes they internalize and use to make choices in life.

This aspect of life should NEVER be outsourced. Of course, learn from and use codes others have developed and lived by, but ALWAYS consider deeply how they work in your own situation and life.

(And no, I do NOT mean situational ethics; I mean each person considering, choosing, and internalizing the codes by which they live.)

So, yes, Anthropic and anyone else building products absolutely should be deciding for themselves what they will build and what purposes it is fit for, and telling others about those purposes. For products like AI, this absolutely includes deciding what is "evil" and preventing such uses.

If customers find such restrictions are not what they want, they ARE FREE not to use the product.

thunky 2 hours ago

> If each of us individually or as corporations should not be in the business of deciding what is "evil", who should be in that business?

This is easy, imo. Two methods:

1. The law. It should not be legal for the US Govt to murder people at will. If it is legal, then of course they'll use tools to make it easier: maybe AI, maybe Clippy. If they can't use AI, they'll fall back to some other way of doing it, as they've already been doing for several years.

2. Voting. For representatives who actually represent us and have our interests in mind rather than their own corrupt interests, and voting with our wallets against companies that do legal but morally bankrupt things.

Of course, we're failing hard at both of these right now. But imo the answer is not to give up and let corporations make the rules.

In other words, if it were legal for a normal citizen to murder anyone they wanted, of course they'd use Google Maps to help them do it. We don't put restrictions on how people can use Google Maps; instead, we've made murder illegal. We should be doing the same thing here.