thunky 10 hours ago
> Yet Anthropic's stance is only two narrow restrictions. Really, I think Anthropic should have a single restriction: to not assist with illegal or unconstitutional activities. If automated killings etc. are illegal, then they would be covered by that one rule. I don't think Anthropic should be in the business of deciding what is "evil".
toss1 8 hours ago | parent
If each of us, individually or as corporations, should not be in the business of deciding what is "evil", who should be in that business?

Everyone SHOULD continuously consider, decide, and live by moral judgements and codes they internalize, and use them to make choices in life. This aspect of life should NEVER be outsourced. Of course, learn from and use codes others have developed and lived by, but ALWAYS consider deeply how they work in your situation and life. (And no, I do NOT mean use situational ethics; I mean each person considering, choosing, and internalizing the codes by which they live.)

So, yes, Anthropic and anyone else building products absolutely should be deciding for themselves what they will build, for what purposes it is fit to use, and telling others about those purposes. For products like AI, this absolutely includes deciding what is "evil" and preventing such uses. If customers find such restrictions are not what they want, they ARE FREE to not use the product.