| ▲ | rustyhancock 14 hours ago |
| I know this is necessarily an unpopular opinion, but I think HN in particular, as a crowd, is very vulnerable to the halo effect and groupthink when it comes to Anthropic. Even being generous, they are only marginally a "better actor" than OpenAI. However, we are so enthralled by their product that we let that view bleed over into their ethics. Saying you want your tools used in line with the US Constitution, within the US, on one particular point, is hardly a high moral bar; it's self-preservation. All Anthropic have said is: 1. No mass domestic surveillance of Americans. 2. No fully autonomous lethal weapons, yet. My goodness, is that what passes for a high moral standard? Is anything that doesn't hit those very carefully worded points really not "evil"? |
|
| ▲ | JauntyHatAngle 14 hours ago | parent | next [-] |
| Let's generalise a bit more here - every company at any time could completely heel-turn and do awful things. Even my favourite private companies (e.g. Valve) have done things that I would consider evil. However, I don't think I'm alone in generally wanting to do good while also wanting convenience. I know that really every bit of consumption I do is probably negative in some way, and there is no real "apolitical" action anyone can take. But can't I at least get annoyed and take my money somewhere else for the short time another company is doing it better? Yes, if OpenAI suddenly leaps forward with Codex and pounds Anthropic into the dust, I'll likely switch back despite my moral grievances. But in a situation where I can get mildly motivated to jump over for something that - to me - seems like better morality without much cost to me, I'll do it. |
| |
| ▲ | bluGill 11 hours ago | parent [-] | | There are no universal morals. Anything - everything - you think is evil, some culture (possibly in history) thinks is good. I can't even think of something good that I'm confident everyone would agree is good. There are some people (companies are run by people) that are so bad I boycott them. Most bad actors I treat like society cannot work without accepting them anyway. | | |
| ▲ | codechicago277 an hour ago | parent [-] | | There's no possibility of, or need for, morality to be universal, and societies have improved their ethics many times throughout history. Your take is nihilistic and presupposes that moral progress isn't possible, even though we've seen objective moral progress many times. |
|
|
|
| ▲ | earthnail 14 hours ago | parent | prev | next [-] |
| Well, they did stand up to the US administration and lost a lot of money in the process. That takes courage. They were clearly being bullied into compliance, and they stood their ground. You can see the significance of this if you look at German Nazi history: if more companies had stood up to the administration, the Nazi state would have been significantly harder to build. In my opinion, what Anthropic did is not a small thing at all. |
| |
| ▲ | rustyhancock 12 hours ago | parent [-] | | The comment I replied to said that they believed OpenAI would allow "AGI to be used for truly evil purposes". By contrast, Anthropic wouldn't? Yet Anthropic's stance is only two narrow restrictions. As I said, are those two things the only evil things possible? If not, why do people on HN think Anthropic would not allow evil usage? My hypothesis is a halo effect: we are so enthralled by Claude's performance that some struggle to rationally assess what Anthropic has actually done. Yes, it's no small thing to say no to the Trump administration, but that does not mean they haven't said yes to, or otherwise facilitated, other evils. In fact, to me the statements from Anthropic seem to make clear they are okay with many evils. | | |
| ▲ | thunky 10 hours ago | parent [-] | | > Yet Anthropic's stance is only two narrow restrictions. Really, I think Anthropic should have a single restriction: to not assist with illegal or unconstitutional activities. If automated killing etc. is illegal, then it would be covered by that one rule. I don't think Anthropic should be in the business of deciding what is "evil". | | |
| ▲ | toss1 8 hours ago | parent [-] | | If each of us, individually or as corporations, should not be in the business of deciding what is "evil", who should be in that business? Everyone SHOULD continuously consider, decide, and live by moral judgements and codes they internalize, and use them to make choices in life. This aspect of life should NEVER be outsourced. Of course, learn from and use codes others have developed and lived by, but ALWAYS consider deeply how they work in your situation and life. (And no, I do NOT mean use situational ethics; I mean each person considering, choosing, and internalizing the codes by which they live.) So, yes, Anthropic and anyone else building products absolutely should be deciding for themselves what they will build, for what purposes it is fit to be used, and telling others about those purposes. For products like AI, this absolutely includes deciding what is "evil" and preventing such uses. If customers find such restrictions are not what they want, they ARE FREE to not use the product. | | |
| ▲ | thunky 2 hours ago | parent [-] | | > If each of us, individually or as corporations, should not be in the business of deciding what is "evil", who should be in that business? This is easy imo. Two methods: 1. The law. It should not be legal for the US Govt to murder people at will. If it is legal, then of course they'll use tools to make it easier. Maybe AI, maybe Clippy. If they can't use AI, then they'll fall back to some other way of doing it, as they've already been doing for several years. 2. Voting. For representatives that actually represent us and have our interests in mind rather than their own corrupt interests. And voting with our wallets against companies that do legal but morally bankrupt things. Of course we're failing both of these hard right now. But imo the answer is not to give up and let corporations make the rules. In other words, if it were legal for an ordinary citizen to murder anyone they wanted, of course they'd use Google Maps to help them do it. We don't put restrictions on how people can use Google Maps; instead, we've made murder illegal. We should be doing the same thing here. |
|
|
|
|
|
| ▲ | jacquesm 12 hours ago | parent | prev | next [-] |
| It's not high. But it is higher. |
| |
| ▲ | rustyhancock 12 hours ago | parent [-] | | We'll take anything we can get right now. I agree. Although we shouldn't let that lead us to misjudge what we are actually getting. | | |
| ▲ | jacquesm 12 hours ago | parent [-] | | As a rule when there are large companies and/or billionaires involved you are in for trouble. |
|
|
|
| ▲ | ekianjo 11 hours ago | parent | prev [-] |
| Let's not forget they also lobby to forbid models from China and pretend that distillation is stealing. But somehow, just because they said no on two points, the majority of HN folks think of them as virtuous. |