nonethewiser 11 hours ago

Man this is such a loaded term. Even in a comment section about the origins of it, everyone is silently using their own definition. I think all discussions of EA should start with a definition at the top. I'll give it a whirl:

>Effective altruism: donating with a focus on helping the most people in the most effective way, guided by evidence, careful reasoning, and personal values.

What happens in practice is a lot worse than this may sound at first glance, so I think people are tempted to change the definition. You could argue EA in practice is just a perversion of the idea in principle, but I don't think it's even that. I think the initial assumption that that definition is good and harmless is just wrong. It's basically just spending money to change the world into what you want. It's similar to regular donations, except you're way more invested and strategic in advancing the outcome. It's going to invite all sorts of interests and be controversial.

ngruhn 10 hours ago | parent | next

> I think the initial assumption that that definition is good and harmless is just wrong.

Why? The alternative is to donate to sexy causes that make you feel good:

- disaster relief, which you then forget about once it's no longer in the news

- school uniforms for children who can't even do their homework because they can't afford lighting at home

- a literal team of full-time bodyguards for the last member of some species

chemotaxis 10 hours ago | parent

That's a strawman alternative.

The problem with "helping the most people in the most effective way" is that these two goals are often at odds with each other.

If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits - and that the math works out to the latter case being objectively better. Which leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness of the people living in the far future instead". Preferably by working on AI or blogging about AI.
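To make that concrete, here's a minimal sketch (in Python, with entirely made-up numbers) of the kind of expected-value arithmetic this framing relies on. Every figure below is hypothetical; the point is just that the comparison only works if you accept that benefits this different in kind can be reduced to one number:

    # A naive expected-value comparison of two donation options, using
    # made-up numbers purely for illustration. The framework assumes all
    # benefits are commensurable on a single scale ("utility").

    # Local cause: keeps a shelter open; a large benefit to a few people.
    local_people_helped = 30
    local_benefit_per_person = 50.0    # hypothetical utility units

    # Global cause: a tiny marginal benefit spread over many people.
    global_people_helped = 1_000_000
    global_benefit_per_person = 0.002  # hypothetical utility units

    local_utility = local_people_helped * local_benefit_per_person
    global_utility = global_people_helped * global_benefit_per_person

    print(f"local:  {local_utility:.0f} utility units")   # 1500
    print(f"global: {global_utility:.0f} utility units")  # 2000

    # Under this calculus the global cause "wins", but only because we
    # assumed the two kinds of benefit can be priced in the same units.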

And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.

glenstein 9 hours ago | parent

>it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

I think you intended to reproduce utilitarianism's "repugnant conclusion". But strictly speaking, I don't think the real-world dynamics you mention map onto it. What's abstract in your examples is our grasp of what the impact means to the people being helped, but it doesn't follow that global causes amount to fractional changes spread across large populations. The beneficiaries of UNICEF are completely invisible to me (in fact I had to look up what UNICEF even does), but the help is still critically important to those who receive it: things like food for severe malnutrition and maternal health support absolutely are pivotal, make-or-break differences in the lives of the people who get them.

So as applied to global initiatives with nearly anonymous beneficiaries, I don't think they actually reproduce the so-called repugnant conclusion, though it's still perfectly fair as a challenge to the utilitarian calculus EA relies on. I just think it cashes out as a conceptual problem, and the uncomfortable truth for aspiring EA critics is that EA's stock recommendations are not that different from Carter Center or UN-style initiatives.

The trouble is their judgment of global catastrophic risks, which, interestingly, I think does map onto your criticism.

pfortuny 10 hours ago | parent | prev | next

On one hand, it is an example of the total-order mentality that permeates society, and businesses in general: "there exists a single optimum". That is wrong on so many levels, especially with regard to charities. ETA: the real world has many optima, not a single optimum.

Then it easily becomes a slippery slope of “you are wrong if you are not optimizing”.

ETA: it is very harmful to oneself and to society to think that one is obliged to “do the best”. The ethical rule is “do good and not bad”, no more than that.

Finally, it is a recipe for whatever you want to call it: fascism, communism, totalitarianism… "There is an optimum way, hence if you are not doing it, you must be corrected".

Lammy 10 hours ago | parent | prev

It's a layer above even that: it's a way to justify doing unethical shit to earn obscene amounts of money, by convincing yourself (and attempting to convince others) that the ends justify the means, because the entire world will somehow be a better place if you're allowed to become Very Rich.

Anyone who has to call themselves altruistic simply isn't lol