chemotaxis 10 hours ago

That's a strawman alternative.

The problem with "helping the most people in the most effective way" is that these two goals are often at odds with each other.

If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits - and that the math works out to the latter case being objectively better. This leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness of the people living in the far future instead". Preferably by working on AI or blogging about AI.

And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.

glenstein 9 hours ago

>it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

I think you intended to reproduce utilitarianism's "repugnant conclusion". But strictly speaking, I think the real-world dynamics you mentioned don't map onto it. What's abstract in your examples is our grasp of what the impact means to the people being helped; it doesn't follow that the help itself is a fractional change spread across a large population. The beneficiaries of UNICEF are completely invisible to me (in fact, I had to look up what UNICEF even does), but its work is still critically important to those who benefit from it: things like food for severe malnutrition and maternal health support absolutely are pivotal, make-or-break differences in the lives of the people who receive them.

So as applied to global initiatives with nearly anonymous beneficiaries, I don't think your examples actually reproduce the so-called repugnant conclusion, though it's still perfectly fair as a challenge to the utilitarian calculus EA relies on. I just think it cashes out as a conceptual problem, and the uncomfortable truth for aspiring EA critics is that EA's stock recommendations are not that different from Carter Foundation or UN-style initiatives.

The trouble is EA's judgment of global catastrophic risks, which, interestingly, I think does map onto your criticism.