keiferski 11 hours ago

The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause." There is really no element of personal virtue in the way that virtue ethics has; it's just pure calculation.

It's the perfect philosophy for morally questionable people with a lot of money. Which is exactly who got involved.

That's not to say that all the work they're doing/have done is bad, but it's not really surprising that bad actors attached themselves to the movement.

nonethewiser 11 hours ago | parent | next [-]

>The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause."

I don't think this is a very accurate interpretation of the idea, even with how flawed the movement is. EA is about donating your money effectively, i.e. ensuring the donation gets used well. On its face, that's kind of obvious. But when you take it to an extreme you blur the line between "donation" and something else. It has selected for very self-righteous people. But the idea itself is not really about excusing you being a bad person, and the donation target is definitely NOT unimportant.

some_guy_nobel 11 hours ago | parent | next [-]

You claim OP's interpretation is inaccurate, yet it tracks perfectly with many of EA's most notorious supporters.

Given that contrast, I'd ask: what evidence do you have that OP's interpretation is incorrect, and what evidence that yours is correct?

RobinL 10 hours ago | parent | next [-]

> many of EA's most notorious supporters.

The fact they're notorious makes them a biased sample.

My guess is that for the majority of people interested in EA - the typical supporter, who is not super wealthy or well known - the two central ideas are:

- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's

- We should carefully decide which charities to give to, because some are far more effective than others.

That's pretty much it - essentially the message in Peter Singer's book: https://www.thelifeyoucansave.org/.

I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere.

Eddy_Viscosity2 7 hours ago | parent [-]

I would say the problem with EA is the "E". Saying you're doing 'effective' altruism is another way of saying that everyone else's altruism is wasteful and ineffective. Which of course isn't the case. The "E" might as well stand for "Elitist", given the vibe it gives off. All truly altruistic acts aim to be effective; otherwise it wouldn't be altruism, it would just be waste. That's not to say there is no waste in some altruistic acts, but I'm not convinced it's actually any worse than in EA. Given the fraud associated with some purported EA advocates, I'd say EA might even be worse. The EA movement reeks of the optimize-everything mindset of people convinced they are smarter than everyone else who just gives money to charity A, when they could have been 13% more effective by sending the money directly to this particular school in country B on the condition that they only spend it on X. The origins of EA may not be that, but that's what it has evolved into.

estearum 2 hours ago | parent [-]

A lot of altruism is quite literally wasteful and ineffective, in which case it's pretty hard to call it altruism.

> they could have been 13% more effective

If you think the difference between ineffective and effective altruism is a 13% spread, I fear you have not looked deeply enough into either standard altruistic endeavors or EA to have an informed opinion.

The gaps are actually astonishingly large and trivial to capitalize on (i.e. difference between clicking one Donate Here button versus a different Donate Here button).

The sheer scale of the spread is the impetus behind the entire train of thought.

socalgal2 36 minutes ago | parent | prev | next [-]

this feels like "the most notorious atheists/jews/blacks/whites/christians/muslims are bad, therefore all atheists/jews/blacks/whites/christians/muslims are bad".

cortesoft an hour ago | parent | prev | next [-]

Well, in order to be a notorious supporter of EA, you have to have enough money for your charity to be noticed, which means you are very rich. If you are very rich, it means you have to have made money from a capitalistic venture, and those are inherently exploitive.

So basically everyone who has a lot of money to donate has questionable morals already.

The question is: are the large donors to EA groups more or less 'morally suspect' than large donors to other charity types?

In other words, everyone with a lot of money is morally questionable, and EA donors are just a subset of that.

nl an hour ago | parent [-]

> you have to have made money from a capitalistic venture, and those are inherently exploitive.

You say this like it's fact beyond dispute, but I for one strongly disagree.

Not a fan of EA at all though!

btilly 3 hours ago | parent | prev | next [-]

The OP's interpretation is an inaccurate summary of the philosophy. But it is an excellent summary of the trap that people who try to follow EA can easily fall into. Any attempt to rationally evaluate charitable work can instead wind up rationalizing what one already wanted to do: settling for a convenient, self-aggrandizing "analysis" rather than a rigorous one.

An even worse trap is to prioritize a future utopia. Utopian ideals are dangerous. They push people towards "the ends justify the means". If the ends are infinitely good, there is no bound on how bad the "justified means" can be.

But history shows that imagined utopias seldom materialize. By contrast, the damage from the attempted means is all too real. That's why all of the worst tragedies of the 20th century started with someone who was trying to create a utopia.

EA circles have shown an alarming receptiveness to shysters who are trying to paint a picture of utopia. For example, look at how influential someone like Sam Bankman-Fried was able to become before his fraud imploded.

jandrese 11 hours ago | parent | prev [-]

It's like libertarianism. There is a massive gulf between the written goals and the actual actions of the proponents. It might be more accurately thought of as a vehicle for plausible deniability than an actual ethos.

glenstein 10 hours ago | parent [-]

The problem is that this creates a kind of epistemic closure around yourself, where you can't encounter such a thing as a sincere expression of it. I actually think your charge against libertarians is basically accurate. And I think it deserves a (limited) amount of time and attention directed at its core contentions, for what they are worth. After all, Robert Nozick considered himself a libertarian and contributed some important thinking on justice and retribution and equality and any number of subjects, and the world wouldn't be bettered by dismissing him with Twitter-style ridicule.

I do agree that things like EA and libertarianism have to answer for the in-the-wild proponents they tend to attract, but not to the point of epistemic closure in response to their subject matter.

Eisenstein 10 hours ago | parent [-]

When a term becomes loaded enough then people will stop using it when they don't want to be associated with the loaded aspects of the term. If they don't then they already know what the consequences are, because they will be dealing with them all the time. The first and most impactful consequence isn't 'people who are not X will think I am X' it is actually 'people who are X will think I am one of them'.

glenstein 10 hours ago | parent [-]

I think social dynamics are real and must be answered for, but I don't think any self-correction or lack thereof has anything to do with the subject matter, which can be understood independently.

I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements and they would have to be blind to ignore it. But the book is wrong for reasons intrinsic to its analysis and it would be catastrophic to treat that point as moot.

Eisenstein 9 hours ago | parent [-]

I am saying that those who actually believe something won't stick around and associate themselves with the original movement if that movement has taken on traits that they don't agree with.

glenstein 9 hours ago | parent [-]

You risk catastrophe if you let social dynamics stand in for truth.

Eisenstein 8 hours ago | parent [-]

You risk catastrophe if you ignore social indicators as a valid heuristic.

glenstein 10 hours ago | parent | prev | next [-]

I actually think I agree with this, but nevertheless people can refer to EA and mean by it the totality of sociological dynamics surrounding it, including its population of proponents and their histories.

I actually think EA is conceptually perfectly fine within its scope of analysis (once you start listing examples, e.g. mosquito nets to prevent malaria, I think they're hard to dispute), and the desire to throw out the conceptual baby with the bathwater of its adherents is an unfortunate demonstration of anti-intellectualism. I think it's like how some predatory pickup artists do the work of being proto-feminists (or perhaps more to the point, how actual feminists can nevertheless be people who engage in the very kinds of harms studied by the subject matter). I wouldn't want to make feminism answer for such creatures as definitionally built into the core concept.

klustregrif 11 hours ago | parent | prev | next [-]

> EA is about donating your money effectively

For most, it seems EA is an argument that, despite no charitable donations being made at all, and despite gaining wealth through questionable means, it's still all ethical because it's theoretically "just more effective" if the person continues to claim that they will, at some point in the far future, put some money toward hypothetical "very effective" charitable causes that never seem to materialize, and that of course shouldn't be pursued "until you've built your fortune".

Aunche 10 hours ago | parent [-]

If you're going to assign a discount rate to cash, you also need to assign a similar "discount rate" to future lives saved. Just like investments compound, giving malaria medicine and vitamins to kids who need them should produce at least as much positive compounding return.
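Aunche's point can be made concrete with a toy calculation (the 7% rates below are arbitrary placeholder assumptions, not claims about real market or humanitarian returns): if invested money compounds at some rate but a donation's impact compounds too, delaying the gift has no automatic advantage.

```python
# Toy sketch: compare "invest now, give later" against "give now, let the
# impact compound". The 7% rates are made-up placeholder assumptions.

def future_value(principal: float, rate: float, years: int) -> float:
    """Compound `principal` at `rate` per year for `years` years."""
    return principal * (1 + rate) ** years

donation = 1_000.0
years = 20

# Strategy A: invest the money for 20 years, then donate the grown sum.
invest_then_give = future_value(donation, 0.07, years)

# Strategy B: donate now; healthier, better-educated recipients generate
# their own compounding returns (assumed here to match the market rate).
give_now = future_value(donation, 0.07, years)

# With equal rates the strategies come out identical, so "build your
# fortune first" only wins if investments compound faster than impact does.
print(round(invest_then_give, 2), round(give_now, 2))
```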

ghurtado 10 hours ago | parent | prev [-]

I don't see anything in your comment that directly disagrees with the one that you've replied to.

Maybe you misinterpreted it? To me, it was simply saying that the flaw in the EA model is that a person can be 90% a dangerous sociopath, and as long as the other 10% goes (effectively) to charity, they are considered morally righteous.

It's the 21st century version of Papal indulgences.

phantasmish 11 hours ago | parent | prev | next [-]

I’m skeptical of any consequentialist approach that doesn’t just boil down to virtue ethics.

Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. It's so consistent that anyone who thinks they've figured it out needs to have a good hard think about it for several years before tentatively attempting action based on it, I'd say.

glenstein 10 hours ago | parent | next [-]

I partly agree with you but my instinct is that Parfit Was Right(TM) that they were climbing the same mountain from different sides. Like a glove that can be turned inside out and worn on either hand.

I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself kind of neutral on it, but I think if you treat moral agency as only extending so far as consequences you can reasonably estimate, there's a limit to your moral responsibility that's basically in line with what any other moral school of thought would attest to.

You still have cause-and-effect responsibility; if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.

jrochkind1 11 hours ago | parent | prev [-]

What does "virtue ethics" mean?

keiferski 10 hours ago | parent | next [-]

One of the three traditional approaches to ethics in European philosophy:

https://en.wikipedia.org/wiki/Virtue_ethics

EA being a prime example of consequentialism.

phantasmish 10 hours ago | parent [-]

… and I tend to think of it as the safest route to doing OK at consequentialism, too, myself. The point is still basically good outcomes, but it short-circuits the problems that tend to come up when one starts trying to maximize utility/good, by saying “that shit’s too complicated, just be a good person” (to oversimplify and omit the “draw the rest of the fucking owl” parts)

Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.

TimorousBestie 10 hours ago | parent | prev [-]

The best statement of virtue ethics is contained in Alasdair MacIntyre's _After Virtue_. It's a metaethical foundation that argues that both deontology and utilitarianism are incoherent and have failed to explain what some unitary "the good" is, and that ancient notions of "virtues" (some of which have filtered down to the present day) can capture facets of that good better.

The big advantage of virtue ethics from my point of view is that humans have unarguably evolved cognitive mechanisms for evaluating some virtues (“loyalty”, “friendship”, “moderation”, etc.) but nobody seriously argues that we have a similarly built-in notion of “utility”.

glenstein 10 hours ago | parent [-]

Probably a topic for a different day, but it's rare to get someone's nutshell version of ethics so concise and clear. For me, my concern would be letting the evolutionary tail wag the dog, so to speak. Utility has the advantage of sustaining moral care toward people far away from you, which may not convey an obvious evolutionary advantage.

And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.

TimorousBestie 9 hours ago | parent [-]

MacIntyre doesn't really involve himself with the evolutionary parts. He tends to be oriented toward historical/social/cultural explanations instead. But yes, this is an issue that any virtue ethics needs to handle.

> Utility has the advantage of sustaining moral care toward people far away from you

Well, in some formulations. There are well-defined and internally consistent choices of utility function that discount or redefine “personhood” in anti-humanist ways. That was more or less Rawls’ criticism of utilitarianism.

anonymousiam 2 hours ago | parent | prev | next [-]

EA should be bound by some ethical constraints.

Sam Bankman-Fried was all in with EA, but instead of putting his own money in, he put everybody else's in.

Also his choice of "good causes" was somewhat myopic.

Aunche 10 hours ago | parent | prev | next [-]

> It's the perfect philosophy for morally questionable people with a lot of money.

The perfect philosophy for morally questionable people would just be to ignore charity altogether (e.g. Russian oligarchs) or to strategically launder their reputations through charity (e.g. Jeffrey Epstein). SBF would fall into that second category as well.

1vuio0pswjnm7 4 hours ago | parent | prev | next [-]

There's the implication that some altruism may not be "effective".

btilly 3 hours ago | parent | next [-]

What makes it absurd?

If I want to give $100 to charity, some of the places I can donate it to will do less good for the world than others. For example, Make-A-Wish and Kids Wish Foundation sound very similar, but a significantly higher portion of money donated to the former goes to kids than does money donated to the latter.

If I'm donating to that cause, I want to know this. After evaluating those two charities, I would prefer to donate to the former.

Sure, this may offend the other one. But I'm absolutely OK with that. Their ability to be offended does not excuse their poor results.
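The comparison btilly describes boils down to simple arithmetic; here is a minimal sketch (the program-expense ratios are hypothetical placeholders, not the real figures for either charity):

```python
# Hypothetical program-expense ratios; placeholders, NOT real charity data.

def dollars_to_cause(donation: float, program_ratio: float) -> float:
    """Portion of a donation that actually reaches the cause."""
    return donation * program_ratio

# Two similar-sounding charities, very different effectiveness (made up).
charity_a = dollars_to_cause(100.0, 0.75)  # ~75% reaches kids
charity_b = dollars_to_cause(100.0, 0.30)  # ~30% reaches kids

# The same $100 gift does 2.5x the good at charity A.
print(charity_a, charity_b)
```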

1vuio0pswjnm7 2 hours ago | parent | prev [-]

https://www.sierraclub.org/sierra/trouble-algorithmic-ethics...

"But putting any probability on any event more than 1,000 years in the future is absurd. MacAskill claims, for example, that there is a 10 percent chance that human civilization will last for longer than a million years."

nxor 11 hours ago | parent | prev | next [-]

SBF has entered the chat

AgentME 10 hours ago | parent [-]

I'm tired of every other discussion about EA online assuming that SBF is representative of the average EA member, instead of being an infamous outlier.

downrightmike 10 hours ago | parent | prev [-]

It's basically the same thing as the church selling indulgences. It didn't matter if you stole the money; pay the church and go to heaven.