
Philosopher: I need you to fuck a dog.

EA: Done

Philosopher: No, wait, it's gotta be a dead dog.

EA: Hold on a second... done.

Philosopher: Great, this will be a really useful thought experiment for philosophy.

EA: A what for what?


I am skeptical of EA and utilitarianism in general, for various unoriginal reasons... but I'm also a GiveWell donor. They're great! I don't think you need to be a utilitarian, or subscribe to any particular philosophy, to think "money should be given to charities that prove they are spending it well", and that seems to be their basic operating principle, so good on 'em.

I sometimes wonder if someone at GiveWell, through a utilitarian calculation, once said: "If we start supporting the weird causes EA gets into, we will start losing donors. So we're obligated to support good, popular causes as effectively as possible, to ensure that people who aren't necessarily true believers in EA help us continue our good work." And if they did, I'd be fine with that; it's working.


I think an important part of EA is getting people to think about other people outside their own country. Years ago, when I was first starting to work in infectious disease research, I would occasionally have people tell me I should be doing cancer research instead of ID research because cancer was a "real problem". Infectious diseases were not something Americans thought about, aside from HIV or something that makes you sick enough to have to stay home from work. Cancer, on the other hand, was killing people. Over the next several years, probably due to the Gates Foundation's advocacy, Americans came to understand that infectious disease is a real problem.

It's also good that they're pushing "international health" to be efficient. Malaria and worms are terrible scourges, and they can severely affect kids' development. There has been a tendency for international giving to focus on things that make the donors feel good over stuff that helps the recipients. Charities loved to give baby incubators to hospitals in low-income countries. These incubators took up space, required electricity that wasn't always reliable, couldn't easily be repaired due to a lack of parts and trained technicians, and weren't used that often because there weren't many babies that needed to be in an incubator. If those charities wanted to help preterm babies, they would be better off treating malaria in pregnancy, which can lead to preterm birth and all sorts of negative outcomes.


Great article, Jesse. I have a jumble of thoughts.

First of all, I used to love EA. And then longtermism came around, and now I'm pretty disgusted by it. The best critique I saw is that somehow a bunch of tech nerds convinced themselves that their hobby of studying AI was _waaaaay_ more important than saving starving people and was, in fact, the most important thing anyone on earth could be doing. Yeah, that kind of hubris rubs me the wrong way and reeks of intelligent confirmation bias. The fact that MacAskill, Toby Ord, et al. have all jumped on board just sours me on the whole endeavor.

Having said that, like you, I still donate significant amounts of money to _certain_ EA causes (currently GiveDirectly). You didn't mention this, but it's worth highlighting that GiveWell has intentionally decided to stay focused on global health; they've eschewed longtermist causes.

There was a really good episode of Clearer Thinking a while back that laid out the pros and cons of EA, and it was nuanced and healthy and grown-up. It highlighted some of the other problems I've had, such as intense guilt at not living up to the ideal (I can't seem to get over emotional eating enough to be vegan, for example), to the point where I just stopped giving when I was going through more or less a mental breakdown. Oddly, I'm a better person when I'm more moderate. I'm healthier now, giving more, and failing in absolute utilitarian terms (e.g., I still eat animal products).

But, as you point out, that's a utilitarian issue and doesn't mean one shouldn't donate to GiveWell or GiveDirectly.


I think arguments about the repugnant-conclusion (prison-planets) scenario are flawed and fairly easy to rebut from a utilitarian perspective.

The reason we find the scenario repugnant is that we imagine such planets to be filled with suffering. But assuming we are talking about hedonic utilitarianism (i.e., we measure utility in terms of positive and negative emotional experiences), lives on those planets would need to be mostly pleasant; otherwise, creating such planets would not increase utility. We obfuscate this by talking about scenarios where life is "barely tolerable". Toleration of suffering is not a positive experience. To increase utility, these lives would need to be "barely happy".

It is then true that you could increase utility by creating more and more lives that are barely happy (say, 50.5% happy and 49.5% miserable). But it also follows that if improved material conditions could enhance the quality of those lives to, say, 60% happy and 40% miserable, that would justify reducing the population by up to 95% in order to achieve those improved conditions (because each individual now experiences 20X as many units of utility during their lifetime).
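To make that arithmetic explicit (treating net utility per life as the fraction of pleasant experience minus the fraction of miserable experience, using the percentages above):

\[
u_{\text{barely happy}} = 0.505 - 0.495 = 0.01, \qquad
u_{\text{improved}} = 0.60 - 0.40 = 0.20,
\]

\[
\frac{u_{\text{improved}}}{u_{\text{barely happy}}} = 20
\quad\Longrightarrow\quad
N \cdot u_{\text{barely happy}} = \frac{N}{20} \cdot u_{\text{improved}},
\]

so the population can shrink by up to $1 - \tfrac{1}{20} = 95\%$ while total utility stays the same.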

In the end, most philosophical problems like this flow from the fact that there is no purely objective way to quantify hedonic utility or compare qualitatively different emotional experiences. We have to make subjective judgements about, for example, how many barely pleasant lives generate the same utility as one deeply fulfilling life, or whether providing a small amount of pleasure to large numbers of people could ever justify imposing immense suffering on one person. But there is nothing that forces us to make these judgements in ways that lead to morally repugnant conclusions.


"I’d love for Aleem to list some belief systems that bad faith actors cannot easily hijack."

Very right.

Overall I agree that GiveWell is generally good, but I have deep reservations about EA and its acolytes/adherents, particularly of the Singer/MacAskill bent.

The fundamental issue is that an ethical theory that says some child in the Congo with no connection to you should be just as morally relevant to you as your neighbor or nephew simply isn't going to work for typical, well-functioning humans.

It is a viewpoint that works if you abstract yourself out of society and view the world hyper-rationally, as if you were a god or an alien on the moon. But from that same viewpoint there is no reason to act ethically whatsoever: humans are just a big ant hill, with as much moral value as ants, and there is no reason to care about individual humans, or even groups of them, at all.

It is an ethical philosophy that balances on a knife's edge between severe mental dysfunction and amoralism: abstract away almost everything that makes us moral, but don't take the final step and abstract away the unique moral value of humans and animals.

Look, I've got no problem with helping people in Africa. If you want to help people in Africa, help people in Africa. But don't claim that it is somehow ethically "better", or required, to conduct moral actions as though time, distance, and personal relationships were not relevant. Those things absolutely are relevant, and they are the foundation on which ethics functions.


There is no perfect moral philosophy, unfortunately. Most people are a mix of the big three (utilitarianism, deontology, and virtue ethics) without regard for the inevitable contradictions. Usually that is good enough for most of us. What else can you do except drive yourself crazy?


Speaking as a mostly-utilitarian, none of those ostensibly utilitarian policies seem like they would actually increase overall welfare if people like me adopted them.

I mean, does anyone actually think that global welfare would be higher if they went around murdering people to steal their organs, and only refrains due to a deontological commitment to do no harm regardless of the consequences? The world is a big place, but I'd be pretty surprised to meet such a person, and it's hardly adding epicycles to a theory to reject a plan on the basis that it's bad!


You always have interesting thoughts.

To be honest, as a philosopher myself (in the sense of having a degree in the stuff; I've never been a theoretician), I find that the problem here is a conflation of two very different things: the philosophy of Utilitarianism/Effective Altruism, and the organizations that may use some of its ideas and catchwords to produce good results in the here and now.

Philosophies in general (even the minimalist ones) are rather terrifying structures that pretend to explain and direct the whole of reality. There is none, I believe, even among those in which I have found the most enlightening truths, that does not end up in a very bad place if brought to its ultimate consequences. There are some in which much is worthwhile, if taken in perspective; there are many in which a few points make good sense, pruned of excesses; and there are a lot that are only exercises in logic gone amok, which leave us either crazed with a sense of superiority or sunk in a wretched state of despondency.

And I believe that Effective Altruism, as a philosophy, is among the latter. I find MacAskill repulsive on an intellectual and personal level, for example. Reading Peter Singer makes me think of Big Brother in Plato's Republic.

But choosing charities on the basis of their transparency and their effectiveness in getting results with the funds employed does not mean subscribing to a philosophy (except in that very minimal acceptation of philosophy that means "strategy driven by principles").

To categorise a charity like GiveWell according to the philosophy of Effective Altruism is, in my opinion, misleading (and a sort of attempt to offset the bad mental associations that Effective Altruism produces). Cost-effectiveness, producing as much result as possible per dollar spent, may be related to Effective Altruism, but it is surely not the philosophy in toto; in fact, GiveWell's About page does not mention Effective Altruism, or even "altruism", at all.

It is, I think, the same problem evident in the habit, especially in the USA, of conflating socialist and social-democratic parties with Marxism as a whole. There are many old and completely respectable parties in my native Europe that have their roots in radical and Marxist philosophies, that have been or even still are members of the Socialist International, and yet are far from revolutionary: they have governed their respective countries for repeated terms in democracy and prosperity, and they subscribe neither to the obsolete philosophy of Marxism nor to its rigid economic theory (I think of the German SPD, the UK Labour Party, the French PS, etc.). Elements of Marxism can be found in their ideas, insofar as these are useful in present-day society, but they cannot in any way be called Marxist.

The same, I think, is true of charities like GiveWell and the philosophy of Effective Altruism.


What’s annoying about these critiques is the lack of effort to even engage with the logic of EA. Yeah, it encourages people to take high-paying jobs and donate rather than go into non-profits. That’s because a) the supply of starry-eyed altruists vastly outstrips the demand, and b) most non-profits are not-particularly-effective money sinks. Unless you’re some sort of charity-management wunderkind, 10% of your doctor or software engineer salary can probably do more good in the world than handing out t-shirts at charity walks.

As even one of the critics you cite notes, utilitarianism is often correct locally. Sure, don’t build your whole life around strict utilitarianism, but if you use evidence about “what does more good” from a consequentialist standpoint, it will usually steer you better than “common sense” moralizing or any of the various aesthetics-based philosophies of more popular charitable movements.

I get really mad about all the money wasted on Komen-style “awareness campaigns”, about all the malnourished kids in places that ban GMO foods, and about all the carbon dumped into the atmosphere and petro-state dictators appeased because “glowing rock scary and gross”. EA isn’t perfect, but it at least points in the direction of effective solutions.


My main issue with EA is that it's blatantly speciesist. GiveWell promises to "search for the charities that save or improve lives the most per dollar," but they mean only *human* lives. Not plants and not animals, all of which can be adversely affected by adding more human life to the planet or upgrading the life of existing humans. Which I guess is fine with someone who doesn't want his money going to alleviate the suffering of wild fish (though this attitude seems strange when that someone is a vegetarian). Also, it seems kind of short-sighted to save as many lives as possible but then not spend any money making sure the planet remains habitable for current and/or future people.


Utilitarianism is just one type of consequentialism, and I think EA works with any kind of consequentialism. As long as you have a value system that judges actions mainly by their outcomes, that’s compatible with EA, and I doubt most in the movement are hardcore utilitarians.

It really is hard to overstate EA’s impact on evaluating charities by their outcomes, as opposed to dumb metrics like “% of revenue spent on services vs. operating costs”.


Isn't all of this utilitarianism controversy just good marketing for a basic idea: getting the most bang for your buck when trying to attenuate bad things? It strikes me as just a talking point to keep EA discussed.


I'm not a Utilitarian, but I like GiveWell and donate to it. Most people will agree that saving children from malaria is a good thing to do regardless of their other moral beliefs.

The more substantial criticism of GiveWell that I've heard is political rather than philosophical. The argument goes: "As long as the people that we are saving from malaria are still subjected to corrupt, exploitative governments, any benefits that rich Westerners transfer to them will be short-lived. The political leaders will use their power to extract from the populace any benefits that outsiders send over to them." Thus, the claim is that GiveWell's focus on individuals and its indifference to politics leads it to overestimate the "effectiveness" of its altruism.

To me, this is more compelling than rehashing old debates about Utilitarianism. But I don't know if it is right. It would be interesting to know whether anyone has studied this.


warning, quarter-baked thoughts follow. please don't read:

I think the insane surgeon is correct in some sense, but the reason we don't agree with the conclusion is the social intuitions placed in our human brains by evolution. We have an aversion to making people do things against their will, and an aversion to people who want to kill. It can both be true that the surgeon capable of carrying out his plan is not likely to be the type of person who garners a lot of reciprocal social capital, and that the total outcome would be better. But just as the outcome of a polar bear eating a seal alive isn't good in utilitarian terms, we also know that the polar bear can't be blamed in any sense for killing the seal.

Imagine a case where you are some kind of god and get to pick one of two timelines to be the true timeline for a particular universe. In one, Persons A, B, C, and D die early deaths that shave an average of 30 years off their lifespans, and Person E lives a full life. In the other, Person E dies 30 years early in their sleep and Persons A-D live full lives.
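Tallying the life-years at stake under those stipulated numbers (and assuming nothing else differs between the timelines):

\[
\text{Timeline 1: } 4 \times 30 = 120 \text{ life-years lost}, \qquad
\text{Timeline 2: } 1 \times 30 = 30 \text{ life-years lost},
\]

a difference of 90 life-years in favour of the second timeline.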

By choosing the second timeline to be the one that exists, you are making what feels to me like an obviously correct choice, and it's the exact same choice that the crazy surgeon makes. I can't think of a reason why it feels correct for the god to make the decision but wrong for the surgeon to make the same choice with the same outcomes. In both cases Person E doesn't consent to having their lifespan shortened, so it can't be a violation of Person E's rights that explains the difference. In the surgeon case, absent any intervention, Persons A-D will have shortened lives; but even if we re-frame the question so that that's the default option in the god case, it doesn't change how hard the choice would be, and it doesn't seem right that the status quo should have any weight on what the moral choice is.

There are practical considerations that get in the way of making the crazy-surgeon idea a good real-world policy: people will seek revenge; people will change their lives and grow paranoid about organ thievery, eroding trust and reducing the effectiveness of various social institutions; surgeries aren't 100% successful in the real world; people are not equally worthy of receiving new organs, or are not perceived by others to be equally worthy; etc. But without these real-world downstream effects included in the calculus, I think it favours the surgeon's case.

In the real world, where the long-term practical effects of this policy would actually exist, the utilitarian choice is actually not to enact or encourage the crazy-surgeon policy into law. I think I can just bite the bullet and agree that the surgeon is morally correct, but I'm fine with hypocrisy on this point for myself, for anyone who ends up being Person E, or for anyone who's friends with or related to Person E.

Evolution gave us a self-preservation instinct and an instinct to protect our close relations and harm those who would do harm to them, and these aspects of us explain our aversion to the surgeon's case. I wouldn't want to be rid of those things, because they are definitional to being human. I think we have to accept that we cannot be morally perfect with human brains, and not swim against the stream.

If it were possible to change our natures, it might be good to make everyone more altruistic, such that a crazy surgeon is not required because anyone who could save more than one life with their organs would want to do so. That point is moot, though, because if we had the technology to change our nature like that, we'd probably be able to just grow a new liver for anyone too.


I know this is beside the point, but I always find it weird when people DON'T count non-existent lives as much as existent ones, though I have a theory about why. I think there are "happiness is good" utilitarians and "suffering is bad" utilitarians. The former judge actions by asking, "Who benefits from this?" The latter judge them by asking, "Whose suffering is alleviated by this?"

If you're a "happiness is good" utilitarian (and full disclosure, I am), then when you think about someone who otherwise wouldn't exist getting an existence good enough to be better than non-existence, you see the benefit they get from existing, and they're clearly better off. But if you're a "suffering is bad" utilitarian, you see someone who wasn't suffering from not existing (because if you don't exist, how do you suffer?), so you don't see what suffering is alleviated by them getting to exist.
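One rough way to formalize the two camps (my own notation, nothing standard): for a prospective life containing lifetime happiness $h$ and suffering $s$, with $h > s$ (a life worth living),

\[
\Delta_{\text{happiness is good}} = h - s > 0, \qquad
\Delta_{\text{suffering is bad}} = -s \le 0,
\]

so the first view scores creating the life as a gain, while the second can never score it as better than neutral, since non-existence carries zero suffering to alleviate.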
