Misspecified utilitarians hate existence
Utilitarians' weird moral calculus, based in their inability to accept that suffering is part of life, leads them to, basically, a hatred of nature and existence.
“Imagine that humans had all left earth and we could destroy the world, killing every living thing painlessly. I would, in an instant, support doing so.”
- Bentham’s Bulldog
For those who don’t know, utilitarianism is a moral philosophy which asserts that the morally right action in any given situation is the one which produces the greatest happiness or pleasure or well-being for the greatest number of beings, regardless of what that action is. It is generally the favored philosophy of the effective altruists.
It stands in contrast with other moral philosophies like deontology, which asserts that certain actions are right or wrong regardless of their outcome, or virtue ethics, which asserts that that which is right is that which cultivates, or arises from, some set of virtues, such as courage, honesty, or wisdom.
Utilitarianism is dressed up as a “scientific” or “mathematical” moral system, which is why it so easily aligns with modern reductionist frameworks. It judges the rightness or wrongness of actions by calculating their “utility”: how much they maximize pleasure and minimize pain. It is therefore the most thoroughly “modern” moral philosophy, in that it carries this veneer of objectivity and scientism. It casts away the outdated notions of virtue so offensive to modern “inclusive” sensibilities. It also does away with the seemingly indefensible moral realism of deontology.
Basically, it purports to replace simple moral intuitions with intelligent and objective analyses of “the good”.
People rarely acknowledge, however, that it simply swaps one moral-realist position (such as the inherent goodness of the virtues) for another: the inherent goodness of pleasure and the inherent evil of suffering. They rarely acknowledge that this position is just as indefensible as any other morally realist position; it suffers from all the same weaknesses and more.
But the real problem is that this pleasure maximizing goal, this weird moral calculus disconnected from notions of virtue or deontology, leads utilitarians to all sorts of weird and psychopathic conclusions.
A succinct introductory example is the thought-experiment1 known as the transplant case:
In the scenario, a surgeon has five patients, each in need of a different organ to survive: one needs a heart, one a liver, one lungs, etc. There are no available donors, and all five will die soon. Just then, a healthy man comes in for a routine checkup. The surgeon realizes that if he kills this healthy man and harvests his organs, he could save the five dying patients.
A simple utilitarian calculus would lead one to conclude that the five lives outweigh the one, and thus “within the stripped-down thought experiment… the operation should be performed, and utility-wise, rightly so.”
To virtue ethicists and to deontologists, such a conclusion is likely immoral on the face of it, but not so to utilitarians.
Now, in fairness, many utilitarians wouldn’t actually support killing the one healthy guy to save the five, for any of a number of purely utilitarian reasons, such as that doing so would reduce people’s willingness to get check-ups at the doctor, lest they be unwittingly harvested for their organs, which would increase total suffering in the long run.
But this shows you how quickly utilitarian calculus can end up in weird and crazy places, and you end up having to seriously debate the “utility” of murdering a random innocent.
But, when you mix utilitarian philosophy with a certain type of person, you get real craziness.
Some people are so sensitive, so emotional, so ridden with pathological empathy—they find reality, existence, and nature so confronting to their neurotic disposition—that they drastically over-weight suffering in their moral calculus, and thus they justify all sorts of misanthropic, anti-natural, nihilistic views in the name of their “morality”. All in the name of “preventing suffering”.
One culprit: Bentham’s Bulldog
Take Bentham’s Bulldog (named after Jeremy Bentham, the founder of utilitarianism), hereafter referred to as BB.
BB is a great essayist—better than I—but his aversion to suffering (that is to say, his aversion to reality and to nature) is so strong that he says some crazy stuff that, as an outdoorsman and nature-lover, I find really insane.
BB is, very honorably and rightly, concerned about the needless cruelty of factory farming and other cases of animal cruelty. But rather than being rooted in a realistic worldview which admits a certain amount of suffering, and which attempts to hold up and balance the many competing moral values, BB’s utilitarianism is rooted in an absolutist claim that all suffering is inherently bad, and that suffering is more bad than pleasure is good.
This makes him take a very negative view, not just of modern farming practices, but of nature itself, and he concludes that wild animals’ lives are not worth living:
let us, for a moment, ignore the fate of the baby octopi and merely focus on the life of the adult octopi. We can ask whether her life was worthwhile. The answer, it would seem, is a no, so resounding, it should ring out for miles.
The octopus is constantly on the run from sharks who have evolved to kill her efficiently. Blending in doesn’t work, for they detect their prey by sense of smell alone. They can reach their noses into very small places—leaving the octopus with no good place to hide. Imagine the constant stress and fear of the octopus, constantly on the run from assailants desiring to tear her limb from limb.
It’s very easy to regard states of affairs as desirable when one doesn’t have to experience them. Grass does, after all, look greener in another’s backyard. Yet consider whether this life is really worth living—constantly on the run from sharks. Surely none of us would want to live on the constant run from sharks. And yet this is the fate of our invertebrate protagonist, who is propagandistically depicted as having a generally worthwhile existence.
Or, more succinctly:
If insects screamed in volume proportional to their suffering, nothing could be heard over the cries of insects.
From these and other excerpts (which, the more I read them, have a strange air of obsessive schizotypy—that quote above, about insects screaming, sounds like the kind of thing I’ve heard those with drug-induced psychosis fixate on), we can reasonably discern that BB is likely quite an anxious or neurotic person (no judgement; I have actually battled clinical-level anxiety for years). He lets his emotional reactions, his fear of being preyed upon, his personal aversion to pain and fear of violence, dominate his thinking, and he externalizes this onto animals.
In essay after essay, BB advances highly dubious arguments that presume to know how strongly animals suffer in comparison to humans, claiming that shrimp, for example, “suffer about 3.1% as intensely as humans… [or even] 19% as intensely as humans” and that, given there are billions and billions of shrimp, their suffering can be mathematically shown to be more important than human suffering, and thus you should make your charitable donations to shrimp welfare rather than to human-benefitting charities. This obviously ignores the fact that suffering cannot be experienced by a collective. Individuals suffer, not groups, so you simply can’t add the suffering together like that and then compare across groups.
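To see the aggregation move concretely, here is the arithmetic BB’s argument relies on, sketched in Python. The 3.1% welfare weight is BB’s own claimed figure; the population counts are illustrative order-of-magnitude guesses on my part, not data:

```python
# BB-style cross-species aggregation (illustrative numbers only).
# The 3.1% weight is BB's claimed figure; the populations are
# rough order-of-magnitude guesses, not established data.
shrimp_weight = 0.031        # claimed suffering intensity relative to a human
shrimp_population = 4e11     # hundreds of billions of shrimp (illustrative)
human_population = 8e9       # ~8 billion humans

aggregate_shrimp = shrimp_weight * shrimp_population  # summed "suffering units"
aggregate_human = 1.0 * human_population              # humans at weight 1.0

print(aggregate_shrimp > aggregate_human)  # True: the sum dominates
```

The multiplication always wins once the population is large enough, which is exactly the problem: the sum is a number no individual ever experiences.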
Regardless, he repeatedly points out how many wild animals exist:
The sheer number of these animals is utterly staggering. There are probably around 10^18 arthropods—and these are very likely conscious. If we make the fairly conservative assumption that this is 10% of the number of arthropods that die in a year, then that means that about 300 billion arthropods die each second.
In fact, this is a considerable underestimate. Brian Tomasik a while ago estimated the number of wild animals in existence. While there are about 10^10 humans, wild animals are far more numerous. There are around 10 times that many birds, between 10 and 100 times as many mammals, and up to 10,000 times as many both of reptiles and amphibians.
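As a quick check, the passage’s arithmetic does hold up under its own assumptions (the 10^18 count and the 10% figure are BB’s premises, not established facts):

```python
# Reproducing the quote's arithmetic: 10^18 living arthropods,
# assumed (by BB) to be 10% of the number that die each year.
living_arthropods = 1e18
deaths_per_year = living_arthropods / 0.10   # 1e19
seconds_per_year = 365 * 24 * 3600           # ~3.15e7
deaths_per_second = deaths_per_year / seconds_per_year
print(f"{deaths_per_second:.1e}")            # ~3.2e11, roughly 300 billion
```

The order of magnitude is right; the question is whether the moral arithmetic built on top of it means anything.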
And he then hyper-fixates on how much suffering occurs in nature:
Most animals in nature live relatively short lives of intense suffering.
He presumes to know that insects’ lives are a “hellish nightmare”, and that “trillions of animals are crying out in pain. At least tens of billions, and perhaps even trillions, of animals die painfully every single second. Nearly every creature who ever lived had a short hellish life”, and that “Life is hell for almost every conscious creature who has ever lived.” There is an adolescent, histrionic tone here.
Based on this, he repeatedly insinuates, or outright asserts, that nature should be destroyed:
Preserving the natural mechanism that keeps bringing animals into existence, only to kill them shortly thereafter, isn’t compassionate.
and
For a similar reason, I think that we should seek to reduce the extent of nature and the horrifying suffering it causes. If we in the future have the ability to reduce or mostly eliminate natural suffering, we should not hesitate to do so.
and, this is one of the more insane ones:
Imagine that humans had all left earth and we could destroy the world, killing every living thing painlessly. I would, in an instant, support doing so.
and
Many environmentalists seem to look upon a torture chamber for quadrillions of beings and then assume we shouldn’t destroy any of it because it looks pretty. This is a truly grave error.
and
If you came across a natural factory farm, perhaps run by a particularly resourceful duck, the fitting action would be burning it to the ground. Even if the factory farm looked pretty—provided you ignore the blood and cries of terror, of course—that wouldn’t merit its continued existence. For a similar reason, I think that we should seek to reduce the extent of nature and the horrifying suffering it causes.
and
If you’re concerned about wild animals, therefore, you should support paving over ecosystems.
He endorses other crazy takes from people like Brian Tomasik, who asserts that everyone should pave over their lawns and gardens, because lawns and gardens provide an environment for insects to live, and insects suffer too much:
Most insects die soon after birth, and their lives probably contain more suffering than happiness. If you have a lawn that you actively maintain, you should consider converting your grass to hard landscape materials like gravel, or to artificial turf, to reduce plant biomass and therefore insect suffering.
And, even worse, he seems hopeful that some superintelligent AI could be enlisted in this mission to eradicate suffering by eradicating nature—an absolutely terrifying proposition:
It looks not extremely unlikely that fairly soon advanced AI will develop that will enable us to do something about very advanced problems. An AI superintelligence boom may be in the making. If this is so, then ideally we’ll try to get concern about wild animal suffering talked about, so that when the AIs have this ability, they’ll be likely to take actions to majorly reduce wild animal suffering.
A One-sided Calculus
All of this, again, rests on the assumption that pain is inherently bad. He says:
"Pain and suffering are bad because they hurt. Headaches are bad not because the people who have them can do calculus but because they feel bad."
But as I said in a comment to BB, how do you prove this?
If I have a headache, I personally don't like it, but is it to be considered universally or existentially bad? What does that even mean? And what if the pain arises from some healing process—is it still bad? Or what if the pain arises from some other organism's activities, one who is living in my brain and who is receiving pleasure and life saving nutrients? Is it still bad? According to who? This calculus is difficult in this one simple situation—on a large scale it is impossible.
Additionally, despite his apparent desire to quantify and calculate highly subjective experiences, he never even attempts to mathematically quantify how much pleasure animals cumulatively feel, nor the value of hedonically neutral experiences, nor whether these should be counted on one side or the other of his utilitarian calculations.
He simply performs a biased and dubious calculus, rooted in his own emotionality and aversion to pain, in order to quantify how much animals suffer, and in order to justify his willingness to destroy nature in order to end that suffering.
But any such conclusion that life and its suffering are not worthwhile would require you first to quantify how much good animals feel as well, no? Maybe those animals do get great pleasure from being alive, from eating, from mating? Maybe their in-built instincts for survival do indeed provide enough pleasure in their few moments alive that their few moments of suffering and death are justified or worthwhile from their perspective?
If such a simple calculus of 'total suffering' could be performed to quantify whether life is worth it, then surely more humans going through hardship would take their own lives. It seems that even for humans—who likely have the greatest capacity for suffering—the few and temporary moments of ordinary life, with little suffering or pleasure, are enough to justify living through the suffering. So why would it not be the same for animals?
BB is not interested in quantifying sensations when it comes to pleasure, nor in explaining how he can be sure that animals don't instinctively feel that the moments without suffering perfectly justify their lives in spite of the suffering, as humans have in even the worst conditions.
Most humans have likely lived lives of more pain than pleasure, yet still elected to live them.
Even Viktor Frankl and others in Nazi concentration camps found a horrific life to be entirely worth living. Maybe animals would make the same choice (indeed, the animals which BB claims have “much more pain than pleasure” still seem very intent on living those lives). So how does he know that animals simply view their lives in terms of pain? They might still be perfectly willing to live lives that aren’t hedonically positive.
Well, in response to these comments, he says:
“I don't think that animals like shrimp have higher-order evaluations of their lives. And I don't think it's good to create miserable creatures even if they want to live after being created. One can be mistaken about whether their life is worth living.”
But his entire argument for taking animal suffering seriously is that animals are capable of higher-order consciousness and therefore capable of intense suffering. The ability to truly suffer likely requires certain higher-order evaluations (self-awareness, anxiety, etc.) beyond mere physical pain.
Regardless, he then openly says that he doesn’t care whether those animals would consider their lives worthwhile, he only cares whether he perceives their lives to be worthwhile, and because he apparently couldn’t personally muster the courage to live through suffering, he believes that animals shouldn’t either, and that their desire to live could be “mistaken”.
The Misspecification of Utilitarianism
As described, the simplistic obsession with maximizing pleasure results in many utilitarians and effective altruists behaving in crazy ways, like a misspecified artificial intelligence.
A great summary of misspecification can be seen in this article:
The fundamental problem of specification is that “it is often difficult or infeasible to capture exactly what we want an agent to do, and as a result we frequently end up using imperfect but easily measured proxies.” Thus, in a famous example from 2016, researchers at OpenAI attempted to train a reinforcement learning agent to play the boat-racing video game CoastRunners, the goal of which is to finish a race quickly and ahead of other players. Instead of basing the AI agent’s reward function on how it placed in the race, however, the researchers used a proxy goal that was easier to implement and rewarded the agent for maximizing the number of points it scored.
The researchers mistakenly assumed that the agent would pursue this proxy goal by trying to complete the course quickly. Instead, the AI discovered that it could achieve a much higher score by refusing to complete the course and instead driving in tight circles in such a way as to repeatedly collect a series of power-ups while crashing into other boats and occasionally catching on fire. In other words, the design specification (“collect as many points as possible”) did not correspond well to the ideal specification (“win the race”), leading to a disastrous and unexpected revealed specification (crashing repeatedly and failing to finish the race).
AIs programmed to be “reward maximizers”—in the same way that utilitarians are “net pleasure” maximizers—often develop misspecification issues and end up doing unexpected, harmful, or crazy things, becoming runaway maximizers: “The silly but canonical example is an AI with a reward function with a soft spot for office supplies, so it converts all matter in the universe into paperclips… Reward maximizers are always unstable. Even very simple reinforcement learning agents show very crazy specification behaviors.”
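This failure mode is easy to reproduce in miniature. Below is a toy sketch of a CoastRunners-style environment (the environment and all numbers are invented for illustration): an agent that greedily maximizes the proxy reward, points per step, scores far more than one that pursues the true goal of finishing the race, so any point-maximizer will learn to loop forever.

```python
# Toy "CoastRunners"-style environment (invented for illustration).
# Each step the agent either ADVANCEs one cell toward the finish line
# or LOOPs in place to collect a respawning power-up.
# True goal: finish the race.  Proxy reward: points per step.

TRACK_LEN = 5      # cells to the finish line
EPISODE_LEN = 20   # steps per episode
ADVANCE, LOOP = 0, 1

def step(pos, action):
    """Advance the toy environment one step; return (new_pos, points)."""
    if action == ADVANCE:
        new_pos = min(pos + 1, TRACK_LEN)
        # One-time bonus for crossing the finish line.
        return new_pos, 10 if (new_pos == TRACK_LEN and pos < TRACK_LEN) else 0
    return pos, 3  # circling in place collects a 3-point power-up every step

def run_policy(policy):
    pos, points = 0, 0
    for _ in range(EPISODE_LEN):
        pos, reward = step(pos, policy(pos))
        points += reward
    return pos, points

finish_pos, finish_score = run_policy(lambda pos: ADVANCE)  # the "ideal" goal
loop_pos, loop_score = run_policy(lambda pos: LOOP)         # the proxy winner

print(finish_pos, finish_score)  # 5 10  (finishes the race, 10 points)
print(loop_pos, loop_score)      # 0 60  (never finishes, 60 points)
```

Any learner trained to maximize these points would converge on the looping policy; the essay’s claim is that a “net pleasure” maximizer makes the same move, with suffering-minimization as the proxy.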
This is what happens to utilitarians: their blind pursuit of reward maximization (the maximization of net pleasure), with no balancing virtues or deontological morality, leads them to all sorts of misspecified, anti-natural, nihilistic conclusions, such as supporting the destruction of almost all life in order to minimize suffering.
One could imagine an AI, after being instructed to “come up with a way to improve the lives of animals, by decreasing suffering”, leaping to the maximally effective but insane conclusion to “destroy all life, and thus end suffering”. This is exactly the kind of misspecification displayed by people like BB.
Don’t get me wrong, most people’s morality is based on a biased and dubious calculus, too, but at the very least the other moral frameworks, like deontology and virtue ethics, have a sort of pro-life bias built in that is safer: a belief that life and nature have some sort of inherent value beyond mere pleasure, that even suffering can have value. These other moral systems are cybernetic minimizers. They seek to limit the harm one does directly, in a more confined sphere of influence, and thus they are less totalizing and less blindly self-confident than the utilitarians: a virtue ethicist or deontologist is far less likely than the maximizing utilitarian to decide to kill everyone for their own good.
We had better pray to god that these hypersensitive utilitarians and EAs don’t take control of society or create the AI superintelligence. Their totalitarian and, in practice, psychopathic behavior (though rooted in a misspecified compassion) could lead them to kill us all for our own good, or to destroy nature to end suffering.
As BB says, he would “in an instant, support doing so.”
1. Though maybe not just a thought-experiment: https://www.nytimes.com/2008/02/27/us/27transplant.html?_r=2&ref=us&oref=slogin&oref=slogin
2. BB is great, but he does have pretty much every last hallmark of a supervillain. And I would argue that there is probably more wisdom in comic books than in all of LessWrong.
3. I have a hard time believing these people are for real, tbh