This is the final short piece in my little war with effective “altruists” and utilitarians who have come to hate nature and wish for its destruction—but only out of compassion, you see.
These are people who see nature only from a distance, through their TV or computer screens, crying like my girlfriend does when she sees a cute gazelle get eaten by a lion in a nature documentary.
Their weakness and hypersensitivity drive them to a whole host of pathological, essentially psychopathic beliefs about what we should do to nature to prevent animal suffering. They are like a misspecified AI which, once instructed to help end suffering, concludes that it should pave over nature, turning it into a giant cement parking lot, all to prevent animals from ever being born and thus from ever suffering.
Yes, these people actually support that idea. Read my other pieces for an overview:
First,
then,
"Total suffering" is not real (against utilitarians part 3)
The purpose of this final short essay is to show that the whole utilitarian calculus is based on a fallacy.
Dust Specks and Torture
Perhaps the most illustrative example of how twisted up utilitarians can get in their moral calculus is the “torture vs. dust specks” debacle, in which hundreds of incredibly intelligent people on the LessWrong forum tied themselves in knots (for literally years) trying to decide which was worse: one person being horrifically tortured for fifty years without rest, or an unfathomably large number of people, far more than billions, each getting a mildly annoying dust speck in their eye.
Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
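(A gloss of my own, for a sense of scale: 3^^^3 is Knuth's up-arrow notation, where each additional arrow iterates the operation below it.)

```latex
3 \uparrow 3 = 3^3 = 27
3 \uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987
3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow \left(3 \uparrow\uparrow 3\right)
% i.e. a power tower of 3s roughly 7.6 trillion levels high
```

The result dwarfs the number of atoms in the observable universe; “billions” does not begin to describe it.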
I think the answer is obvious. How about you?
They literally couldn’t decide which they would choose (well, actually, many did choose—and they chose the torture)… because something something, “total suffering”.
Many of them argued, following strict utilitarian reasoning, that a vast number of minor harms eventually outweighs a single severe harm, and that the sheer unbalanced numbers would drive them to condemn that poor soul to fifty years of torture in order to spare all those people their dust specks, thereby minimizing some abstract concept of total suffering.
Such fallacies
But the whole project of calculating and comparing total utility by trying to aggregate harms across people is completely fallacious.
Basically: suffering is not and cannot be experienced cumulatively, because experience itself exists only within individuals and does not aggregate across them.
Suffering Is Subjective, Individual
Suffering is an irreducibly individual experience. No person feels someone else’s suffering. Each individual feels only their own experience in full, but does not and cannot experience anyone else's.
Even if a billion people all feel a dust speck in the eye, this merely results in a billion tiny, isolated instances of trivial discomfort, and none of it is compounded or intensified by the fact that others are also suffering. It is not metaphysically or phenomenologically valid to say this adds up to some instance of grave suffering.
One person missing lunch a billion times is starvation. A billion people each missing one lunch is just a billion individual mild hungers—not starvation.
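If it helps to see the fallacy in miniature, here is a toy sketch in Python (the numbers are obviously made up; the whole point of this essay is that suffering isn't really a summable quantity, but play along):

```python
# Toy model (all numbers hypothetical): rate each person's experienced
# suffering on a 0-100 scale.
DUST_SPECK = 0.001   # a speck barely registers in consciousness
TORTURE = 100.0      # the worst a single mind can undergo
N_PEOPLE = 10**9     # a billion dust-speck victims

# The utilitarian aggregate treats suffering as summable across minds:
total = DUST_SPECK * N_PEOPLE
print(f"aggregate 'total suffering': {total:,.0f} units")  # 1,000,000 units

# But no subject ever experiences that sum. The worst experience any
# single individual actually has in each scenario:
print(f"worst individual experience, speck world:   {DUST_SPECK}")  # 0.001
print(f"worst individual experience, torture world: {TORTURE}")     # 100.0
```

On the aggregative reading, the speck world contains ten thousand times more suffering than the torture world. On the view argued here, the sum is a number nobody feels; the only experienced magnitudes are 0.001 and 100.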
There Is No Subjective “Total Pain”
There is no entity that experiences “the total sum” of pain across individuals. Without a collective consciousness, there's no meaningful sense in which “3^^^3 dust specks” are experienced as a total pain.
The only morally relevant unit of suffering is the individual.
Thus, no matter how many trivial harms occur, there is no subject who experiences their supposed aggregate. Suffering only matters to the degree that it is actually experienced by someone, and suffering is only ever experienced by individuals.
No one out there experiences some cumulative total suffering. It is not a level of suffering ever actually experienced by anyone. It does not exist.
Moral Value Is Tied to the Intensity and Type of Suffering
Ethically, we respond not just to the quantity of suffering but to its quality: its depth, its duration, and its cause. In the LessWrong crowd’s example of the dust specks and torture, they even admit that the dust speck is only an inadvertent mild annoyance for each person, one that barely registers in consciousness. The fifty years of torture, by contrast, is an intentional decision to victimize, one that destroys the person’s identity, warps perception, and consumes the person's life entirely.
This introduces an asymmetry: the moral salience of torture is not just greater in degree but categorically different. The dust specks shouldn’t even register as morally urgent; they're just background noise in the moral calculus.
We shouldn’t regard 1,000,000 paper cuts as morally equivalent to one person having their hands cut off. Even though one could tally “more pain events” or more total suffering, we intuitively know they aren't in the same moral universe.
Aggregating Harms Risks Absurdity
If we allow arbitrarily large numbers of tiny harms to "add up" to a great harm, we risk getting dragged into paradoxes, moral absurdity, and unbelievably repugnant conclusions.
Living in this abstract moral universe of thought experiments, with its hypothetically flawless ability to count, quantify, and aggregate harms across absurd numbers of beings, leads us to radically overweight the small hypothetical harms of the many (like the suffering of billions of insects) and to treat them as more important than great, real harms experienced by actual individuals.
This leads to untenable decisions: accepting active torture or murder as morally preferable to an enormous number of people experiencing some inadvertent annoyance, or concluding that people should neglect other charities and give their money to ludicrous attempts to prevent insect suffering.
Justice and Dignity Demand Non-Aggregation
Moral decisions must respect the integrity of individuals, not treat them as fungible units in an equation, sacrificing them to minimize a hypothetical, non-existent sum value. Aggregative reasoning treats people as vessels for utility or disutility, rather than as ends in themselves.
By refusing to equate mass trivial harms with individual catastrophic ones, we uphold the principle that individuals have value, and that some harms are categorically bad and cannot be justified, regardless of the “math”.
So…
Only individuals feel pain. Suffering does not sum across persons because there is no shared ledger of pain, only a world of individual minds each experiencing their own—and only their own—pain or pleasure.
“Total suffering” is not real, and is not experienced by anyone.
No amount of mild discomforts, multiplied by billions, should ever outweigh the real, great suffering of individuals.
To claim otherwise is to mistake numerical abstraction for moral reality, to ignore the value of the individual, and to ignore that suffering differs in quality as well as quantity.
Hypothetical Navel Gazing
The publication Brackish Waters, Barren Soil recently posted an essay titled “Why I’m not a rationalist”.
Its author echoed my assertion that it’s no coincidence that utilitarians are almost always nerdy types, whose desire to end suffering and pave over nature likely arises at least partly out of their own weakness, neuroticism, and lack of experience with human life, nature, and suffering. They spend all their time interacting with life through screens, spreadsheets, and thought experiments. He says they are disembodied:
Now I don’t want to be mean (well, not in this post at least) — but as I continued to interact with the rationalist community, I found that many of them have a worldview that is quite, shall we say, disembodied.
I’ve noticed that certain domains of life are entirely missing — things like exercise, sports, and to a lesser extent, traditional dating and relationships.
The common thing about these domains is that you can’t really build a sophisticated model of how they work. You can read all the dating and fitness books, and go to seminars, and learn all of the theory — but it’s another thing entirely to apply that knowledge.
And it shows; when you look at some of the prominent rationalists, there’s a commonality in, ah, how they present themselves.
He points out that utilitarian philosophers have “spreadsheet brain”, end up down abstract rabbit holes of hypothetical navel gazing, and then use these weird thought experiments as if they had any bearing whatsoever on reality.
many of them have “spreadsheet brain” on steroids; the idea that anything can be solved if we simply do the proper feature extraction, data cleansing, and gradient boosting. It’s the idea that, with enough python frameworks, tensor operations, and first principles thinking, you can find the optimal solution for everything — right from the comfort of your own desk.
…
First there were the asinine attempts at trying to quantify certain values and properties of conscious existence; for example, attempting to figure out how many bugs are worth a human life, or how many shrimps saved is worth a bed net in Africa. Ultimately the exercises were trolley problems on crack, and there are so many different implicit assumptions that the actual implementation became unworkable.
…
Many hypotheticals are like this: trolley problems, drowning children, repugnant conclusions, etc. After a certain point, a lot of the pontifications become so far removed from any practical considerations that any lessons learned become meaningless for the real world.
One day we may flee before the compassionate utilitarians
I have said it before: it’s bad news for us that these utilitarian types run Silicon Valley, the AI startups, and advanced biotech, because, should they hold the reins of some future super-technology, they might end up making some absolutely crazy tradeoffs with all of our lives based on their shoddy abstract moral calculations. Only out of compassion, of course, and to minimize “total suffering”, a suffering that no one even experiences.
C.S. Lewis said it better
Unfortunately, just like everything else I write, this has all been a waste of time, because some more intelligent and more articulate person has beaten me to all of these arguments:
We must never make the problem of pain worse than it is by vague talk about the “unimaginable sum of human misery”. Suppose that I have a toothache of intensity x: and suppose that you, who are seated beside me, also begin to have a toothache of intensity x. You may, if you choose, say that the total amount of pain in the room is now 2x. But you must remember that no one is suffering 2x: search all time and all space and you will not find that composite pain in anyone’s consciousness. There is no such thing as a sum of suffering, for no one suffers it. When we have reached the maximum that a single person can suffer, we have, no doubt, reached something very horrible, but we have reached all the suffering there ever can be in the universe. The addition of a million fellow-sufferers adds no more pain.
—C.S. Lewis, The Problem of Pain (p. 203)
Okay, that’s it. I am done with them now. I’ll get back to essays on social science, viewpoint diversity, or something.
Or maybe I’ll write about religion… I am currently having something of a crisis of faith in materialism and scientism.
Until then:
If you enjoy my writing and are inclined to do so, please consider giving me a tip!
Reign of Error: The failure of social science.