Large scale utilitarianism and dust motes

Content note: Some dispassionate discussion of torture due to source material. No graphic descriptions. Some discussion of murder, mediated by various classic ethical dilemmas around trolleys.

Epistemic status: I think this is right, but I’m not sure it results in useful conclusions. At any rate, this was interesting for me to think about.

I’d like to talk about a thought experiment which comes from Less Wrong (of which I am not a member, but am an occasionally interested reader). Torture vs Dust Specks. There is also Sublimity vs Youtube, which is intended to be a less polarizing framing. In this post I’m going to abstract away slightly and refer to suffering vs inconvenience.

The experiment is this: Let N be some unimaginably huge number. It’s chosen to be 3^^^3 in the original post, but for our purposes it’s sufficient that N be significantly greater than the number of atoms in the universe. You may choose between two options. In the negative version, either one person suffers horribly for an extended period of time or each of N people experiences a tiny momentary inconvenience. In the positive version, either one person gets to experience a life of supreme bliss and fulfilment or each of N people experiences about a second of moderate amusement and contentment. Which of these options do you choose?

What this experiment is supposed to do is point out a consequence of additive utilitarianism with real-valued scores. Irritation/contentment has a small but non-zero utility (negative in one case, positive in the other), whereas suffering/sublimity has a large utility, but not N times as large. Therefore, by “shutting up and multiplying”, it’s clearly better to have the large number of small utilities, because they add up to a vastly bigger number. So you should respectively choose individual suffering as the lesser evil and mild contentment as the greater good.
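To make the multiplication explicit (the symbols here are my own bookkeeping, not anything from the original post): write \(u > 0\) for the disutility of a single dust mote and \(U\) for the disutility of the suffering. Additive utilitarianism compares

\[ N \cdot u \quad \text{versus} \quad U, \]

and since N dwarfs any ratio \(\frac{U}{u}\) you could plausibly assign, the dust motes always sum to the larger harm. The positive version is the same comparison with the signs flipped.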

I don’t generally agree with this sort of additive utilitarianism and I’ve previously considered this result… not necessarily wrong, but suspicious. Sufficiently far from the realm of possible experience that you can’t really draw any useful conclusions from it. Still, my moral intuitions are for preferring irritation over suffering, and I don’t really have a strong moral intuition for contentment vs sublimity but lean vaguely in the direction of contentment.

I recently had a mental reframing of the concept that has actually caused me to agree with the utilitarian answer: you should choose the single person’s suffering in the negative version and widespread mild contentment in the positive one.

The reframing is probably obvious if you’re a decision theorist and believe in things like Von Neumann-Morgenstern utility functions, and if you’re such a person you’ll think I’m just doing a proof from the axioms. I’m not such a person, but in this case I think the formulation is revealing.

The reframing is this: The natural interpretation of the question is “Would you cause this specific person to suffer to prevent the dustmotepocalypse?”. This is essentially the fat man version of the trolley problem. It personalizes it. The correct formulation, which from a utilitarian point of view is ethically equivalent, is that a randomly chosen individual amongst these N will be caused to suffer.

For me this becomes much simpler to reason about.

First, let’s consider another reformulation: instead of having a guaranteed individual amongst the N who suffers, your choice is that either each individual gets a dust mote or each individual has a probability \(\frac{1}{N}\) of suffering.

These are not exactly equivalent: in this case the number of people suffering is binomially distributed, which for N this large is indistinguishable in practice from a Poisson distribution with mean 1 (no physically possible experiment could tell the two apart). However, I find I am basically indifferent between them. The expected amount of suffering is the same, and the variance isn’t large enough that I think it matters. I’m prepared to say these are as good as morally equivalent (certainly they are in the utilitarian formulation).
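For the record, the exact distribution here (in my notation, not the original post’s) is this: each of the N people independently suffers with probability \(\frac{1}{N}\), so the number of sufferers is binomial, and

\[ \Pr[k \text{ people suffer}] = \binom{N}{k}\left(\frac{1}{N}\right)^{k}\left(1 - \frac{1}{N}\right)^{N-k} \approx \frac{e^{-1}}{k!}, \]

which is the Poisson distribution with mean 1 in the limit of large N. For N this large the approximation error is far below anything observable.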

This reformulation decouples the N people from one another, and we can reduce the problem to a decision about a single person.

So, on the individual level, which do you choose? A \(\frac{1}{N}\) chance of suffering or a tiny inconvenience?

I argue that choosing the chance of suffering is basically the only reasonable conclusion and that if you would choose otherwise then you don’t understand how large N is.

N is so large that if I were to give you the option to replace every dust-mote-equivalent piece of annoyance with a \(\frac{1}{N}\) chance of suffering, your chances of dying of a heart attack just before being struck by an asteroid landing directly on your head would still be greater than that chance of suffering ever coming to pass. On any practical level your choice is “Would you rather have this mild inconvenience or not?” If you have ever made a choice for convenience over safety then you cannot legitimately claim that this is not the decision you should make.
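To put this slightly more formally (measuring everything in “dust mote units” of my own invention): say you rate the suffering as S motes’ worth of badness, for whatever enormous S you like. The per-person comparison is then a certain disutility of one mote against an expected disutility of

\[ \frac{1}{N} \cdot S, \]

and the gamble is the better choice exactly when \(S < N\). You would have to rate a single person’s suffering as worse than N dust motes, which is to say worse than one mote per atom in the universe and then some, before the guaranteed mote comes out ahead.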

So if you gave me the opportunity to intervene in someone’s life and replace any amount of minor inconveniences with this negligible chance of suffering, the moral thing to do is obviously to take it.

And similarly, if I can do this for each of N people, the moral thing to do is still to take it. Even given the statistical knowledge that this will result in a couple of people suffering out of the N, it is still obviously the correct choice for any individual, and there is no significant interaction between the effects (the chance that anyone you know gets the bad option is still statistically indistinguishable from zero).
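A quick sanity check on that last parenthetical, with a made-up social-circle size of my own: if you personally know \(k\) of the N people, the probability that any of them draws the bad outcome is

\[ 1 - \left(1 - \frac{1}{N}\right)^{k} \approx \frac{k}{N}, \]

which, even for an absurdly generous k of a few billion, is zero for every practical purpose once N exceeds the number of atoms in the universe.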

One of the problems with deriving general lessons here is that I don’t think this tracks the sort of decisions of this shape that one actually makes in practice: when you’re choosing whether k people should suffer to prevent inconvenience to N - k, it’s not usually the case that N is indescribably huge or that the k are chosen uniformly at random. It tends to be more that the k people are some specific subgroup, often one which will be picked as convenient to persecute over and over again. Also, it turns out that there aren’t more people than atoms in the universe, so in practice the chances are not nearly so minuscule, and it’s less likely that every reasonable person should decide the same way. So, as usual, I think that the elided details of our idealized thought experiment turn out to be the important ones.

Still, it’s interesting that when I worked through the details of the VNM + utilitarian argument I found I agreed with the conclusion. I still don’t regard these arguments as a general source of ethical truth, but you can broadly apply similar reasoning to a lot of large-scale systems design, so this has made me at least more inclined to pay attention to what they have to say on the subject.
