Memetic Hazard Warning: Philosophy of Ethics
I remain skeptical of trolley problems as a useful proving ground for ethical theories, but one popped into my head earlier in the context of some other problems I've been unable to resolve, and it annoys me enough that I'm thinking about it anyway.
In the classic trolley problem, a runaway trolley will kill five people if it continues on its current path, but you can divert it onto another path where it will kill a different person instead. The question posed is: do you pull the lever?
I believe that I would pull the lever, although I do not entirely trust my ability to predict my actual actions in these circumstances (especially given how weird they are). But I also think that I would be doing something bad by doing so. It may be the most ethical action available to me, but it is still one that causes significant harm.
I’ve previously proposed that the correct solution to the trolley problem is that you should pull the lever and then turn yourself in for murder (modulo the obvious problems with the police as an institution). I still think this is probably true. I’m still thinking through the details, but I do think the idea of there being an obligation to provide reparations of some sort when you have harmed someone is an important one.
But consider the trolley problem with a twist. Instead of the setup posed above, six people stand before you. If you do not pull the lever, five of these six people, selected at random and unknown in advance, will die. If you do, one randomly selected (and similarly unknown) person will die. Do you pull the lever?
Unlike the classic trolley problem, I do not believe there is any possible ethical justification for not pulling the lever. I am prepared to entertain arguments to the contrary, but my current belief is that the only possible reasons for not pulling the lever (assuming ability to do so) are confusion about the nature of the problem or being a bad person.
But… why are these intuitions so different? Especially considering that the random selections may have been made in advance and simply be unknown to you, at which point the situation is in some sense identical to the classic trolley problem.
Additionally, if we accept that you have done harm and have an obligation in the case of the trolley problem, why is that not true here? Reflective equilibrium is an untrustworthy guiding principle, but I simply cannot come up with an intuition that allows me to believe that by pulling the lever in the randomized case you have done something wrong, so where does the obligation come from?
I think there are two things going on here, one interesting and one not:
- Trolley problems are idealised away from real-world ethical decision making in a way that breaks our intuitions here – we’ve essentially walled off the decisions leading up to this point, which hides any possible source of obligation. So the differences in this example, while an interesting test of ethical intuition, are not actually a good ethical guiding principle (maybe more on this in a future post).
- At an intuitive level we treat statistical lives and actual lives as distinct things. I am not sure we are wrong to do so. Statistical lives (as in the random example) are hidden behind a Rawlsian veil of ignorance which allows them to be interchangeable in a way that individual lives are not.
I’m not sure how I feel about this, and I suspect that the intuitive difference between statistical and actual lives might be not just wrong but actively dangerous – treating people as numbers is a great first step towards treating them as things, so if anything our sense of obligation should increase when we do so, in order to counterbalance that.
I do not yet have a conclusion, but I’m going to be thinking about this further.