This is another post inspired by a conversation with Paul Crowley.
Up front warning: The morality described herein is a very hippy left wing morality. If you subscribe to any form of consequentialism you’re probably going to at least find it compatible with your own. If you think Some Victimless Crimes Are Just Plain Wrong Dammit you’re probably not. Or rather, you may agree with most of what I have to say but think there are other highly important things too.
I hate trolley problems.
Or rather, I think people’s responses to trolley problems are an interesting thing to study empirically. I just think they’re a lousy way to approach morality.
Why? Well, fundamentally, I don’t think most failures of morality are failures of moral reasoning. I think morality is fundamentally much less complex than we want to believe it to be, and I think most moral commandments can reasonably be summed up as “You’re hurting people. Do less of that”.
That’s not to say that this is the be-all and end-all of morality, or that there are no tricky moral dilemmas. Obviously it’s not and there are. I just think that they are tricky because they are unusual, and that most failures of morality happen long before we reach anything that complicated, and simply boil down to the fact that you are hurting people and should do less of that. I also think that trying to convince ourselves that morality is a complex thing which we don’t understand is more of an excuse to fail to act morally (“Look! It’s hard! What would you do with this trolley?!”) than it is a true attempt to understand how we should act.
If you honestly find yourself in a situation where the rule doesn’t apply, then apply your complicated theory of moral philosophy. In the meantime: You’re hurting people. Do less of that.
Generally speaking, I feel people are pretty good at understanding this rule, and that if they are failing to follow it, it is very unlikely that after a period of careful reflection and self-contemplation they will go “Oh! Right! I’m being a bad person. I should not do that, huh?”. A carefully argued case for why they should be a good person is also rather unlikely to work.
And yet people can clearly change their morality and become better people. If not individuals, at least societies can – many things we once did we now recognise as morally awful. Many things we currently do the next generation will probably recognise as awful.
So given that I believe self-reflection and argument don’t work, what does actually work?
I think most moral failings boil down to three basic issues:
- I don’t understand that I am hurting people
- I don’t believe that I am hurting people
- I don’t care that I am hurting people
And I think there is a fairly easy litmus test to see which of the three categories you find yourself in.
If someone says “When you do X, it hurts me because Y”, how do you respond?
If you say “Oh, shit. Sorry. I had no idea. I’ll stop doing X then!” then you did not understand.
If you say “Yeah, right. You obviously made that up” then you do not believe you are hurting people.
If you say “Oh well. I’m going to keep doing X” then you do not care that you are hurting people.
Let me set something straight right now: These are all acceptable answers.
I’ll take it as read that an apology and a promise to do better is acceptable.
“When you support gay rights, it disrupts my connection to god and makes my inner angel cry” – “Yeah, right”
“When you support the government taxing me, it makes me sad” – “Oh well. I’m going to keep supporting the government taxing you”
I don’t intend to defend these positions here, only to point out that these are cases where I will react that way, and I think it is OK to do so.
The interesting thing about these three is that the forces which change them are all different.
In particular, only the first is amenable to reason. You can present evidence, you can present arguments, and at the end of it they will have a new understanding of the world and realise that their previous behaviour hurt people and hopefully will fix it. This is what I referred to previously as the moral argument for rationality.
How do you change the third? In a word: diversity. You know that thing that sometimes happens where some politician’s child comes out as gay and all of a sudden they’re all “Oh, right! Gays are people!” and they about-face on gay marriage? That’s moral change brought about by a change of caring. Suddenly the group of people you are hurting has a human face and you actually have to care about them.
How do you effect change of belief? I don’t know. From the inside, my approach is to simply bias towards believing people. I’m not saying I always believe people when they say I’m hurting them (I pretty much apply a “No, you’re just being a bit of a dick and exploiting the rules I’ve precommitted to” get-out clause for all rules of social interaction), but I’m far more likely to than not. From the outside? I think it’s much the same as caring: people will believe when people they have reason to trust put forth the argument.
In short, I believe that arguments don’t change morals, people do, and I think that sitting around contemplating trolley problems will achieve much less moral change than exposing yourself to a variety of different people and seeing how your actions affect them.