
A sketchy and overly simplistic theory of moral change

This is another post inspired by a conversation with Paul Crowley.

Up front warning: The morality described herein is a very hippy left wing morality. If you subscribe to any form of consequentialism you’re probably going to at least find it compatible with your own. If you think Some Victimless Crimes Are Just Plain Wrong Dammit you’re probably not. Or rather, you may agree with most of what I have to say but think there are other highly important things too.

I hate trolley problems.

Or rather, I think people’s responses to trolley problems are an interesting thing to study empirically. I just think they’re a lousy way to approach morality.

Why? Well, fundamentally, I don’t think most failures of morality are failures of moral reasoning. I think morality is much less complex than we want to believe it to be, and I think most reasonable moral commandments can be summed up as “You’re hurting people. Do less of that”.

That’s not to say that this is the be-all and end-all of morality, or that there are no tricky moral dilemmas. Obviously it’s not and there are. I just think that they are tricky because they are unusual, and that most failures of morality happen long before we reach anything that complicated, and simply boil down to the fact that you are hurting people and should do less of that. I also think that trying to convince ourselves that morality is a complex thing which we don’t understand is more of an excuse to fail to act morally (“Look! It’s hard! What would you do with this trolley?!”) than it is a true attempt to understand how we should act.

If you honestly find yourself in a situation where the rule doesn’t apply, then apply your complicated theory of moral philosophy. In the meantime: You’re hurting people. Do less of that.

Generally speaking, I feel people are pretty good at understanding this rule, and that if they don’t understand this rule then it is very unlikely that after a period of careful reflection and self-contemplation they will go “Oh! Right! I’m being a bad person. I should not do that, huh?”. A carefully argued case for why they should be a good person is also rather unlikely to work.

And yet people can clearly change their morality and become better people. If not individuals, at least societies can – many things we once did we now recognise as morally awful. Many things we currently do the next generation will probably recognise as awful.

So given that I believe self-reflection and argument don’t work, what does actually work?

I think most moral failings boil down to three basic issues:

  1. I don’t understand that I am hurting people
  2. I don’t believe that I am hurting people
  3. I don’t care that I am hurting people

And I think there is a fairly easy litmus test to see which of the three categories you find yourself in.

If someone says “When you do X, it hurts me because Y”, how do you respond?

If you say “Oh, shit. Sorry. I had no idea. I’ll stop doing X then!” then you did not understand.

If you say “Yeah, right. You obviously made that up” then you do not believe you are hurting people.

If you say “Oh well. I’m going to keep doing X” then you do not care that you are hurting people.

Let me set something straight right now: These are all acceptable answers.

I’ll take it as read that an apology and a promise to do better is acceptable.

“When you support gay rights, it disrupts my connection to god and makes my inner angel cry” – “Yeah, right”

“When you support the government taxing me, it makes me sad” – “Oh well. I’m going to keep supporting the government taxing you”

I don’t intend to defend these points. Only to point out that these are cases where I will react that way, and I think it is OK to do so.

The interesting thing about these three is that the forces which change them are all different.

In particular, only the first is amenable to reason. You can present evidence, you can present arguments, and at the end of it they will have a new understanding of the world and realise that their previous behaviour hurt people and hopefully will fix it. This is what I referred to previously as the moral argument for rationality.

How do you change the third? In a word: diversity. You know that thing that sometimes happens where some politician’s child comes out as gay and all of a sudden they’re all “Oh, right! Gays are people!” and they about-face on gay marriage? That’s moral change brought about by a change of caring. Suddenly the group of people you are hurting has a human face and you actually have to care about them.

How do you effect a change of belief? I don’t know. From the inside, my approach is to simply bias towards believing people. I’m not saying I always believe people when they say I’m hurting them (I pretty much apply a “No, you’re just being a bit of a dick and exploiting the rules I’ve precommitted to” get-out clause for all rules of social interaction), but I’m far more likely to than not. From the outside? I think it’s much the same as caring: People will believe when people they have reason to trust put forth the argument.

In short, I believe that arguments don’t change morals, people do, and I think that sitting around contemplating trolley problems will achieve much less moral change than exposing yourself to a variety of different people and seeing how your actions affect them.


Judgement and Asymmetric Errors

I recently read (most of) Epistemology and the Psychology of Human Judgement.

It’s a good book. Well written and well reasoned. I stopped reading it mostly because I’m not in the target audience – it’s largely a polemic against what they call Standard Analytic Epistemology, which isn’t a viewpoint I can even imagine holding, let alone hold, so I don’t really need convincing.

It does however have some interesting discussion of reasoning strategy. Unfortunately I think much of that discussion falls prey to a very common flaw that not enough people are aware of: Not all errors are created equal.

I shall elaborate.

One of the points the book makes (and which I’ve heard elsewhere) is regarding statistical prediction rules. The basic idea is this: In many cases, simple statistical prediction rules (SPRs) have a lower error rate than human experts. This is often the case even when you give the experts the result of the SPR and allow them to selectively defect. Therefore in these cases you should always follow the advice of the SPR and not allow the human expert to intervene.

Seems sensible enough, right? How could you possibly argue that a system which has more errors is better?

Right?

Wrong!

The thing is, the error rate is an utterly useless number in most problems: what you’re actually interested in is the rate of each specific type of error.

Consider the case of triaging patients coming in with possible heart attacks. You do an initial triage, and everyone you think might be having a heart attack is passed on for treatment. Everyone who isn’t has to twiddle their thumbs for a while. This is a classic case where SPRs do better than humans.

What’s the interesting feature here? Well, you really want to make sure that you don’t leave anyone having a heart attack twiddling their thumbs. That would be bad. If you let a patient or two through who aren’t actually going to die any time soon, not that big a deal.

So the desirable solution is to treat everyone! Of course, that’s not so practical: You only have a limited amount of resources. You can’t deal with everyone. That’s why you’re triaging in the first place.

But suppose the result of the SPR is leaving you a bit under capacity – not a lot, but say you could handle another 20 or 30 patients without seriously impacting your ability to handle the current ones. What to do?

Simple. Let the expert pick those 20 or 30 people from amongst the people that the SPR told to twiddle.

This cannot increase the rate of false negatives, no matter how bad the expert’s reasoning strategy is: everyone who would have been seen under the SPR alone is still seen, so no heart attack patients who would previously have got in will fail to get in under the new strategy. However, some of those extra patients might actually be having a heart attack (assuming the SPR isn’t perfect). So this reasoning strategy is strictly better – it stays within the resource constraints and saves more lives – even if it has a higher error rate (it’s not obvious that it does, but given typical SPRs and typical experts I expect it usually will).

What we have here is a form of selective defection: You are given the results of the SPR and allowed to change them if you desire. The key feature is that we only allow selective defection in one direction. As long as that direction is the direction we care about most, and as long as this selective defection is bounded to not consume more resources than we have, it has to be an improvement.
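
To make the shape of that strategy concrete, here is a minimal sketch in Python. It isn’t from the book, and the names (spr_says_treat, expert_ranking, spare_capacity) are made up for illustration; they stand in for whatever rule, expert and resource limit you actually have.

    # A toy sketch of "SPR first, bounded one-way expert override".
    # Nothing here comes from the book; it only illustrates the argument above.

    def triage(patients, spr_says_treat, expert_ranking, spare_capacity):
        """Return the list of patients to treat.

        spr_says_treat(p)  -> bool: the statistical prediction rule's verdict.
        expert_ranking(ps) -> list: the expert's ordering of patients, most
                                    worrying first (consulted only on SPR rejects).
        spare_capacity     -> int:  how many extra patients we can handle on top
                                    of the SPR's positives.
        """
        # Step 1: everyone the SPR flags is treated. The expert never overrides
        # this, so the combined strategy cannot have more false negatives than
        # the SPR on its own.
        treated = [p for p in patients if spr_says_treat(p)]

        # Step 2: the expert may only *add* patients, drawn from the SPR's
        # rejects, up to the spare capacity. Defection is one-directional and
        # bounded by the resource constraint.
        rejects = [p for p in patients if not spr_says_treat(p)]
        treated += expert_ranking(rejects)[:spare_capacity]

        return treated

However bad expert_ranking is, the worst it can do is spend the spare capacity on people who turn out to be fine; it can never bump anyone the SPR would have treated.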

Edit: A friend points out that SPRs are also used as a scoring mechanism: Rather than having them be binary yes/no you instead use them to rank the candidates and fill up to capacity. I think the point about asymmetric errors still holds, but certainly the above formalism doesn’t. I’ll have to think about it.


Axioms, definitions and agreement

A while ago I posted A Problem of Language, a response to an article claiming that Scala was not a functional language. This isn’t an attempt to revive that argument (and please don’t respond to it with such attempts. I’m likely to ignore or delete comments on the question of whether Scala is a functional language). It’s a post which is barely about programming, except by example. Really it’s a post about the philosophy of arguments.

My point was basically that without a definition of “functional language” (which no one had provided) it was a meaningless assertion to make.

Unfortunately this point isn’t really true. I think I knew that at the time of writing but glossed over it to avoid muddying the waters, as it’s false in a way that doesn’t detract from the basic point of the article. It’s been bugging me slightly, though, so I thought I’d elaborate on the point and the basic ideas.

Let’s start with what’s hopefully an unambiguous statement:

Brainfuck is not a functional language

Hopefully no one wants to argue the point. :-)

Well, why is Brainfuck not a functional language? It doesn’t have functions!

So, we’re making the following claim:

A functional language must have a notion of function

(in order to make this fully formal you’d probably have to assert some more properties functions have to satisfy. I can’t be bothered to do that).

Hopefully this claim is uncontroversial.

But what have we done here? Based on commonly agreed statements, we’ve proved that Brainfuck is not functional without having defined “functional programming language”. That is, my claim that you need a definition in order to meaningfully claim that a language is not functional is false.

What you need in order to make this claim is a necessary condition for the language to be functional. Then, by showing that the condition does not hold, you have demonstrated the dysfunctionality of the language.
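
For the formally inclined, the shape of that inference is just modus tollens. Here is a deliberately toy rendering in Lean, where HasFunctions is a made-up placeholder predicate standing in for whatever stricter property you’d actually want (per the caveat above), not a real definition:

    -- Toy formalisation: "functional implies has functions" plus
    -- "Brainfuck has no functions" gives "Brainfuck is not functional".
    -- `HasFunctions` is a placeholder predicate, not a real definition.
    example {Lang : Type} {Functional HasFunctions : Lang → Prop}
        (necessary : ∀ L, Functional L → HasFunctions L)
        (brainfuck : Lang) (h : ¬ HasFunctions brainfuck) :
        ¬ Functional brainfuck :=
      fun hf => h (necessary brainfuck hf)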

But how do we arrive at necessary conditions without a definition? Well, we simply assert them to be true and hope that people agree. If they do agree, we’ve achieved a basis on which we can conduct an argument. If they don’t agree, we need to try harder.

A lot of moral arguments come down to this sort of thing. Without wanting to get into details, things like arguments over abortion or homosexuality frequently come down to arguments over a basic tenet: Do you consider a fetus to be of equal value to a human life, do you consider homosexuality to be inherently wrong, etc. (what I said about arguments RE Scala holds in spades for arguments on these subjects). It’s very rare for one side to convince the other of anything by reasoned argument, because in order to construct a reasoned argument you have to find a point of agreement from which to argue and that point of agreement just isn’t there.

Mathematically speaking, what we’re talking about is an Axiom. Wikipedia says:

In traditional logic, an axiom or postulate is a proposition that is not proved or demonstrated but considered to be either self-evident, or subject to necessary decision. Therefore, its truth is taken for granted, and serves as a starting point for deducing and inferring other (theory dependent) truths.

I consider this definition to be true, but perhaps a bit obfuscated. I’d like to propose the following definition. It’s overly informal, but I find it’s a better way to think about it:

An axiom is a point which we agree to consider true without further discussion as a basis for arriving at an agreement.

(This may give the hardcore formalists a bit of a fit. If so, I apologise. :-) It is intended to be formalist more in spirit than in letter.)

The most important part of this is that axioms are social tools. They don’t have any sort of deeper truth or meaning, they’re just there to form a basis for the discussion.
