
Judgement and Asymmetric Errors

I recently read (most of) Epistemology and the Psychology of Human Judgment.

It’s a good book. Well written and well reasoned. I stopped reading it mostly because I’m not in the target audience – it’s largely a polemic against what they call Standard Analytic Epistemology, which isn’t a viewpoint I hold or can even imagine holding, so I don’t really need convincing.

It does, however, have some interesting discussion of reasoning strategies. Unfortunately I think much of that discussion falls prey to a very common flaw that too few people are aware of: Not all errors are created equal.

I shall elaborate.

One of the points the book makes (and which I’ve heard elsewhere) is about statistical prediction rules. The basic idea is this: In many cases, simple statistical prediction rules (SPRs) have a lower error rate than human experts. This is often the case even when you give the experts the result of the SPR and allow them to selectively defect from it. Therefore, in these cases, you should always follow the advice of the SPR and not allow the human expert to intervene.
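To give a feel for how simple these rules can be, here’s a hypothetical sketch in Python. Everything in it – the features, the weights, the cutoff – is invented for illustration, not taken from the book or from any real triage protocol.

```python
# A hypothetical SPR: a fixed weighted sum of a few measurements,
# compared against a threshold. All features, weights and the cutoff
# here are invented for illustration.

def spr_flags_heart_attack(patient):
    score = (2.0 * patient["st_elevation"]          # ECG finding, 0 or 1
             + 1.5 * patient["chest_pain"]          # 0 or 1
             + 1.0 * patient["abnormal_troponin"])  # 0 or 1
    return score >= 2.0  # cutoff would be fitted on historical outcomes
```

The striking empirical claim is that rules of roughly this crudeness routinely beat trained experts on error rate.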

Seems sensible enough, right? How could you possibly argue that a system which has more errors is better?

Right?

Wrong!

The thing is, the overall error rate is an utterly useless number in most problems: What you’re actually interested in are the rates of the specific types of errors, because false positives and false negatives rarely cost the same.
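To make that concrete, here’s a small worked example in Python with made-up numbers: two classifiers, where the one with the lower error rate is the one that misses more heart attacks.

```python
# Made-up numbers: 1000 patients, 50 of whom are having a heart attack.
# Classifier A makes 40 errors, every one a missed heart attack.
# Classifier B makes 60 errors, every one a needless admission.

patients, heart_attacks = 1000, 50

for name, false_negatives, false_positives in [("A", 40, 0), ("B", 0, 60)]:
    error_rate = (false_negatives + false_positives) / patients
    missed = false_negatives / heart_attacks
    print(f"{name}: error rate {error_rate:.1%}, heart attacks missed {missed:.0%}")

# A: error rate 4.0%, heart attacks missed 80%
# B: error rate 6.0%, heart attacks missed 0%
```

By the raw error rate, A is the better system; by any measure a hospital would actually care about, it’s a catastrophe.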

Consider the case of triaging patients coming in with possible heart attacks. You do an initial triage, and everyone you think might be having a heart attack is passed on for treatment. Everyone who isn’t has to twiddle their thumbs for a while. This is a classic case where SPRs do better than humans.

What’s the interesting feature here? Well, you really want to make sure that you don’t leave anyone who is having a heart attack twiddling their thumbs. That would be bad. If you let through a patient or two who aren’t actually going to die any time soon, that’s not that big a deal.

So the desirable solution is to treat everyone! Of course, that’s not so practical: You only have a limited amount of resources. You can’t deal with everyone. That’s why you’re triaging in the first place.

But suppose the result of the SPR is leaving you a bit under capacity – not a lot, but say you could handle another 20 or 30 patients without seriously impacting your ability to handle the current ones. What to do?

Simple. Let the expert pick those extra 20 or 30 patients from amongst the ones the SPR told to twiddle their thumbs.

This cannot increase the rate of false negatives, no matter how bad the expert’s reasoning strategy is: Everyone who would previously have been seen under this rule is still seen, so no heart attack patient who would previously have got in will fail to get in under the new strategy. And some of those extra patients might actually be having a heart attack (assuming the SPR isn’t perfect). So this reasoning strategy is strictly better – it stays within the resource constraints and saves more lives – even if it has a higher overall error rate (it’s not obvious that it does, but given typical SPRs and typical experts I expect it usually will).

What we have here is a form of selective defection: You are given the results of the SPR and allowed to change them if you desire. The key feature is that we only allow selective defection in one direction. As long as that direction is the one we care about most, and as long as the defection is bounded so that it doesn’t consume more resources than we have, it has to be an improvement.
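Here’s a minimal sketch of that rule in Python. The names (spr_flags, expert_concern, spare_capacity) are all invented, and a real system would have far more moving parts; the only point is the shape of the logic.

```python
# One-directional selective defection: treat everyone the SPR flags, then
# let the expert promote up to spare_capacity patients from the SPR's
# rejects. All names here are invented for illustration.

def patients_to_treat(patients, spr_flags, expert_concern, spare_capacity):
    """spr_flags: patient -> bool, the SPR's yes/no verdict.
    expert_concern: patient -> number, higher = more worrying to the expert.
    """
    treat = [p for p in patients if spr_flags(p)]
    rejects = [p for p in patients if not spr_flags(p)]
    # The expert can only ADD patients, never remove any, so everyone the
    # SPR would have treated is still treated: false negatives cannot rise.
    promoted = sorted(rejects, key=expert_concern, reverse=True)[:spare_capacity]
    return treat + promoted
```

The guarantee is structural: whatever the expert’s judgement is worth, the treated set is always a superset of the SPR’s, so the worst case is that the extra slots are wasted.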

Edit: A friend points out that SPRs are also used as a scoring mechanism: Rather than having them give a binary yes/no, you use them to rank the candidates and fill up to capacity. I think the point about asymmetric errors still holds, but the above formalism certainly doesn’t. I’ll have to think about it.
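For what it’s worth, that variant might look something like this sketch (again, the names are invented): the score replaces the yes/no verdict, and capacity does the thresholding.

```python
# Sketch of the scoring variant: rank all patients by the SPR's risk
# score and admit the top k, where k is the available capacity.

def patients_to_treat_by_score(patients, spr_score, capacity):
    """spr_score: patient -> number, higher = higher estimated risk."""
    return sorted(patients, key=spr_score, reverse=True)[:capacity]
```

Note that here there is no separate reject pool with spare capacity for the expert to draw from, which is why the superset argument above no longer applies as stated.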
