I’m in the process of reading the Alex Benedict series by Jack McDevitt. It’s quite good. Not amazing, but quite good. Our protagonist is an antiquities dealer in the far future. The far future is remarkably un-post-singularity, and is indeed quite like the present, only with starships, AIs and human simulation; but setting that aside, it’s an interesting vision of what living in a world with a lot of high-tech history is like. Humanity has, at the point in this series, been space-faring for longer than we’ve currently had written language.
That’s not what this post is about though.
There’s an interesting instance of practical problem solving in the first book. What follows is a moderately spoilery and heavily paraphrased version of what happens (it shares no actual text with the real version, which is much better written and considerably longer; this is a condensed version to illustrate the point I’m making):
Our protagonists, Alex and Chase, are on a derelict ship they have found after their own ship was destroyed. It’s old, but broadly operational.
Chase: Enemy cruiser bearing down on us. Yikes.
Alex: OK. Let’s head out of the gravity well and prepare to go to warp.
Chase: Um. No.
Alex: ?
Chase: This ship doesn’t have FTL. Do you remember the great big hole where the warp engines on this ship are supposed to be?
Alex: Yeah, but maybe they solved that.
Chase: ???
Alex: This ship had to have got here somehow. The computer is claiming the FTL is working. Therefore we should give some credence to the idea that this ship has magical FTL we don’t understand.
Chase: That is the most ridiculous thing I’ve ever heard.
Alex: Look. We’re completely hosed if it’s not true. These people will never let us live. Therefore if we don’t have FTL we’re dead. Therefore there’s no point worrying about that possibility, and we must proceed as if our ship has magical FTL the likes of which we know not.
Unsurprisingly, this being a novel, Alex and Chase get out of the gravity well after a dramatic scene or two and their ship does indeed turn out to have magical FTL powers.
Real life, sadly, is not a novel. In a more realistic scenario it is entirely likely that they would get out of the gravity well and the computer would say “Oh, yeah. Sorry, my bad. Software glitch. It’s totes not possible to go to warp because you don’t have any frickin’ engines”. At which point the enemy ship would fire the photon torpedoes (note: the actual in-book terminology is way less Star Trek than I’m making it out to be) and reduce Alex and Chase to a thin smear of very annoyed hyper-charged particles.
But that’s OK.
Well, I mean, it’s not OK for Alex and Chase. They’re a bit dead.
But it’s OK in the sense that it does not in any way invalidate Alex’s reasoning strategy.
You see, it may look superficially like Alex is trying to answer the question “Do I have FTL capability?”. He has formed a hypothesis (“I have magical FTL capability despite my lack of warp engines”) and he is performing an experiment to test that hypothesis (“I will go out to warp range and press the big red button”).
This is not what Alex is trying to do. Alex is in fact trying to survive.
He does not have any convincing evidence that he has a warp drive. He has strong evidence that he does not, in fact, have one. But if he doesn’t, then there’s literally nothing he can do about it. There is no feasible solution to the survival problem in that case, so he doesn’t worry about it. He proceeds as if the thing he needs to survive is true, and if not, well, he’s dead anyway. Such is life.
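If you want to spell the structure out, you can treat Alex’s situation as a tiny decision problem. Here’s a minimal sketch in Python; the probabilities and payoffs are entirely made up, and only the shape of the calculation matters:

```python
# A toy decision matrix for Alex's situation. All numbers are invented
# for illustration; only the structure of the argument matters.

# Chance of survival given (action, state of the world).
survival = {
    ("run for warp", "FTL works"): 0.9,  # escape succeeds
    ("run for warp", "no FTL"):    0.0,  # caught in the open and killed
    ("stay put",     "FTL works"): 0.0,  # "these people will never let us live"
    ("stay put",     "no FTL"):    0.0,
}

p_ftl = 0.05  # even a very pessimistic estimate doesn't change the answer


def expected_survival(action):
    return (p_ftl * survival[(action, "FTL works")]
            + (1 - p_ftl) * survival[(action, "no FTL")])


for action in ("run for warp", "stay put"):
    print(f"{action}: {expected_survival(action):.3f}")

# run for warp: 0.045
# stay put:     0.000
# Running for warp (weakly) dominates for any p_ftl > 0, because every
# branch in which the FTL doesn't work ends in death regardless.
```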
This is an extremely powerful reasoning strategy.
It holds true in cases other than certain death as well: In general, when considering what hypotheses to test and what possibilities to worry about, you should consider not just “How likely is this to be true?” but also “How likely is finding out this is true to be helpful?”
This is one of the reasons for Occam’s razor.
Occam’s razor is less a fundamental truth about the universe (there are some arguments in favour of it as such, but I’ve not found them terribly convincing) and more a pragmatic tool for problem solving.
Occam’s razor states that given two theories explaining the data equally well, you should prefer the simpler one.
Why? Well, because the simpler one is far more useful if it’s true. A simple theory is easier to make predictions with. A complex theory might be true, but if it is, our life is quite difficult and the theory is not very useful to us, so we should first rule out the more helpful possibilities.
There needs to be a balancing act here, of course. If I have two options, one fairly unhelpful but likely and one extremely helpful but pretty unlikely, I should probably spend more time worrying about the former than the latter.
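One crude way to make that balancing act concrete is to rank possibilities by the product of how likely they are and how helpful confirming them would be. Another toy Python sketch, again with invented numbers:

```python
# Rank hypotheses by expected usefulness: P(true) * value of knowing.
# Both columns are invented numbers, purely for illustration.
hypotheses = [
    ("fairly unhelpful but likely",    0.80,  1.0),
    ("extremely helpful but unlikely", 0.05, 10.0),
]

ranked = sorted(hypotheses, key=lambda h: h[1] * h[2], reverse=True)
for name, p, value in ranked:
    print(f"{name}: expected usefulness = {p * value:.2f}")

# fairly unhelpful but likely: expected usefulness = 0.80
# extremely helpful but unlikely: expected usefulness = 0.50
# With these numbers the likely option wins, as in the paragraph above;
# different numbers could easily flip the ordering.
```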
If I had to boil this down into a general maxim, I think it would be the following: Take actions which maximize your chance of success, not ones which maximize your chance of finding out the truth.
Sometimes these two paths coincide. Perhaps even often. Sometimes, though, you can’t handle the truth, and you’re probably better off not worrying about those cases.
> Occam’s razor states that given two theories explaining the data equally well, you should prefer the simpler one.
> Why? Well, because the simpler one is far more useful if it’s true.
I disagree. I think the reason for using Occam’s razor is not that the simpler theory would be more useful if it were correct. If that were the case, then we would need to consider techniques like keeping both theories around and using both (or using whichever can be analyzed for a given scenario… and that’s something I DO see in practice). Instead, I think the reason for Occam’s razor is that historical experience has shown that simpler theories (with equal predictive value) are more likely to be correct.
Honestly? I don’t believe that for a minute. You’re looking under the street light because it’s dark where you lost your keys.
In general, reality is complicated, and I expect most true explanations to be correspondingly so. The reason most of our solutions are the simpler ones is that we probably weren’t able to solve the complicated ones, so we’ve got a massive case of selection bias going on.