How to hard “boil” an egg, redux

About a year ago I wrote a post on how to hard boil an egg. It involves not actually boiling them, plus some other more complicated details.

I’ve discovered a new, refined method. It’s both simpler and more effective. It involves slightly fewer steps than the previous seven-step process:

  1. Stick them in a steamer

I was steaming some things anyway and wanted to make some eggs, so it occurred to me to wonder whether steaming would work. Some googling led me to this post, which I promptly proceeded to not really pay much attention to now that I knew the concept worked. I literally just put the eggs in the steamer basket, waited about 10-15 minutes, took them out again, and they were perfect.

This entry was posted in Food.

A refinement to the auction game

I did some thinking about the auction game I previously designed and I decided on some refinements that I think would make it a better game.

Specifically, ditching the “upkeep” and “cost” numbers.

Instead when you draw a tile it immediately becomes your own (you can of course immediately auction it as one of your moves if you want). You may place it anywhere on your board as usual. This eliminates the cost as a variable that needs balancing, and also means that being behind isn’t a crippling disadvantage.

Upkeep is now played as follows: Your starting tile requires no upkeep. Everything else requires upkeep equal to its distance from the starting tile. Note that removing tiles may change the cost, so be careful!

As a side note I’ve been thinking about the physical mechanism of playing. In particular:

When you place a tile up for auction, put it in the middle of the table. If you win with your reserve, then when you reclaim it you may place it anywhere you like. So an auction action includes a move action for free.

When collecting income: Put $1 on each hex. Now for each hex put an additional $1 on it for each matched colour on its border. Now collect all that money.
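To make the counting concrete, here is a minimal Python sketch of the income step. The representation is my own assumption for illustration (tiles stored by axial coordinate, each with six edge colours, where edge i touches edge (i + 3) mod 6 of the neighbour in direction i); it is not part of the game’s description.

```python
# Axial hex directions, ordered so that direction i is opposite
# direction (i + 3) % 6. This layout is an assumption for the example.
HEX_DIRECTIONS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def income(tiles):
    """Total income: $1 per hex, plus $1 per edge whose colour matches
    the touching edge of the neighbouring hex.

    `tiles` maps axial coordinates to a list of six edge colours,
    indexed by direction.
    """
    total = 0
    for (q, r), edges in tiles.items():
        total += 1  # base $1 for the hex itself
        for i, (dq, dr) in enumerate(HEX_DIRECTIONS):
            neighbour = tiles.get((q + dq, r + dr))
            if neighbour is not None and neighbour[(i + 3) % 6] == edges[i]:
                total += 1  # matched colour across this border
    return total

# Two adjacent tiles whose shared border is red on both sides:
tiles = {
    (0, 0): ["red", "blue", "blue", "blue", "blue", "blue"],
    (1, 0): ["blue", "blue", "blue", "red", "blue", "blue"],
}
print(income(tiles))  # 4: $1 per hex, plus $1 to each hex for the matched border
```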

When paying upkeep: Put money on hexes equal to their upkeep cost. You may not put money on a hex unless there is already money on some hex connecting it to the origin through a trail of moneyed hexes. When you put money on a hex, it must be one higher than the money on the cheapest connecting hex. Note that if you then backfill you should double check that you’ve not missed taking money off any hexes (a newly filled hex may give hexes further out a cheaper connection). Once this process is complete, take all the money off the hexes and give it to the bank.
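The upkeep rule and the payment procedure above both amount to labelling each hex with its distance from the starting tile. Here is a minimal sketch of that labelling as a breadth-first search, again using an assumed axial-coordinate representation rather than anything from the game itself:

```python
from collections import deque

# Same assumed axial-coordinate representation as above.
HEX_DIRECTIONS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def upkeep_costs(hexes, origin=(0, 0)):
    """Label every placed hex with its distance from the starting tile.

    `hexes` is the set of axial coordinates of the tiles on your board,
    including the origin. The result matches the money-placement
    procedure: the origin costs 0, and every other hex costs one more
    than its cheapest already-labelled neighbour.
    """
    costs = {origin: 0}
    queue = deque([origin])
    while queue:
        current = queue.popleft()
        for dq, dr in HEX_DIRECTIONS:
            neighbour = (current[0] + dq, current[1] + dr)
            if neighbour in hexes and neighbour not in costs:
                costs[neighbour] = costs[current] + 1
                queue.append(neighbour)
    return costs

board = {(0, 0), (1, 0), (2, 0), (0, 1)}
print(upkeep_costs(board))  # {(0, 0): 0, (1, 0): 1, (0, 1): 1, (2, 0): 2}
```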

And, in related news, I have a set of tiles for this game being delivered! I decided this evening to have a play with The Game Crafter and put together a set of hexes for it. I wrote a small Python script using PIL to generate a set of hexes, uploaded them, wrote another small Python script using requests to auto-proof them (I checked a few, but I had no interest in manually clicking proof on 80 identical hexagons), and sorted. I’ve placed an order and they should be with me in a few weeks.
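For the curious, the generation script was roughly this shape. This is a hedged reconstruction rather than the actual script: the tile size, colours, edge widths and file names below are made up for illustration.

```python
import math
from PIL import Image, ImageDraw

# Hypothetical tile parameters; the real script's sizes and colours differ.
SIZE = 600  # image is SIZE x SIZE pixels
EDGE_COLOURS = ["#cc3333", "#33cc33", "#3333cc", "#cccc33", "#33cccc", "#cc33cc"]

def hexagon_points(cx, cy, radius):
    """Vertices of a regular hexagon centred on (cx, cy)."""
    return [
        (cx + radius * math.cos(math.radians(60 * i - 30)),
         cy + radius * math.sin(math.radians(60 * i - 30)))
        for i in range(6)
    ]

def draw_hex_tile(edge_colours, filename):
    """Draw a hex tile with one coloured band per edge and save it."""
    image = Image.new("RGB", (SIZE, SIZE), "white")
    draw = ImageDraw.Draw(image)
    points = hexagon_points(SIZE / 2, SIZE / 2, SIZE * 0.45)
    draw.polygon(points, fill="#eeeeee", outline="black")
    # Colour each edge of the hexagon.
    for i, colour in enumerate(edge_colours):
        start = points[i]
        end = points[(i + 1) % 6]
        draw.line([start, end], fill=colour, width=20)
    image.save(filename)

if __name__ == "__main__":
    for n in range(1, 81):
        # Rotate through the palette so the tiles aren't all identical.
        colours = [EDGE_COLOURS[(n + i) % 6] for i in range(6)]
        draw_hex_tile(colours, "hex%02d.png" % n)
```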

For your delectation, have some of the hexagons I generated. A work of art they ain’t, but I’m still relatively pleased with them.

[Images: hex01, hex19, starting_hex1, starting_hex2]

This entry was posted in Uncategorized.

But enough about me…

Man, I sure do write a lot on here, don’t I? I practically hog the microphone. Selfish, that’s what I call it. It’s like I think this blog is all about me. Well, if this blog is all about me why isn’t my name on it? Answer me that?

Err.

Right.

Anyway, enough about me. Tell me about you and some of the stuff you’ve been doing recently. Open thread. Please feel free to use it for shameless self promotion of projects, blog posts, etc.

This entry was posted in Uncategorized.

Problem solving when you can’t handle the truth

I’m in the process of reading the Alex Benedict series by Jack McDevitt. It’s quite good. Not amazing, but quite good. Our protagonist is an antiquities dealer in the far future. The far future is remarkably un-post-singularity, and is indeed quite like the present only with starships, AIs and human simulation, but setting that aside it’s an interesting vision of what living in a world with a lot of high-tech history is like. At the point of this series, humanity has been space-faring for longer than we’ve currently had written language.

That’s not what this post is about though.

There’s an interesting instance of practical problem solving in the first book. What follows is a moderately spoilery and highly paraphrased version of what happens (it literally has no writing in common with the actual version, which is much better written and longer. This is a condensed version to illustrate the point I’m making):

Our protagonists, Alex and Chase, are on a derelict ship they have found after their own was destroyed. It’s old but broadly operational.

Chase: Enemy cruiser bearing down on us. Yikes.
Alex: OK. Let’s head out of the gravity well and prepare to go to warp.
Chase: Um. No.
Alex: ?
Chase: This ship doesn’t have FTL. Do you remember the great big hole where the warp engines on this ship are supposed to be?
Alex: Yeah, but maybe they solved that.
Chase: ???
Alex: This ship had to have got here somehow. The computer is claiming the FTL is working. Therefore we should give some credence to the idea that this ship has magical FTL we don’t understand.
Chase: That is the most ridiculous thing I’ve ever heard.
Alex: Look. We’re completely hosed if it’s not true. These people will never let us live. Therefore if we don’t have FTL we’re dead. Therefore there’s no point worrying about that possibility, and we must proceed as if our ship has magical FTL the likes of which we know not.

Unsurprisingly, this being a novel, Alex and Chase get out of the gravity well after a dramatic scene or two and their ship does indeed turn out to have magical FTL powers.

Real life, sadly, is not a novel. In a more realistic scenario it is entirely likely that they would get out of the gravity well, the computer would say “Oh, yeah. Sorry, my bad. Software glitch. It’s totes not possible to go to warp because you don’t have any frickin engines”. At which point the enemy ship would fire the photon torpedoes (note: Actual in book terminology is way less Star Trek than I’m making it out to be) and reduce Alex and Chase to a thin smear of very annoyed hyper-charged particles.

But that’s OK.

Well, I mean, it’s not OK for Alex and Chase. They’re a bit dead.

But it’s OK in the sense that it does not in any way invalidate Alex’s reasoning strategy.

You see, it may look superficially like Alex is trying to answer the question “Do I have FTL capability?”. He has formed a hypothesis (“I have magical FTL capability despite my lack of warp engines”) and he is performing an experiment to test that hypothesis (“I will go out to warp range and press the big red button”).

This is not what Alex is trying to do. Alex is in fact trying to survive.

He does not have any convincing evidence that he has a warp drive. He has strong convincing evidence that he does not in fact. But if he doesn’t then there’s literally nothing he can do about it. There is no feasible solution to the survival problem in that case, so he doesn’t worry about it. He proceeds as if the thing he needs to survive is true and if not, well, he’s dead anyway. Such is life.

This is an extremely powerful reasoning strategy.

It holds true in cases other than certain death as well: In general, when considering what hypotheses to test and what possibilities to worry about, you should consider not just “How likely is this to be true?” but also “How likely is finding out this is true to be helpful?”

This is one of the reasons for Occam’s razor.

Occam’s razor is less a fundamental truth about the universe (there are some arguments in favour of it as such, but I’ve not found them terribly convincing) and more a pragmatic tool for problem solving.

Occam’s razor states that given two theories explaining the data equally well, you should prefer the simpler one.

Why? Well, because the simpler one is far more useful if it’s true. A simple theory is easier to make predictions with. A complex theory might be true, but if it is, our life is quite difficult and that’s not very useful, so we should first rule out the more helpful possibilities.

There needs to be a balancing act here of course. If I have two options, one fairly unhelpful but likely and one extremely helpful but pretty unlikely, I should probably spend more time worrying about the former rather than the latter.
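To make the balancing act concrete, here is a toy sketch with made-up numbers (the probabilities and usefulness scores are mine, purely for illustration): weight each possibility by how likely it is and by how much finding it out would help, and spend your effort on the larger product.

```python
# Toy illustration of weighing likelihood against usefulness.
# The probabilities and usefulness scores are made up for the example.
options = {
    "likely but only mildly helpful": {"probability": 0.6, "usefulness": 2},
    "unlikely but extremely helpful": {"probability": 0.05, "usefulness": 10},
}

for name, option in options.items():
    # Expected payoff of spending effort checking this possibility.
    score = option["probability"] * option["usefulness"]
    print("%s: expected payoff %.2f" % (name, score))

# Here the likely-but-unhelpful option wins (1.20 vs 0.50), matching the
# intuition that sheer helpfulness doesn't trump a big gap in probability.
```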

If I had to boil this down into a general maxim, I think it would be the following: Take actions which maximize your chance of success, not ones which maximize your chance of finding out the truth.

Sometimes these two paths coincide. Perhaps even often. Sometimes though, you can’t handle the truth, and you’re probably better off not worrying about those cases.

This entry was posted in Uncategorized.

Bayesian reasoners shouldn’t believe logical arguments

Advance warning: If you’re familiar with Bayesian reasoning it is unlikely this post contains anything new unless I’m making novel mistakes, which is totally possible.

Second advance warning: If taken too seriously, this article may be hazardous to your health.

Let me present you with a hypothetical, abstracted, argument:

Me: C
You: Not C!
Me: B?
You: *shrug*
Me: A?
You: Yes
Me: A implies B?
You: Yes?
Me: B implies C?
You: … Yes
Me: Therefore C?
You: C. :-(

Does this seem like a likely scenario to you? We have had a disagreement. I have presented a logical argument from shared premises for my side of the disagreement. You have accepted that argument and changed your position.

Yeah, it sounds pretty implausible to me too. A more likely response from you at the end is:

You: Not C!

I will of course find this highly irrational and be irritated by your response.

…unless you’re a Bayesian reasoner, in which case you are behaving entirely correctly, and I’ll give you a free pass.

Wait, what?

Let’s start with a simplified example with only two propositions.

Suppose you have propositions \(A\) and \(B\), which you believe with probabilities \(a\) and \(b\) respectively. You currently believe these two to be independent, so \(P(A \wedge B) = ab\).

Now, suppose I come along and convince you that \(A \implies B\) is true (I’ll call this proposition \(I\)). What is your new probability for \(B\)?

Well, by Bayes rule, \(P(B|I) = \frac{P(B \wedge I)}{P(I)} = P(B) \frac{P(I|B)}{P(I)}\)

\(I = A \implies B = \neg\left( A \wedge \neg B\right)\). So \(P(I) = 1 - a(1 - b)\).

\(P(I|B) = 1\) because everything implies a true proposition. Therefore \(P(B|I) = \frac{b}{1 - a(1 - b)}\).

This is a slightly gross formula. Note, however, that it does have the obviously desirable property that your belief in \(B\) goes up, or at least stays the same. Let’s quickly check it with some numbers.

\(a\) \(b\) \(P(B | I)\)
0.100 0.100 0.110
0.100 0.500 0.526
0.100 0.900 0.909
0.500 0.100 0.182
0.500 0.500 0.667
0.500 0.900 0.947
0.900 0.100 0.526
0.900 0.500 0.909
0.900 0.900 0.989
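If you want to check the table yourself, a throwaway script using the formula above reproduces it:

```python
def posterior_b(a, b):
    """P(B | A implies B) for independent priors P(A) = a, P(B) = b."""
    return b / (1 - a * (1 - b))

for a in (0.1, 0.5, 0.9):
    for b in (0.1, 0.5, 0.9):
        print("%.3f  %.3f  %.3f" % (a, b, posterior_b(a, b)))
```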

These look pretty plausible. Our beliefs do not seem to change to an unrealistic degree, but we have provided significant evidence in favour of \(B\).

But as a good Bayesian reasoner, you shouldn’t assign probabilities 0 or 1 to things. Certainty is poisonous to good probability updates. So when I came along and convinced you that \(A \implies B\), you really shouldn’t have believed me completely. Instead you should have assigned some probability \(r\) to it. So what happens now?

Well, we know what the probability of \(B\) given \(I\) is, but what is the probability given \(\neg I\)? Well, \(\neg I = \neg (A \implies B) = A \wedge \neg B\), so \(P(B|\neg I) = 0\). The implication can only be false if \(B\) is false (because everything implies a true statement).

This means that your posterior probability for \(B\) should be \(r P(B|I)\). So \(r\) is essentially a factor slowing your update process.

Note that because your posterior belief in \(B\) is \(b \frac{r}{P(I)}\), as long as my claim that \(A \implies B\) is at least as convincing as your prior belief in it (that is, \(r \geq P(I)\)), my argument will increase your belief in \(B\).

Now, let’s suppose that you are in fact entirely convinced beforehand that \(A\) and that \(\neg B\), and my argument entirely convinces you that \(A \implies B\).

Of course, we don’t believe in certainty. Things you are entirely convinced of may prove to be false. Suppose now that in the past you have noticed that when you’re entirely convinced of something, you’re right with about probability \(p\). Let’s be over-optimistic and say that \(p\) is somewhere in the 0.9 range.

What should your posterior probability for \(B\) now be? We have \(b = 1 - p\) and \(a = r = p\). Then your posterior probability for \(B\) is \(r P(B | I) = p \frac{1 - p}{1 - p(1 - (1 - p))} = p \frac{1 - p}{1 - p^2} = \frac{p}{p+1} = 1 - \frac{1}{p+1}\).

You know what the interesting thing about this is? The interesting thing is that it’s always less than half. A perfectly convincing argument that a thing you completely believe in implies a thing you completely disbelieve in should never do more than create a state of complete uncertainty in your mind.

It turns out that reasonable degrees of certainty get pretty close to that too. If you’re right about things you’re certain about with probability 0.9 then your posterior probability for \(B\) should be 0.47. If you’re only right with probability 0.7 then it should be \(0.41\). Of course, if you’re only right about that often then \(0.41\) isn’t all that far from your threshold for certainty in the negative result.
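Again, this is easy to check with a throwaway script; the closed form \(\frac{p}{p+1}\) and the direct computation agree:

```python
def posterior_when_certain(p):
    """Posterior for B when you were p-reliably certain of A and of not-B,
    and a p-convincing argument says that A implies B."""
    a, b, r = p, 1 - p, p
    return r * b / (1 - a * (1 - b))  # r * P(B | I)

for p in (0.9, 0.7):
    print(p, round(posterior_when_certain(p), 2), round(p / (p + 1), 2))
# 0.9 -> 0.47 (both ways), 0.7 -> 0.41 (both ways)
```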

In conclusion: If you believe A and not B, and I convince you that A implies B, you should not now go away and believe B. Instead you should be confused, with a bias towards still assuming not B, until you’ve resolved this.

Now, let’s go one step further to our original example. We are instead arguing about \(C\), and my argument proceeds via an intermediary \(B\). Your prior is that \(A\), \(B\) and \(C\) are all independent. You are certain that \(A\), certain that \(\neg C\), and have no opinion on \(B\) (i.e. you believe it with probability \(\frac{1}{2}\)).

I now provide you with a p-convincing argument that \(A \implies B\). What is your posterior probability for \(B\)?

Well, plugging it into our previous formula we get \(b' = p \frac{b}{1 - p(1 - b)} = \frac{p}{2 - p}\). Again, checking against some numbers, if \(p = 0.9\) then \(b' \approx 0.82\), which seems reasonable.

Suppose now that I provide you p-convincing evidence that \(B \implies C\). What’s your posterior for \(C\)?

Well, again with the previous formula only replacing \(a\) with \(b’\) and \(b\) with \(c\) we have

\[\begin{align*}
c' &= \frac{p c}{1 - b'(1 - c)} \\
&= \frac{p(1-p)}{1 - \frac{p^2}{2 - p}} \\
&= \frac{p(1-p)(2 - p)}{2 - p - p^2}
\end{align*}\]

This isn’t a nice formula, but we can plug numbers in. Suppose your certainties are 0.9. Then your posterior is \(c' \approx 0.34\). You’re no longer certain that \(C\) is false, but you’re still pretty convinced despite the fact that I’ve just presented you with an apparently water-tight argument to the contrary. This result is pretty robust with respect to your degree of certainty, too. As \(p \to 1\), this tends to \(\frac{1}{3}\), and for \(p = \frac{1}{2}\) (i.e. you’re wrong half the time when you’re certain!) we get \(c' = 0.3\).
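And a quick numerical check of the two-step calculation (just a scratch script; p is the reliability of your certainties, as above):

```python
def update_via_implication(a, b, r):
    """Posterior for the consequent, given an r-convincing argument that
    the antecedent (prior a) implies the consequent (prior b)."""
    return r * b / (1 - a * (1 - b))

def two_step_posterior(p):
    # First argument: A implies B, with P(A) = p and no opinion on B.
    b_prime = update_via_implication(a=p, b=0.5, r=p)
    # Second argument: B implies C, with P(C) = 1 - p beforehand.
    return b_prime, update_via_implication(a=b_prime, b=1 - p, r=p)

for p in (0.9, 0.5, 0.99):
    b_prime, c_prime = two_step_posterior(p)
    print(p, round(b_prime, 2), round(c_prime, 2))
# 0.9  -> b' = 0.82, c' = 0.34
# 0.5  -> b' = 0.33, c' = 0.3
# 0.99 -> b' = 0.98, c' = 0.33 (approaching 1/3)
```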

In conclusion: An apparently water-tight logical argument that goes from a single premise you believe in to a conclusion you disbelieve in, via something you have no opinion on, should not substantially update your beliefs, even if it casts some doubt on them.

Of course, if you’re a Bayesian reasoner, this post is an argument that starts from a premise you believe in, goes via something you have no opinion on, and concludes something you likely don’t believe in. Therefore it shouldn’t change your beliefs very much.

This entry was posted in Decision Theory, Numbers are hard.