# Another take on high variance strategies

So I was thinking about my high variance strategies post and I realised that there was a case I hadn’t considered which is kinda important.

Which is this: often you’re not actually interested in how good your best solution is, only in whether it clears some bar. e.g. you don’t care how cured cancer is as long as it’s pretty cured, you don’t care how many votes you got as long as it’s enough to win the election, etc.

So in these circumstances, maximizing the expected value is just not very interesting. What you want to do is maximize $$P(R \geq t)$$, where $$R$$ is the best result anyone achieves and $$t$$ is some threshold. The strategies for this look quite different.

Firstly: if you can ensure that $$\mu > t$$, the optimal strategy is basically to do that and then make the variance as low as you can.

For the case where you can’t do that, the question of which is more useful to raise, the variance or the mean, becomes more complicated.

Let $$F$$ be the cumulative distribution function of your standardised distribution (this can be normal but it doesn’t matter for this). With $$n$$ people each drawing independently, $$P(R \geq t) = 1 - F\left(\frac{t - \mu}{\sigma}\right)^n$$ (one minus the probability that everyone falls short). This is what we want to maximize.
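As a quick sanity check (my own sketch, not part of the original argument): the chance that the best of $$n$$ independent draws clears the bar is $$1 - F(g)^n$$, and for normal draws that’s easy to compute with `math.erf`. The function names here are made up.

```python
# Sketch: P(best of n iid Normal(mu, sigma) draws >= t) = 1 - F(g)^n,
# where g = (t - mu) / sigma and F is the standard normal CDF.
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_best_exceeds(mu: float, sigma: float, t: float, n: int) -> float:
    """Probability that at least one of n draws reaches the threshold t."""
    g = (t - mu) / sigma
    return 1.0 - normal_cdf(g) ** n

# With mu=0, sigma=1, t=2: a single draw clears the bar about 2.3% of
# the time, but the best of ten draws clears it about 21% of the time.
print(p_best_exceeds(0.0, 1.0, 2.0, 1))
print(p_best_exceeds(0.0, 1.0, 2.0, 10))
```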

But really what we’re interested in for this question is whether mean or variance is more useful, so we’ll only look at local maximization. Because this probability is monotonically decreasing in $$g(\mu, \sigma) = \frac{t - \mu}{\sigma}$$, we can just minimize that.

$$\frac{\partial}{\partial \mu} g = -\frac{1}{\sigma}$$

$$\frac{\partial}{\partial \sigma} g = -\frac{t - \mu}{\sigma^2}$$

So what we’re interested in is the region where increasing $$\sigma$$ will decrease $$g$$ faster than increasing $$\mu$$ will. i.e. we want the region where

$$- \frac{t - \mu}{\sigma^2} < -\frac{1}{\sigma}$$

or equivalently

$$t - \mu > \sigma$$

i.e. $$t > \mu + \sigma$$
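Here’s a rough numerical check of that crossover (my own sketch, not from the argument above): with $$\mu = 0$$ and $$\sigma = 1$$ the crossover sits at $$t = \mu + \sigma = 1$$, so an equal-sized nudge to the mean should win below it and a nudge to the standard deviation should win above it. The names here are invented.

```python
# Sketch: finite-difference check that, for a single normal draw, the
# mean/variance crossover sits at t = mu + sigma.
from math import erf, sqrt

def p_exceed(mu: float, sigma: float, t: float) -> float:
    """P(one Normal(mu, sigma) draw >= t) = 1 - F((t - mu) / sigma)."""
    g = (t - mu) / sigma
    return 1.0 - 0.5 * (1.0 + erf(g / sqrt(2.0)))

eps = 1e-6  # size of the nudge applied to mu or sigma
for t in (0.5, 1.5):  # one threshold each side of the crossover at t = 1
    gain_mu = p_exceed(eps, 1.0, t) - p_exceed(0.0, 1.0, t)
    gain_sigma = p_exceed(0.0, 1.0 + eps, t) - p_exceed(0.0, 1.0, t)
    winner = "mean" if gain_mu > gain_sigma else "variance"
    print(f"t={t}: nudging the {winner} helps more")
```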

That’s a surprisingly neat result. So basically the conclusion is: if you’re pretty close to achieving your bound (within one standard deviation of it), you’re better off increasing the mean to close the gap. If on the other hand you’re really far away, you’re much better off raising the variance and hoping that someone gets lucky.

Interestingly unlike maximizing the expected value this doesn’t depend at all on the number of people. More people increases your chance of someone getting lucky and achieving the goal, but it doesn’t change how you maximize that chance.
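A rough check of that claim (my sketch again, with made-up names): running the same finite-difference comparison on the full $$1 - F(g)^n$$ picks the same winner on each side of $$t = \mu + \sigma$$ for every $$n$$, because both partial derivatives share the common factor $$n F(g)^{n-1} f(g)$$, which cancels in the comparison.

```python
# Sketch: the mean-vs-variance crossover doesn't move as n grows.
from math import erf, sqrt

def p_best(mu: float, sigma: float, t: float, n: int) -> float:
    """P(best of n iid Normal(mu, sigma) draws >= t) = 1 - F(g)^n."""
    g = (t - mu) / sigma
    return 1.0 - (0.5 * (1.0 + erf(g / sqrt(2.0)))) ** n

eps = 1e-6
for n in (1, 3, 10):
    for t in (0.5, 1.5):  # with mu=0, sigma=1, the crossover is t = 1
        gain_mu = p_best(eps, 1.0, t, n) - p_best(0.0, 1.0, t, n)
        gain_sigma = p_best(0.0, 1.0 + eps, t, n) - p_best(0.0, 1.0, t, n)
        winner = "mean" if gain_mu > gain_sigma else "variance"
        print(f"n={n}, t={t}: {winner} wins")
```

For every $$n$$ this prints "mean wins" at $$t = 0.5$$ and "variance wins" at $$t = 1.5$$.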

This entry was posted in Numbers are hard.