Category Archives: voting

Stating the probabilistic Gibbard–Satterthwaite Theorem

So as mentioned in an emergency edit, the version of the Gibbard–Satterthwaite Theorem for non-deterministic voting systems that I mentioned in my last post is wrong. The set of voting systems permitted by it is in fact notably larger than I claimed (I got my version of it from this rangevoting.org post. Ah, trusting third party reporting of science by people with an agenda).

So now I’m reading the original paper. I’m struggling a bunch with the details, but I at least understand the result and think I can see my way towards constructing an alternate proof that makes more sense to me.

Here is the true version of the theorem as near as I can tell:

Consider a voting system under the following setup:

  1. There are N candidates
  2. Each of M voters must submit a ballot consisting of a full ordering of the N candidates
  3. The votes are aggregated in some way to produce a lottery over the N candidates

Further, assume that voters are von Neumann–Morgenstern rational agents with full preferences over lotteries arising from expected values of utility functions, and that they have strict preferences amongst the candidates (i.e. there are no two candidates to which they assign the same utility).

A voter’s ballot is honest if whenever they rank candidate x better than candidate y their utility function \(U\) has \(U(x) > U(y)\).

A voting system is strategy-proof if there are never circumstances where a single voter can change their vote from an honest one to a dishonest one and get a lottery they strictly prefer in expected utility.

The theorem is that any strategy-proof voting system is a probabilistic mixture of the following types of system (that is, we choose from any number of items of this type with fixed probabilities independently of the votes cast):

  1. Fixed, meaning it always elects a specific candidate
  2. Unilateral, meaning there is a single voter such that any other voter can change their ballot without affecting the outcome
  3. Duple, in the sense that there are only two candidates it can elect (fixed is a special case of this where it can only actually elect one)

Three examples of this are sortition (an equal mixture of each of the possible fixed systems), random ballot (an equal mixture of the unilateral systems which elect each voter’s top choice) and random pair (an equal mixture of majority votes between each of the possible pairs of candidates).

But these aren’t the only examples. For example, a system which randomly chooses between a fixed voter’s top k choices is unilateral and strategy-proof. So is a system which chooses between two fixed candidates, picking each candidate with probability equal to the fraction of people who prefer that candidate. There is a lot of flexibility in how you construct the unilateral and duple systems which is ignored by random ballot and random pair.
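To make the three building blocks concrete, here is a minimal Python sketch (the names and the ballot representation are my own illustration, not from the paper), treating each system as a function from ballots to a lottery over candidates:

```python
from fractions import Fraction

# A ballot is a full ranking of candidates, best first. A system maps a
# list of ballots to a lottery: {candidate: probability}.

def fixed(candidate):
    # Always elects one specific candidate.
    return lambda ballots: {candidate: Fraction(1)}

def unilateral_top_k(voter, k):
    # Depends only on one voter's ballot: a uniform lottery over that
    # voter's top k choices (k = 1 recovers random ballot's per-voter piece).
    def system(ballots):
        return {c: Fraction(1, k) for c in ballots[voter][:k]}
    return system

def duple_proportional(a, b):
    # Can only elect a or b: each wins with probability equal to the
    # fraction of voters ranking them above the other.
    def system(ballots):
        prefer_a = sum(1 for r in ballots if r.index(a) < r.index(b))
        p = Fraction(prefer_a, len(ballots))
        return {a: p, b: 1 - p}
    return system

ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(duple_proportional("A", "B")(ballots))  # A wins with probability 2/3
```

The general form in the theorem is then any fixed-probability mixture of lotteries produced this way, with the mixing weights chosen independently of the votes.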

How might we prove the theorem? Well… I haven’t quite understood the proof yet, truth be told. I’ve got bits of it lying disassembled in pieces hanging around on the shop floor of my brain, and I’m getting there, but so far I don’t have anything coherent enough to blog about. Once I do I’m hoping to try to slim it down and blog about it. We’ll see. For now though, I just thought it was important to have a correct statement of the theorem online.

 

This entry was posted in Numbers are hard, voting.

The other randomized voting system

The title of this post is of course misleading. There are myriad randomized voting systems. But there are two (well, sort of three) which have a specific special property: that of being immune to tactical voting. In “Manipulation of Schemes That Mix Voting with Chance” in Econometrica (which I’ve never actually read due to the great academic firewall), Allan F. Gibbard demonstrated that if you have a ranked voting system such that, when there are at least three candidates running:

  1. There is no dictator
  2. Any candidate who receives a unanimous vote wins
  3. There is no incentive for a voter to lie about their preferences

then it is one of the following:

  1. Random ballot. i.e. pick a voter at random, use their first choice
  2. Random pair. Pick two candidates at random, use the one that more people have ranked first
  3. A mixture of the two in the sense that we randomly choose between them with some probability

EMERGENCY EDIT: this version of the theorem, which I originally got from rangevoting.org’s reporting of it, turns out to be a complete lie and a much richer class of systems is permitted. I owe you one correct accounting of the actual theorem.

I’ve written quite a lot about random ballot. You might have noticed. I’ve never written about random pair as far as I can remember, despite the fact that I think it’s quite interesting.

Why? Well because it looked like it was impossible to make work in practice. It’s so easy to game – people can’t feasibly rank a large number of candidates, so you have to make do with a small number of candidates, and then it’s easy to win by stuffing the ballots with people you agree with so that people get a choice between two options which you both like.

I realised in the shower this morning (I say that a lot. The shower is where I do some of my best thinking) that this reasoning is in fact completely false. There is a practical voting system which is in some sense equivalent to random pair (the way it’s run may distort the results, but if people truly have a transitive ordering amongst candidates decided in advance then they have no incentive to vote otherwise in this system) and you can easily run it on as many candidates as you want.

In particular you can run it on a sortition. I suspect for many cases where a sortition is desirable this may be a better system. It will tend to shift things towards the prevailing biases of the country (e.g. it’s possible that it will be more gender biased than a sortition), but it may also bias more heavily towards competence.

How?

Well the key realisation is that the only preferences that matter are between the candidates who are randomly chosen. So you don’t need to ask about the other preferences.

This is what you do: rather than rank then choose, you choose then vote. You pick two candidates at random from the eligible populace, then you do a run-off vote between them in which people simply select which candidate they prefer and the one with the majority of votes wins. If people vote in line with their transitive preferences, this is equivalent to random pair.
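As a minimal sketch of the choose-then-vote procedure (candidate names and the tie-breaking rule are my own illustration):

```python
import random

def random_pair_election(candidates, rankings, rng):
    # Choose then vote: draw two candidates at random, then hold a
    # majority run-off between just those two. `rankings` maps each
    # voter to a full transitive preference order, best first.
    a, b = rng.sample(candidates, 2)
    votes_a = sum(1 for r in rankings.values() if r.index(a) < r.index(b))
    votes_b = len(rankings) - votes_a
    return a if votes_a > votes_b else b  # ties broken towards b, arbitrarily

candidates = ["X", "Y", "Z"]
# Five voters who all happen to share the transitive order X > Y > Z.
rankings = {v: ["X", "Y", "Z"] for v in range(5)}
winners = {random_pair_election(candidates, rankings, random.Random(s))
           for s in range(50)}
# X wins any pair containing X, and Y beats Z, so Z can never be elected.
```

Note that voters are only ever asked the one pairwise question that matters, which is exactly why the full rankings never need to be collected.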

In order to do this in a way that doesn’t bias towards recognisability, here’s what you do in practice: A year before the election you select the two candidates (people have the opportunity to refuse, in which case you select a new one). You now pay each of them a decent salary for the next year and give them a staff of campaign advisors. They now have the next year to campaign for why they should be elected MP. At the end of that year you do the vote.

What are the characteristics of this? I don’t know. It’s obviously vastly more unstable than random ballot – the chances of the same MP being elected twice in a row are essentially zero. This is a downside. On the other hand, you lose some of the privilege bias where recognition leads to votes.

A good solution might be to adopt a mixture approach – you can either do this with a coin-flip for selecting which one you use or you can do this by electing two MPs per constituency, maybe on alternating cycles.

Potentially you could also use one for the commons and one for the lords. This might be an interesting way to elect the lords if every time a new lord is called for you run this procedure over the whole population.

In all honesty I’m still not sure this is a good idea, but it’s gone from a completely impractical idea to one that we can usefully reason about the consequences of, and that makes it interesting to think about.


Other ways to improve democracy by picking your politicians at random

Obviously I’m a big fan of randomisation, and I think it would be interesting to use it in voting systems. The big manifestation of this is my 80%-serious endorsement of using random ballot for electing the house of commons.

I thought it might be interesting (well, it’s interesting to me, which means you, the reader, get to come along for the ride) to point out two and a half other interesting ways you can use randomization to improve your democracy. I think these are all both less obviously a major improvement and maybe more obviously good/politically tenable than my more perfect democracy proposal.

Sortition for the house of Lords

I really like having the house of Lords as a concept for keeping the commons in check. I’m not convinced it always works very well, but I have to admit my level of attentiveness is not high enough for me to be entirely sure either way.



Towards a more perfect democracy

Note from the future (2019): There are much more mainstream proportional representation systems than this one. I do think this one is a great idea, but it’s a political nonstarter, so don’t get too hung up on it.


A long time ago I wrote A “perfect” voting system. It’s my most popular post ever. Unfortunately it’s really badly written. This is an attempt to write a replacement version of it. I’ve written a lot about the idea since, and this will incorporate some of that, but it’s very much intended to be a stand-alone piece.

Modern British democracy is broken. It’s not a problem unique to us by any means, but it’s a problem we feel acutely. We have an entrenched two party system, with the Liberal Democrats being the only real contender for third party status. Small parties find it incredibly hard to get a foot in the door because the system works in such a way that the amount of representation you get is massively out of line with the amount of support you have. Regardless of whether you believe full proportional representation is a desirable goal, what we have is so far from it that it’s hard to even call it democracy: most people’s votes simply don’t matter because they live in a constituency which is “safe”. In the 2010 general election, the electoral reform society predicted the outcome of 382 safe seats. They got two wrong.

Much of the blame for this has been levelled at the first past the post voting system. This is entirely fair – it really is an awful system. However, equally, a lot of the blame can be levelled at any of the commonly used systems when applied to geographic constituencies: they cause the geographic distribution of your voters to matter intensely.

One way this problem manifests is through gerrymandering: By manipulating the boundaries you can significantly change the number of seats a party gets. Here’s a good explanation of how it works.

It’s important to note that this is true with any of the normal class of voting systems applied to each constituency: If you have a system where in a two party fight the person with the most support always wins (which is true of basically every system commonly proposed. Indeed you might currently be thinking that it’s a property every system should have. Hold on to that thought) you can gerrymander to manipulate the result.

The obvious solution to this is to switch away from a regional constituency system to a  proportional representation based one. This is a perfectly valid solution and one that works for a number of countries.

I’d like to propose a… well not an alternative per se, in that the system I am proposing is also in many ways a proportional system. A different way of achieving proportional representation which preserves the regional constituencies.

Why might we want that?

Well the main reason from my point of view (there are many other reasons, and different people will find different ones important) is to ensure every region of the country has its interests represented: It’s all very well having proportional representation by party, but surely you also want proportional representation by region? Otherwise you can end up with your political interests represented but your local interests neglected.

There are ways to achieve this under PR: Essentially you form larger constituencies, have each be multi-member and elect those members under a PR system. This is an entirely respectable hybrid solution.

It does however end up inheriting the problems of both worlds – the smaller your regions are the more vulnerable you are to gerrymandering, the larger your regions are the less individual areas are represented. There is surely a decent middle ground that trades off optimally between the two, but what that is depends on your priorities, and both will remain at least partial issues.

Fortunately it turns out there is a way we can do better. There is a system with small constituencies in which everyone votes locally but we get global proportional representation.

The system in question has many major advantages. In particular:

  • It has true proportional representation across every axis, not just party. If 40% of votes go to people who support a specific issue, about 40% of candidates elected will support that issue. The same is true of things like race, class, gender, etc. This system might actually successfully reduce the prevalence of old rich white men in parliament (they’d probably remain a majority for some time due to being who people vote for, but it should even out)
  • Despite being a regional method it is completely immune to gerrymandering. The only way to distort the results by manipulating regions is to make some regions larger or smaller than others, which is intrinsically a problem with regional systems and comparatively easy to avoid.
  • There is no incentive for tactical voting. The optimal vote for you to cast is a tick next to the name of the person you most want elected.
  • There are no safe seats. Politicians remain extremely accountable to their electorate, regardless of whether they have a majority. They will always want and need more people to vote for them unless they literally have 100% of the vote (though they’ll probably be feeling pretty comfortable at upwards of 99.9% of the vote I imagine).

This sounds like a pipe dream. There are mathematical theorems (Arrow’s impossibility theorem and the Gibbard–Satterthwaite theorem) which constrain the possibilities for voting systems. How does this supposedly perfect voting system defeat the mathematics?

Well, it does that simply by not being part of the class of voting systems to which they apply. Arrow is very restrictive indeed – it only applies to preferential voting systems. Gibbard-Satterthwaite is less restrictive – it says that any non-dictatorial deterministic system which can elect at least three candidates is subject to tactical voting.

It’s time to come clean and stop teasing: We avoid this by not using a deterministic voting system. The system we use instead is one called Random Ballot. Conceptually it works as follows:

  1. Every voter casts a single vote for their preferred candidate, as under our current system of first past the post
  2. When all the votes are cast, we select a random voter and elect the person they voted for

This is not an implementation strategy you would actually want to use (it’s quite fragile), but fortunately there is a robust equivalent strategy that I will mention later. So if you have practical implementation based objections to this idea, hold on. They’ll be addressed. For now, keep this in mind as your mental model of how the system works.
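The two steps above, as a minimal sketch (this is the fragile conceptual version only):

```python
import random
from collections import Counter

def random_ballot(votes, rng):
    # Step 2: pick one cast ballot uniformly at random, so each candidate
    # wins with probability exactly equal to their share of the vote.
    return rng.choice(votes)

# A 100-vote constituency: Red 60, Blue 35, Teal 5.
votes = ["Red"] * 60 + ["Blue"] * 35 + ["Teal"] * 5
rng = random.Random(0)
tally = Counter(random_ballot(votes, rng) for _ in range(100_000))
# Over many repetitions the win rates come out near 60% / 35% / 5%.
```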

Let’s look at the consequences of this system.

Functionally what we are doing is saying that each candidate has a chance of winning equal to the fraction of the votes they hold. Say for the sake of simplicity we have three parties and 100 votes in a constituency. The Red party have 60 votes, the Blue party have 35 and the Teal party have 5. You can think of each of these votes as lottery tickets with a guarantee that someone will win – the more you’ve bought, the more likely you are to be that winner. So Red are a bit less than twice as likely to win as Blue and are twelve times as likely to win as Teal. But even poor little Teal there is in with a shot at winning – the fact that they’re a minority party means they’re very unlikely to but it’s not completely infeasible.

So what I’m saying is that a party with a tiny minority can sometimes get into power in a seat. Is that not massively unfair?

Well, no, not really.

The thing about random systems is that if you run them enough times they start to look very predictable – sure you can’t predict what any given one will do, but you can make pretty good  bets as to the number that will do a specific thing.

If we were electing a president, random ballot would be a colossally bad idea. We don’t elect many presidents (in this country none at all, but that’s not important right now), so the randomness factor will play up massively. On the other hand when you’re electing 650 constituencies at once as we are in a general election, you’re running the result so many times that the results start to look downright deterministic.

Supposing you capture 1% of the popular vote. Under this system we expect you to have about 1% of the seats – i.e. about 6 or 7. You might have more, you might have less. But you’re not going to have a lot more or a lot less. It would be surprising if you had more than 10 or fewer than 2 or 3. It would be extremely surprising if you had no seats or more than about 15. It’s pretty much inconceivable that you’d get more than 20 – you’re better off banking on winning the national lottery and using that to fund your next election campaign to improve your chances.
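Those orders of magnitude are easy to check: if your support is spread evenly, your seat count is Binomial(650, 0.01), and the tails fall off very fast (a sketch under that even-spread assumption):

```python
from math import comb

def seat_probs(seats, p):
    # Exact Binomial(seats, p) distribution of the number of seats won,
    # assuming the party's vote share p is the same in every seat.
    return [comb(seats, k) * p**k * (1 - p)**(seats - k)
            for k in range(seats + 1)]

probs = seat_probs(650, 0.01)                         # 1% of the vote
mean_seats = sum(k * q for k, q in enumerate(probs))  # 6.5 seats
p_no_seats = probs[0]                                 # ~0.0015
p_over_15 = sum(probs[16:])                           # ~0.001
p_over_20 = sum(probs[21:])                           # vanishingly small
```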

This averaging effect also works over multiple general elections, though there are fewer of those than seats so its effect is necessarily a bit more random. But assuming its numbers stay roughly the same our hypothetical constituency would be Red about 60% of the time, Blue about 35% of the time and Teal about 5% of the time.

One thing that often concerns people about this system is the possibility of it putting a party into power over all who doesn’t deserve it. Allow me to state categorically: This doesn’t happen. If you’re going to get a majority of seats, you need to get incredibly close to a majority of votes. At around 48-49% of the popular vote you’re in with a chance at a majority. At 50% it’s a coin flip. Much above 50% you’re probably going to get that majority. Compare this with the current system: Literally the only general election under the current system where the party in power has had the majority popular vote happened in 1931. There have been some extremely high 40s (in 1955 and 1959 for example the conservative party had 49.7% and 49.4% of the popular vote), but we have never since that point broken 50%. If anything, election by random ballot is an excellent defence against parties who gain power without real support.
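Under the same even-spread assumption the claim is easy to check numerically: the chance of an outright majority collapses as soon as you drop meaningfully below 50% of the vote (a sketch; real support is not spread evenly, which only blurs this a little):

```python
from math import comb

def majority_chance(p, seats=650):
    # P(a party with vote share p wins strictly more than half the seats),
    # with each seat an independent Bernoulli(p) draw under random ballot.
    need = seats // 2 + 1  # 326 of 650
    return sum(comb(seats, k) * p**k * (1 - p)**(seats - k)
               for k in range(need, seats + 1))

majority_chance(0.50)  # ~0.48: essentially a coin flip
majority_chance(0.48)  # ~0.15: in with a chance
majority_chance(0.55)  # ~0.99: all but certain
```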

And remember: All of these things are true not just for parties, but for every other characteristic you choose to imagine. Are our parties not actually along our desired political lines? No problem! Not only can we vote for independents who better support us, parties become free to run multiple candidates in a given constituency which reflect different aspects of their ideology: If you’re a progressive party you can run two candidates, one who is more economically progressive than the other, to see if you’re getting the social or the economic vote. Yes this splits the vote but that’s OK because it doesn’t reduce the total vote for your party (and may even increase it): If the votes simply split between your two candidates then each of them is less likely to be elected, but the chances that one of them will get elected remain the same.

So that’s why it’s fair and balanced at the country level, but what about our constituencies? Are we sacrificing their good for the common good by making them take candidates that they don’t want?

It turns out not.

You see, although our candidates may not be from the party that the constituency wanted, they have a really strong incentive to make their constituency as happy as possible.

Right now, and indeed this would remain true with any deterministic regional single-candidate method, suppose a party captures 60% of the vote in a constituency. What do they do about the remaining 40%? Well, they ignore it mostly.

If you manage to capture a strong majority under a more traditional system then you can pretty much rest on your laurels. You have a safe seat, it’s going to be hard to kick you out unless you manage to really alienate your constituents.

Under random ballot every vote counts. Those 40% who don’t vote for you? That  represents a 40% chance that you will not be re-elected. If you want to keep your cushy job in parliament you’re surely going to do your best to appeal to every voter in that 40% and convince them you’re a great MP – anything to get your numbers up and boost your re-election chance.

It’s very much the AV “Make your MPs work for their seat” position, only more so. The MPs must care profoundly and deeply about their number of votes, and it’s in their best interests to keep their local populace happy. It’s representative accountability on an unprecedented scale.

So we have a system which is pretty close to ideal at both a local and a national level. What’s not to like?

There are some common objections. I will try to address them.

The implementation you describe is fragile and easy to game

This is true. Actually selecting a random ballot is a bad way to implement this – there’s the possibility of sleight of hand, it’s very hard to verify, and the whole election can come down to a debate about whether a specific ballot is spoiled. Here is how to fix that:

Rather than selecting a random ballot, we count up the votes as we currently do. Calls for recounts are permitted, but must be performed before the next stage. Once a candidate has been elected it is too late to call for a recount.

What we then do is use these counts to simulate drawing a random ballot: We create one virtual ballot for each counted one, lay them out in order and use a computer program to pick a random one.

This program must be open source so that it can be verified and run by third parties. It uses a pseudo-random number generator seeded by a commitment scheme from each of the candidates to be elected and an impartial administrator. These commitments and their corresponding secrets are published at the time of the election.

If you didn’t understand that, don’t worry. The highlights of it are:

  • The results are entirely reproducible. Although they are effectively random they are generated from a process with known inputs that are kept secret up until the time of the election. Given those (published) secrets the full result can be reproduced
  • Any third party with a computer can verify the results are correct
  • It requires a collusion between every single candidate plus the election administrator to fix the results
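A hypothetical sketch of such a commit-then-reveal draw (the hash choice, byte encoding and candidate set are my own illustration; a production scheme would also need to deal with modulo bias and with participants refusing to reveal):

```python
import hashlib
import secrets

def commitment(secret: bytes) -> str:
    # Published before the election: a SHA-256 commitment to a secret.
    return hashlib.sha256(secret).hexdigest()

def draw_winner(counts, revealed, commitments):
    # Anyone can run this: check each revealed secret against its
    # published commitment, derive one shared seed, then use it to pick
    # a virtual ballot from the published counts.
    for s, c in zip(revealed, commitments):
        assert commitment(s) == c, "revealed secret does not match commitment"
    seed = hashlib.sha256(b"".join(revealed)).digest()
    ticket = int.from_bytes(seed, "big") % sum(counts.values())
    for candidate, n in sorted(counts.items()):
        if ticket < n:
            return candidate
        ticket -= n

# One secret per candidate plus one for the administrator.
all_secrets = [secrets.token_bytes(32) for _ in range(4)]
published = [commitment(s) for s in all_secrets]
counts = {"Red": 60, "Blue": 35, "Teal": 5}
winner = draw_winner(counts, all_secrets, published)
# Re-running with the same published values reproduces the same winner.
```

Because every input to the seed is either published in advance (the commitments, the counts) or revealed and checkable afterwards (the secrets), any third party can re-run the draw and confirm the result.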

People just won’t understand it

I’m getting a little tired of this objection, truth be told. It’s the most common objection I get to the idea, and there’s definitely some validity to it.

But the framing makes me uncomfortable. You see every single person I have explained this scheme to in person has got it fairly rapidly, as have most people who have read about it online – there have been objections, but they’ve generally been objections that showed that they understood the core idea but thought it had other crucial problems with it. Granted there’s selection bias going on here – the audience for my blog and the people I’ve described it to are definitely more mathematically inclined than most, but it definitely suggests that it’s really not all that hard to understand.

This makes this objection much more along the lines of “Well I understand it obviously, but those stupid proles won’t, will they?”

I’m pretty uncomfortable with this sort of paternalism.

Yes, understanding the fairness of this system is tricky and involves maths and probability, which people aren’t that good at. Yes, it would require education for people to truly understand it. But participation in the system is incredibly easy, for people who truly care enough to want to understand the details it’s decidedly within their capabilities, and for those who aren’t that interested they will still be much better off participating in a fair system they half understand than an unfair one they fully understand.

Why not just use a sortition?

Excellent question! I’m glad I asked me.

For those who don’t know, a sortition is when you simply draw your MP at random from the eligible population.

Using a sortition instead of my proposed system may in fact be an entirely good idea. One of the nice things about this system is that it transitions seamlessly into a sortition if people decide they want one – Anyone can run, and if everyone runs then what you get is precisely a sortition. Some of the practical details will need changing and scaling, and you’d need to lower or remove the barrier to entry of becoming an MP (there’s a deposit you pay at the moment which you get back if you get enough votes), but fundamentally there’s nothing stopping this behaving exactly like a sortition.

Here are some of the possible advantages of this system over a sortition:

  • It keeps more experienced candidates in the house, much like traditional voting systems.
  • It has much higher accountability of candidates to those whom they represent – under a sortition you have no chance of re-election, so no incentive to endear yourself to your constituency. Under this system you are strongly encouraged to do so.
  • Not every member of a group is well suited to representing that group. Disadvantaged groups and those who experience prejudice pay a much greater cost for being in the public eye, and as a result many of them will be strongly disinclined to speak out for group issues. By allowing them to delegate to someone they feel will represent those issues well and fairly you are in fact giving them more power rather than less.

What if a party with barely any votes gets a seat?

There’s no doubt: This can happen. A party with 1/1000th of the vote has about a 50% chance of getting a seat (this is not unreasonable – they’re “owed” about 0.65 seats). They’ve even got a small but appreciable (about 1 in 2000) chance of getting as many as 5 seats. If the party is some extremist group that you don’t want to get power this can be perceived as bad. How dare parliament represent this group?
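Both numbers check out under the even-spread binomial model (a sketch; 650 seats, support assumed uniform across constituencies):

```python
from math import comb

def at_least(k, p, seats=650):
    # P(a party with vote share p wins at least k seats), treating each
    # seat as an independent Bernoulli(p) draw under random ballot.
    return sum(comb(seats, j) * p**j * (1 - p)**(seats - j)
               for j in range(k, seats + 1))

at_least(1, 0.001)  # ~0.48: roughly the coin flip claimed above
at_least(5, 0.001)  # ~0.0005: roughly the 1-in-2000 chance claimed
```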

First off, the important thing to note is that this balances out. You’re not going to get a parliament which is dominated by many tiny parties that together are not capturing an appreciable proportion of the vote. The bottom 1% of the vote occupies no more than about 20-30 seats in parliament – anything above that is less probable than a meteor strike. It’s an appreciable voting block to be sure, but it’s not a unified one – all the different minority party voices are likely to be at odds with each other rather than a single reinforcing block.

But back to the single party case, what do we do if an unpleasant group like the BNP get themselves a couple seats in parliament?

My inclination is to sit back and watch the show.

British news has a bit of an obsession with our fascist minority parties, far out of proportion to their actual voice. I think the only reason they can get away with this is that there’s no visible signal for how much support they get. It’s much easier if you can go “Yes, but you only have 1 seat in parliament. Clearly no one cares that much about you, do they?”. Also, honestly, I expect them to do a pretty bad job if they gain power and be laughed out at the next election.

It’s certainly possible that this won’t happen, they’ll actually do a “good” job as MPs and this will build their support. That would be… unfortunate. But if that happens we will have a level playing field to fight them on which we can and should do. In the meantime, we shouldn’t let a fear of fascism be used as an excuse to throw away democracy.

This will limit the experience of MPs in parliament – too many people will lose their second election

This turns out not to be the case. Although any individual may have a high chance of losing their second election, overall the strong proportionality feature means that the total number of MPs who have made it through a previous election is a direct sign of how much people like the current government: the fraction of people who vote for their incumbent MP will be approximately the same as the fraction of MPs in parliament who retained their seat from the last election.

This is a nice feature. It essentially means that people get exactly as much change as they want – if people are generally satisfied with their current representation in the house they will generally vote for the incumbent, if they are dissatisfied they will vote for change. And unlike in the current system they will get it.

It is worth noting that this does tend to limit term lengths. If an MP has only 50% of the vote they’ve a decent chance of re-election once, but they’re unlikely to make it two or three times. On the other hand, if they have 75% of the vote they’ve a decent chance of a few more terms than that. This too is a nice feature: It means the MPs with the most experience are the ones who do the best job at keeping their constituents happy.
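The arithmetic behind that is just geometric decay (a quick sketch, assuming a constant vote share from election to election):

```python
def p_reelected(p_vote, times):
    # Chance of winning `times` consecutive further elections, since each
    # election is an independent draw under random ballot.
    return p_vote ** times

def expected_terms(p_vote):
    # Expected total number of terms for a sitting MP: geometric
    # expectation 1 / (1 - p), counting the term currently being served.
    return 1 / (1 - p_vote)

p_reelected(0.50, 1)   # 0.5: a decent chance of one more term
p_reelected(0.50, 3)   # 0.125: unlikely to manage three more
expected_terms(0.75)   # 4.0: a few terms on average at 75% of the vote
```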

It just feels wrong

A friend used a great analogy the other day. When your objections come down to “it just feels wrong” you are no longer making a reasoned argument, you are that guy who is saying “But baby, I don’t want to wear a condom because it just doesn’t feel the same” and you are sacrificing other peoples’ health and wellbeing for your own aesthetic preferences. Don’t do that.

This system is unfamiliar certainly. There is some historic precedent (indeed, Athenian Democracy had a high reliance on lotteries. More recently the Doges of Venice were elected through an outrageously complicated system involving iterating a sequence of votes and lotteries), but it’s been a long time since we’ve done anything like this.

But unfamiliar is not the same as bad, and in the face of such overwhelming advantages, the fact that it’s different from the current extremely broken class of systems should not be taken as a downside. We are used to living in a democracy that simply doesn’t work as one. Is it any surprise that when given the opportunity to live in a more perfect system we find it unfamiliar and strange? Let us have the courage to believe we can achieve perfection, and not let its unfamiliarity deter us.


The winning strategy for random ballot is not what I thought it was

In a previous post I used a normal distribution to argue that if you didn’t have a majority of the vote then your probability of winning a majority of seats under random ballot was maximized by having equal representation in every constituency.

Without the normality assumption, this turns out to be false. What is true is the following:

Theorem: Let \(W_i \sim \mathrm{Bernoulli}(q_i)\) be independent and write \(W = \sum W_i\). Let \(q\) be a vector which maximizes \(P(W > t)\) subject to \(\sum q_i = \mu\). Then if \(q_i \neq 0, 1\) and \(q_j \neq 0, 1\) then \(q_i = q_j\).

Proof:

Fix \(i, j\). Let \(u = \frac{q_i + q_j}{2}\), \(v = \frac{q_i - q_j}{2}\).

Then doing some algebra which I can’t currently be bothered to replicate in LaTeX (it’s fiddly but easy) we get \(P(W > t) = A + B v^2\) for some constants A, B. We are free to vary \(v\) without changing \(\sum q_i\), so we may choose \(v\) freely to maximize this value.
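One way that algebra can go (a reconstruction by conditioning on the other coordinates; the original may take a different route): write \(R = \sum_{k \neq i, j} W_k\), so that

\[ P(W > t) = \sum_r P(R = r) \, P(W_i + W_j > t - r). \]

The pair only enters through the terms \(r = t\) and \(r = t - 1\), since \(P(W_i + W_j > t - r)\) is \(1\) for \(r > t\) and \(0\) for \(r < t - 1\). Substituting \(P(W_i + W_j \geq 1) = q_i + q_j - q_i q_j = 2u - u^2 + v^2\) and \(P(W_i + W_j = 2) = q_i q_j = u^2 - v^2\) gives

\[ P(W > t) = P(R > t) + P(R = t)(2u - u^2 + v^2) + P(R = t - 1)(u^2 - v^2) = A + B v^2, \]

with \(B = P(R = t) - P(R = t - 1)\).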

As a quadratic in \(v\), this expression has an interior maximum only at \(v = 0\) (when \(B < 0\)); otherwise it is maximized at the end-points of the feasible range, which occur when at least one of \(q_i, q_j\) is \(0\) or \(1\).
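The quadratic dependence on \(v\) is easy to sanity-check numerically. Here is a sketch of my own (not the original algebra): `tail_prob` computes the Poisson-binomial tail \(P(W > t)\) exactly by dynamic programming, and the example coordinates are arbitrary. If \(P\) really is \(A + B v^2\), then the increment from \(v = 0\) must scale with \(v^2\).

```python
def tail_prob(q, t):
    """P(sum of independent Bernoulli(q_i) variables > t), computed
    exactly by dynamic programming over the number of successes."""
    dist = [1.0]  # dist[k] = P(exactly k successes among variables seen so far)
    for p in q:
        new = [0.0] * (len(dist) + 1)
        for k, pr in enumerate(dist):
            new[k] += pr * (1 - p)  # this variable fails
            new[k + 1] += pr * p    # this variable succeeds
        dist = new
    return sum(pr for k, pr in enumerate(dist) if k > t)

# Vary coordinates i, j as u + v, u - v with u = 0.4 fixed; the other
# coordinates and the threshold t are arbitrary example values.
def f(v, u=0.4, rest=(0.7, 0.2), t=2):
    return tail_prob([u + v, u - v, *rest], t)

# If f(v) = A + B v^2, then (f(2h) - f(0)) / (f(h) - f(0)) = 4 for any h.
ratio = (f(0.2) - f(0.0)) / (f(0.1) - f(0.0))
print(ratio)  # ≈ 4
```

In this particular example \(B < 0\), so \(v = 0\) is the maximum, which is the interior case of the argument above.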

So for any pair \(i, j\) and any vector \(q\), if \(q_i \neq q_j\) and neither is \(0\) or \(1\), then we can strictly increase the probability \(P(W > t)\) by redistributing the two coordinates so that either \(q_i = q_j\) or one of \(q_i, q_j\) is \(0\) or \(1\).

This now proves our result almost immediately:

Let \(q\) be a vector that maximizes \(P(W > t)\), and suppose there are coordinates \(i, j\) such that neither \(q_i\) nor \(q_j\) is \(0\) or \(1\). If they are not equal, we can find some other vector \(q'\) with a strictly larger probability, contradicting the maximality of \(q\).

QED

The question now becomes: What combination of zeroes and ones maximizes this probability?

In general, I don’t know. However, some simulation suggests that the answer is quite different from what I expected: if \(\mu > t\) then it’s easy, you just set at least \(t\) of the \(q_i\) to \(1\) and you’re done. However, it appears to be the case that if \(\mu < t\) what you want to do is have \(q_i\) non-zero for exactly \(t + 1\) values of \(i\).
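A tiny example of my own (not from the simulations above) illustrates both halves of this. Take three constituencies, total vote mass \(\mu = 1.5\), and require strictly more than \(t = 1\) seat. Spreading evenly over all three loses to concentrating equally on \(t + 1 = 2\) constituencies, and buying one guaranteed seat doesn’t help either:

```python
def tail_prob(q, t):
    """P(sum of independent Bernoulli(q_i) variables > t), by dynamic programming."""
    dist = [1.0]  # dist[k] = P(exactly k successes so far)
    for p in q:
        new = [0.0] * (len(dist) + 1)
        for k, pr in enumerate(dist):
            new[k] += pr * (1 - p)
            new[k + 1] += pr * p
        dist = new
    return sum(pr for k, pr in enumerate(dist) if k > t)

strategies = {
    "spread over all three": [0.5, 0.5, 0.5],    # P(> 1 seat) = 0.5
    "one safe seat":         [1.0, 0.5, 0.0],    # P(> 1 seat) = 0.5
    "two equal, one empty":  [0.75, 0.75, 0.0],  # P(> 1 seat) = 0.5625
}
for name, q in strategies.items():
    print(f"{name}: {tail_prob(q, 1):.4f}")
```

Note that the winner has exactly \(t + 1 = 2\) non-zero coordinates, consistent with the pattern the simulations suggest.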

Here are some tables of brute-force calculated optimal strategies. It’s all done with floating point numbers, so some of these numbers are probably wrong where there’s not much in it, but they should give a decent idea of the picture.
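I don’t have the original brute-force code, but a search along the following lines captures its shape. The function names are mine, and the victory condition (strictly more than half the seats) is my assumption; if the original used a different condition the exact probabilities would shift.

```python
def tail_prob(q, t):
    """P(sum of independent Bernoulli(q_i) variables > t), by dynamic programming."""
    dist = [1.0]  # dist[k] = P(exactly k successes so far)
    for p in q:
        new = [0.0] * (len(dist) + 1)
        for k, pr in enumerate(dist):
            new[k] += pr * (1 - p)
            new[k + 1] += pr * p
        dist = new
    return sum(pr for k, pr in enumerate(dist) if k > t)

def best_strategy(n, share):
    """Search splits of total vote mass n * share into `full` constituencies
    (q_i = 1) plus `partial` constituencies sharing the remainder equally;
    every other constituency gets nothing."""
    t = n // 2          # victory = strictly more than half the seats (assumed)
    mass = n * share
    best = (0.0, 0, 0)  # (probability, full, partial)
    for full in range(min(int(mass), n) + 1):
        rest = mass - full
        for partial in range(1, n - full + 1):
            q_part = rest / partial
            if q_part > 1:
                continue  # can't put more than 100% of a constituency's vote there
            q = [1.0] * full + [q_part] * partial
            pr = tail_prob(q, t)
            if pr > best[0]:
                best = (pr, full, partial)
    return best
```

For example, `best_strategy(3, 0.5)` comes back as `(0.5625, 0, 2)`: two contested constituencies, matching the \(t + 1\) pattern with \(t = 1\). This naive version is fine for \(n = 100\) but wants a faster inner loop for \(n = 650\), since `tail_prob` is quadratic in the number of constituencies.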

100 Constituencies

| Popular vote | Full constituencies | Partial constituencies | Victory probability |
|---|---|---|---|
| 1% | 0 | 49 | 0 |
| 2% | 1 | 98 | 0 |
| 3% | 2 | 97 | 0 |
| 4% | 0 | 50 | 3.33067e-16 |
| 5% | 1 | 50 | 3.33067e-16 |
| 6% | 2 | 50 | 3.33067e-16 |
| 7% | 0 | 50 | 1.44329e-15 |
| 8% | 1 | 50 | 1.44329e-15 |
| 9% | 2 | 50 | 1.44329e-15 |
| 10% | 3 | 50 | 1.44329e-15 |
| 11% | 4 | 50 | 1.44329e-15 |
| 12% | 0 | 50 | 3.10862e-15 |
| 13% | 1 | 50 | 3.10862e-15 |
| 14% | 2 | 50 | 1.9984e-15 |
| 15% | 3 | 50 | 3.10862e-15 |
| 16% | 4 | 50 | 3.10862e-15 |
| 17% | 1 | 49 | 6.66134e-15 |
| 18% | 1 | 49 | 6.9611e-14 |
| 19% | 1 | 49 | 6.14397e-13 |
| 20% | 1 | 49 | 4.68026e-12 |
| 21% | 1 | 49 | 3.11595e-11 |
| 22% | 1 | 49 | 1.83468e-10 |
| 23% | 1 | 49 | 9.65132e-10 |
| 24% | 1 | 49 | 4.57618e-09 |
| 25% | 1 | 49 | 1.9709e-08 |
| 26% | 1 | 49 | 7.76285e-08 |
| 27% | 1 | 49 | 2.81309e-07 |
| 28% | 1 | 49 | 9.42896e-07 |
| 29% | 1 | 49 | 2.93716e-06 |
| 30% | 1 | 49 | 8.53927e-06 |
| 31% | 1 | 49 | 2.32595e-05 |
| 32% | 1 | 49 | 5.95608e-05 |
| 33% | 1 | 49 | 0.000143831 |
| 34% | 1 | 49 | 0.000328471 |
| 35% | 1 | 49 | 0.000711237 |
| 36% | 1 | 49 | 0.0014636 |
| 37% | 1 | 49 | 0.00286849 |
| 38% | 1 | 49 | 0.00536505 |
| 39% | 1 | 49 | 0.00959356 |
| 40% | 1 | 49 | 0.0164292 |
| 41% | 1 | 49 | 0.0269891 |
| 42% | 1 | 49 | 0.0425947 |
| 43% | 1 | 49 | 0.0646781 |
| 44% | 1 | 49 | 0.0946258 |
| 45% | 1 | 49 | 0.133574 |
| 46% | 1 | 49 | 0.182179 |
| 47% | 1 | 49 | 0.240411 |
| 48% | 1 | 49 | 0.307415 |
| 49% | 1 | 49 | 0.381478 |

650 Constituencies

| Popular vote | Full constituencies | Partial constituencies | Victory probability |
|---|---|---|---|
| 2% | 3 | 325 | 3.77476e-15 |
| 3% | 7 | 325 | 4.88498e-15 |
| 4% | 2 | 325 | 7.88258e-15 |
| 5% | 10 | 325 | 6.21725e-15 |
| 6% | 15 | 325 | 7.88258e-15 |
| 7% | 4 | 325 | 1.38778e-14 |
| 8% | 10 | 325 | 1.04361e-14 |
| 9% | 9 | 325 | 8.99281e-15 |
| 10% | 23 | 325 | 1.04361e-14 |
| 11% | 22 | 325 | 8.99281e-15 |
| 12% | 1 | 325 | 1.14353e-14 |
| 13% | 10 | 325 | 1.53211e-14 |
| 14% | 10 | 325 | 2.67564e-14 |
| 15% | 23 | 325 | 1.53211e-14 |
| 16% | 4 | 325 | 1.86517e-14 |
| 17% | 17 | 325 | 2.65343e-14 |
| 18% | 8 | 325 | 1.96509e-14 |
| 19% | 16 | 325 | 1.94289e-14 |
| 20% | 8 | 325 | 2.19824e-14 |
| 21% | 9 | 325 | 2.17604e-14 |
| 22% | 21 | 325 | 2.19824e-14 |
| 23% | 22 | 325 | 2.17604e-14 |
| 24% | 34 | 325 | 2.19824e-14 |
| 25% | 35 | 325 | 2.17604e-14 |
| 26% | 12 | 325 | 2.88658e-14 |
| 27% | 5 | 325 | 2.28706e-14 |
| 28% | 3 | 325 | 2.94209e-14 |
| 29% | 8 | 325 | 2.77556e-14 |
| 30% | 0 | 325 | 3.25295e-14 |
| 31% | 4 | 325 | 3.64153e-14 |
| 32% | 11 | 325 | 3.34177e-14 |
| 33% | 17 | 325 | 3.64153e-14 |
| 34% | 23 | 325 | 4.21885e-14 |
| 35% | 6 | 325 | 4.54081e-14 |
| 36% | 1 | 324 | 1.14242e-13 |
| 37% | 1 | 324 | 5.57576e-12 |
| 38% | 1 | 324 | 1.98633e-10 |
| 39% | 1 | 324 | 5.21813e-09 |
| 40% | 1 | 324 | 1.02063e-07 |
| 41% | 1 | 324 | 1.49951e-06 |
| 42% | 1 | 324 | 1.66858e-05 |
| 43% | 1 | 324 | 0.00014173 |
| 44% | 1 | 324 | 0.000926012 |
| 45% | 1 | 324 | 0.00468993 |
| 46% | 1 | 324 | 0.0185649 |
| 47% | 1 | 324 | 0.0579747 |
| 48% | 1 | 324 | 0.144439 |
| 49% | 1 | 324 | 0.291241 |

(1% value omitted because it gave buggy results, I think due to underflow).

So if we take a probability of 0.01 as the “this is kinda plausible” threshold (i.e. we expect it to happen about once every 100 elections) and \(10^{-7}\) as the “you don’t need to worry about this” threshold, then with 100 constituencies you’re not going to win if you’ve got 29% or less of the vote and you become plausible at 40%. With 650 constituencies, on the other hand, you’ve no chance below 40% and become plausible somewhere between 47% and 48%.

So the strategy I had was wrong, but the basic result that this isn’t something we need to worry about seems to hold up.

This entry was posted in voting.