Category Archives: voting

The Borda Count as a Randomized Limit of Approval Voting

I just noticed this and it seemed worth writing down. It may be a well known result, but 30 seconds of Googling doesn’t turn up anything obvious.

It turns out that under a certain (highly unrealistic) model of voter behaviour, if you use Approval Voting then you elect the Borda Count winner.

In Approval Voting you get to cast a vote for as many candidates as you like, and then the candidate with the most total votes wins.

The question is: How many votes do you cast? Do you cast a vote for only your top candidate (in a close race you probably should)? Do you vote for everyone except the candidate you think is literally Hitler?

I’m actually not sure what the generic tactical answer to this is. For the sake of this post it doesn’t matter, because I’m going to be looking at the following model: what happens if everyone just picks at random? i.e. there are \(N\) candidates, each voter ranks them in order of preference, then picks a number \(c\) uniformly at random between \(1\) and \(N - 1\) and votes for their top \(c\) candidates.

And the answer is that you elect the Borda winner.

The Borda Count works as follows:

Everyone ranks every candidate. Each voter gives each candidate a score: their top candidate gets \(N\), their second gets \(N - 1\), and so on. The candidate with the highest total score wins.

The reason why approval voting results in the Borda winner is quite straightforward: The expected number of votes a voter gives a candidate is just an increasing linear function of the Borda score they would give that candidate.

If you rank a candidate in position \(t\), then you vote for them unless you choose \(c < t\), which happens with probability \(\frac{t - 1}{N - 1}\). So the expected number of votes you give them is \(1 - \frac{t - 1}{N - 1} = \frac{N - t}{N - 1} = \frac{1}{N - 1} B - \frac{1}{N - 1}\), where \(B = N - t + 1\) is the Borda score you give them.

Because the Borda winner is determined just by adding up scores, this affine rescaling doesn’t affect who comes out on top. So assuming that the electorate is large enough, the randomization more or less averages out and you elect the Borda winner.
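To make this concrete, here’s a small sketch (my own illustration, not code from the post) that computes, for a made-up electorate, each candidate’s Borda score and their exact expected number of approvals under the random-threshold model, and checks the affine relationship between the two:

```python
from fractions import Fraction


def borda_score(rankings, candidate, n):
    # With the convention above: top place scores n, second place n - 1, and so on.
    return sum(n - ranking.index(candidate) for ranking in rankings)


def expected_approvals(rankings, candidate, n):
    # A voter who ranks the candidate in position t approves of them unless the
    # uniformly random threshold c (1 <= c <= n - 1) falls below t, which happens
    # with probability (t - 1) / (n - 1).
    return sum(1 - Fraction(ranking.index(candidate), n - 1) for ranking in rankings)


# Three made-up voters ranking candidates 0..3, best first (purely illustrative).
rankings = [
    [0, 1, 2, 3],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
]
n = 4
for cand in range(n):
    b = borda_score(rankings, cand, n)
    e = expected_approvals(rankings, cand, n)
    # e == (b - voters) / (n - 1): the same affine rescaling for every candidate,
    # so ranking by expected approvals is the same as ranking by Borda score.
    print(cand, b, e, e == Fraction(b - len(rankings), n - 1))
```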

Now, as mentioned, this is a highly unlikely model of voter behaviour – it’s probably just about valid if you have a very small number of candidates (3 or 4), but for larger numbers of candidates I would expect the number of votes actually cast to be bunched up around either \(0\) or \(N\) with fairly sharp drop-offs moving away from that. I don’t actually know if that’s how it works in practice, but whatever happens in practice, it’s very unlikely to be uniformly random.

So I present this mostly as a curiosity rather than as making any point about the actual merits of either system, but I thought it was interesting.


Randomizing Lean Coffee

I went to a Lean Coffee event last night. I really enjoyed it.

But naturally I’m constitutionally incapable of participating in a democratic process without thinking about whether it could be improved by randomization, and I think this one can.

Lean Coffee as we practised it had slightly more structure than described in the link:

  1. First we voted as described (though actually we had three dot votes for some reason), and then we discussed the topics in order of decreasing vote count.
  2. A discussion lasted 8 minutes. A vote was then conducted as to whether people wanted to continue – everyone could vote yes/no/maybe, and you continued for another 4 minutes if there were more yes votes than no votes (maybes are ignored). In theory, at the end of those 4 minutes we would vote again, but we never actually voted to continue.

There are two voting mechanisms here, and one is much better than the other: The continuation vote is a good system because you’ve got a simple yes/no question and thus majority rule is a straightforward choice (arguably you can improve it with randomization, but it’s a hard sell).

The voting on topics, on the other hand, didn’t work very well for me. There was a lot of obvious tactical voting caused by both the system and the public nature of the voting. For example, people were very reluctant to cast their last vote for something that didn’t already have votes.

So I’d like to propose the following alteration to the system:

  1. People write cards and just put them into a central pile. Once everyone has finished, the pile is shuffled.
  2. You now go through the pile in the shuffled order. You draw a card, the person who wrote it has a short opportunity to explain what it means (normally this happens before the voting phase), and then everyone votes yes/no/maybe, under the same majority rule as the continuation vote, on whether they would like to discuss it. Ties are broken by treating them as a no.
  3. If the conclusion is yes, you discuss it as above. If it’s no, you move on to the next card.

I think this has a number of advantages over the existing system:

  1. It removes the element of tactical voting (for the most part – you still get an element of timing tactics, where you vote something down because there’s something later you’d rather discuss, but that seems quite reasonable to me).
  2. What you end up discussing has a much stronger mandate because you’ve had an actual vote on the specific question of whether you want to discuss it.
  3. In the normal topic-voting process we got a lot of ties that had to be broken arbitrarily. This intrinsically doesn’t have that problem.
  4. It has the usual debiasing advantages of randomization which help offset the tyranny of the majority.
  5. It’s actually a more streamlined process – you go straight from writing to discussion with no intervening steps.

Gerrymandering take two

So yesterday I wrote about how you can use Z3 to find optimal gerrymanders. I was unimpressed by Z3’s performance, so I decided to rewrite it with a MILP solver instead. It took me a while to figure out a good encoding (which is why I didn’t do this in the first place), but eventually I did, and unsurprisingly the results were just infinitely better. Instead of struggling with 20 districts, the MILP-based approach only starts to take a couple of seconds when you ask it to solve 1000 districts. Based on past experience, I’d place bets that more than 50% of that time was PuLP having a hard time rather than the underlying solver.
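The post doesn’t show the encoding, but a minimal sketch of the kind of formulation that works here (my own guess, not the author’s code) couples an integer “supporters placed” variable for each district with a binary “district won” indicator via a majority constraint. The district sizes and the strict-majority win rule are assumptions for illustration.

```python
from pulp import LpMaximize, LpProblem, LpVariable, lpSum, value


def milp_max_seats(district_sizes, supporters):
    prob = LpProblem("gerrymander", LpMaximize)
    votes = [
        LpVariable(f"votes_{i}", lowBound=0, upBound=size, cat="Integer")
        for i, size in enumerate(district_sizes)
    ]
    won = [LpVariable(f"won_{i}", cat="Binary") for i in range(len(district_sizes))]
    prob += lpSum(won)                    # objective: number of districts won
    prob += lpSum(votes) <= supporters    # can't place more supporters than exist
    for v, w, size in zip(votes, won, district_sizes):
        prob += v >= (size // 2 + 1) * w  # winning a district needs a strict majority
    prob.solve()
    return int(value(prob.objective))


print(milp_max_seats([3, 3, 3], 5))  # 2
```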

This very much reaffirms my prejudice that despite most open source MILP solvers being abandonware with flaky tooling, and despite SMT solvers being the new hotness which can express vastly more problems than MILP solvers can, SMT solvers are bullshit and should be avoided at all costs, whereas MILP solvers are amazing and should be used at every opportunity.

Of course, then I was staring at the shape of the solutions and I realised that I was being ridiculous and that there is an obvious greedy algorithm that gets the correct answer 100% of the time: Starting from the smallest district and working upwards, put just enough people in that district to win it. If you don’t have enough people to win it, put the rest of your people wherever because there are no more districts you can win.
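A minimal sketch of that greedy (my reconstruction, not the posted code; the strict-majority win rule is an assumption):

```python
def greedy_max_seats(district_sizes, supporters):
    """Greedy: claim districts smallest-first, spending the smallest strict
    majority on each; leftover supporters can't win anything else."""
    seats = 0
    for size in sorted(district_sizes):
        needed = size // 2 + 1  # smallest strict majority of the district
        if supporters < needed:
            break  # every remaining district is at least this big, so stop
        supporters -= needed
        seats += 1
    return seats


print(greedy_max_seats([3, 3, 3], 6))  # 3
print(greedy_max_seats([3, 3, 3], 4))  # 2
```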

I’ve posted some updated code with all three implementations, along with some tests that demonstrate they always produce the same answer (using Hypothesis, naturally).


Optimal gerrymandering with Z3

Edit to add: See next post for why this was a silly thing to do.

As some of you might recall from before I spent a year writing about Hypothesis, one of my other interests is voting theory. I may be doing a talk about it at some point soon, so I’ve been reading up on it.

I was thinking about gerrymandering, and wondering what the best (worst) possible degrees of gerrymandering were.

For example, suppose you’ve got a population of 9 split up into 3 districts of 3. If you have 6 people out of 9 who want to vote blue, it’s easy to get all three seats as blue (you put 2 in each district), and it’s easy to get only two (put all 6 of your supporters into two districts), but can you be gerrymandered down to only one? It seems implausible, but how would we prove that?

Well, as you probably guessed from the title, it turns out this is one of those things where rather than proving it by hand we can just ask a computer to do it for us! (It’s actually quite easy to prove this specific case, but I was interested in more general ones).

Here’s a Z3 program to work out the answer:

Running this, the answer is that if you have 6 people then you’re guaranteed at least two seats, but if you have 5 then you can be gerrymandered down to only one seat, with the opposition claiming 2 (your opponents split their 4 across two districts, so your supporters end up 3, 1 and 1 across the three districts).

It takes a list of districts with their populations and a total number of people who support a party, and tries to arrange those people in a way that maximizes the number of seats that party gets.

It only supports two party systems. It wouldn’t be too difficult to expand it to a larger number of parties though. It also doesn’t work very well for large (or even medium really) numbers of districts. It handles 20 districts in about 10 seconds, but the time seems to go up extremely nonlinearly (it handles 10 in about 20 milliseconds) and it doesn’t seem inclined to ever complete with 30 districts. This might be fixable if I understood more about performance tuning Z3.

It may be possible to improve this using the Z3 Optimize support, but when I tried to use Optimize I segfaulted my Python interpreter, so I decided to steer clear (stuff like this is why I’ve yet to seriously consider Z3 as a backend for anything interesting, unfortunately). Instead this just binary searches over the target value, checking satisfiability of the corresponding constraint each time, to find the best one.
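The program itself isn’t reproduced here, but a minimal sketch of this style of model – a satisfiability check for “can this many supporters win at least k seats?”, wrapped in a binary search over k – might look like the following. The function names, district sizes and the strict-majority win rule are my assumptions, not the original code.

```python
from z3 import If, Int, Solver, Sum, sat


def can_win(district_sizes, supporters, target_seats):
    """Can `supporters` voters be spread across the districts so that their
    party takes at least `target_seats` seats (strict majority wins a seat)?"""
    solver = Solver()
    votes = [Int(f"votes_{i}") for i in range(len(district_sizes))]
    for v, size in zip(votes, district_sizes):
        solver.add(v >= 0, v <= size)
    solver.add(Sum(votes) == supporters)  # every supporter lives in some district
    seats = Sum([If(2 * v > size, 1, 0) for v, size in zip(votes, district_sizes)])
    solver.add(seats >= target_seats)
    return solver.check() == sat


def best_seats(district_sizes, supporters):
    """Binary search for the largest achievable number of seats."""
    lo, hi = 0, len(district_sizes)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if can_win(district_sizes, supporters, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo


# With districts of 3, 3, 3: an opposition of 4 can claim 2 seats (so a party of
# 5 can be gerrymandered down to 1), while an opposition of 3 can claim only 1
# (so a party of 6 is guaranteed at least 2), matching the answer above.
print(best_seats([3, 3, 3], 4))  # 2
print(best_seats([3, 3, 3], 3))  # 1
```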


Why I am no longer #YesToAV

As you might recall, there was a referendum four years ago that would have significantly affected the results of the upcoming general election.

Entrenched power very successfully persuaded the British public that electoral reform was dangerous and scary and might cause the wrong person to win, and they correspondingly voted against it.

I was, and am, very angry about this.

What I didn’t notice at the time was that the entire referendum was a massive con, that every option was a bad one, and that AV would quite possibly have been even worse than the status quo.

Viewed as an alternative to FPTP in the abstract, AV is undoubtedly the better system. It’s not a great system, but if we wanted to, say, elect a mayor, then AV is unambiguously preferable to FPTP (indeed, the Mayor of London is elected with a slightly weird and limited form of AV – AV where you can only express a first and second preference).

The problem is that electing a parliamentary body like the House of Commons is not like electing a mayor, because you’re performing the same class of vote many times. This is what gives rise to things like gerrymandering, but it also means that choosing apparently better electoral systems can give you globally worse results.

AV was sold as reducing tactical voting by allowing you to vote for your actual first preference first and not “waste” your vote. The problem is that in reality your first choice ends up being a red herring if it’s not for one of the two major parties, because your candidate will drop out of the race anyway. Sure, you nicely signal your support, but ultimately the only thing about your vote that gets counted is whether you prefer Labour or Conservative.
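A tiny illustration of that counting dynamic (mine, not from the post – the ballots and numbers are entirely made up): under AV/IRV, a minor-party first preference is eliminated and the ballot simply flows on to a major party.

```python
from collections import Counter


def irv_winner(ballots):
    """Repeatedly eliminate the candidate with the fewest first preferences,
    transferring each ballot to its next surviving choice."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(next(c for c in ballot if c in remaining) for ballot in ballots)
        if max(tally.values()) * 2 > len(ballots):  # someone has a majority
            return tally.most_common(1)[0][0]
        remaining.remove(min(tally, key=tally.get))  # drop the weakest candidate


ballots = (
    [["Green", "Labour", "Conservative"]] * 20   # minor-party first preferences
    + [["Labour", "Green", "Conservative"]] * 35
    + [["Conservative", "Labour", "Green"]] * 45
)
# Green is eliminated first, and those 20 ballots end up counting for Labour.
print(irv_winner(ballots))  # Labour
```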

The problem is that although AV raises the number of votes a minority party will get, it also raises the number of votes they need to get by an even greater margin.

Under FPTP, a minority party can and often will get a seat in a divided area, because parties frequently win with a relatively small fraction of the vote, and because voters can “punish” the major party they favour by defecting to a smaller one. Under AV this doesn’t happen – the defector’s vote still ends up counting for a major party – and the result is that rather than benefiting smaller parties as was claimed, AV reinforces the two-party system.

Does this make AV the “worse” system? No. But I think it’s fair to say that it makes implementing AV at the constituency level a really bad idea. Fundamentally this is a problem with constituencies more than it is a problem with AV, but that doesn’t make implementing AV at the constituency level a good idea just because it’s “not its fault” that it would make things strictly worse.

I still think it’s clear that some form of electoral reform is required, ideally some flavour of proportional representation, and if the referendum came up again I would probably still vote yes, simply to be able to signal that electoral reform is needed. But I think it’s important to realise what a massive scam that entire referendum was, and to reflect on the complete lack of honesty and/or competence that it reveals in the people who proposed it to us.
