Author Archives: david

How to legally(?) and efficiently bribe a democracy

I attended Opentech yesterday. One of the highlights of the day was Terence Eden on “Kickbackstarter”. The idea being, in short, to crowdsource bribing your MPs to vote the right way. It’s a distressing and awful idea that nevertheless I think might actually be an excellent solution.

The practical problem, of course, being that bribing MPs is illegal.

So I had a think about how I would solve this problem, and I came up with what I think is a nice solution. I don’t have any idea how legal it is. I have a hard time seeing how it could be illegal, but the law is a strange and complex beast of which I know little, and it’s possible that this might fall afoul of some clauses around attempting to influence the democratic process whilst not being rich or something.

The idea is very simple: We never attempt to bribe anyone. No one is ever given any gifts related to their actions.

Instead, we enter people into raffles based on their actions as a group. The raffle will be run regardless of the result of those actions, and your entry into it is not contingent on your own actions.

So how does this modify your behaviour?

Well, you see, the number of tickets depends on the total actions of the group. The more in line with your preference your MPs act, the more tickets get allocated.

Here’s one example of how it could work:

We decide on a gift (say a gift basket, flowers, book tokens, whatever) of a fixed monetary value. We must have sufficient funds available to send this gift to all MPs, though we’re unlikely to need that in practice.

When a piece of legislation is being voted on, we precommit to give one gift item per MP who votes in accordance with our desires on this bill.

Once the vote has been performed, we do indeed give those gifts. However, whether or not you voted in accordance with our wishes has literally no bearing on whether you get a gift (though maybe we want to restrict it to MPs who actually voted, as a “thanks for doing your job” measure). The gifts are randomly allocated, with every MP equally likely to get one. Obviously, the more MPs who vote as you wanted, the more likely any individual MP is to get a gift, but there is no direct personal incentive to vote this way – you’re just as likely to get a prize if you and someone else swap votes. You are of course more likely to each win a prize if you both vote the “right” way.

So you are given a subtle incentive to vote in accordance with our wishes, but you are not directly being rewarded for doing so.
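
To make the mechanics concrete, here's a minimal sketch in Python. The representation and names are mine, and I'm assuming gifts go to distinct MPs drawn uniformly at random:

```python
import random

def allocate_gifts(all_mps, num_aligned_votes, rng=random):
    """One gift per aligned vote, given to distinct MPs chosen uniformly
    at random from *all* MPs -- how any individual MP voted is ignored."""
    return rng.sample(all_mps, k=num_aligned_votes)

def win_probability(num_mps, num_aligned_votes):
    """The chance that any given MP receives a gift. It depends only on
    the totals, so swapping two MPs' votes changes nobody's odds."""
    return num_aligned_votes / num_mps
```

So with 650 MPs and 325 aligned votes, every MP has a 50% chance of a gift however they voted, and each extra aligned vote raises everyone's odds by the same 1/650.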

I think the fact that MPs will get gifts regardless of whether they’re on your side also has the nice advantage of raising awareness of your cause among all of them.

It does maybe have the downside that people will be disinclined to vote in a way that might benefit some other MP who they don’t like so much. It’s not a rational decision, but people are often irrational about probability.

Would it work in practice? Dunno. It would be an interesting thing to try. Is it legal? I have no idea. Check with a lawyer before trying it. Is it ethical? I don’t know. I really don’t. It feels skeevy as hell, but I think the ends might justify the means here.

This entry was posted in Uncategorized.

Some simple theorems about dominance of probability distributions

This is a post about some very simple results about a particular partial order on probability distributions. It came about because I was talking with Paul Crowley about whether I believed in there being a valid total ordering on probability distributions which says that one is strictly better than the other. My answer was that I did not. After some additional reflection I realised that there was a partial ordering I considered valid. After some additional additional reflection I realised it had a quite nice structure. This is a write up of the things I noticed about it. It’s all quite easy to prove, so the results are probably more interesting than the proofs themselves.

Let \(X, Y\) be natural number valued random variables.

Say \(X \preceq Y\) if \(P(X \geq n) \leq P(Y \geq n)\) for all \(n\), and \(X \prec Y\) if \(X \preceq Y\) and \(X \neq Y\). Read this as “Y dominates X” or “X is dominated by Y”. The idea is simply that we always expect \(Y\) to produce better results than \(X\), if we read larger as better.

\(\preceq\) is a partial order because it is the intersection over \(n\) of the preorders defined by \(P(X \geq n) \leq P(Y \geq n)\); it is antisymmetric because if all the tail probabilities agree then the distributions are identical.
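
For distributions with finite support the dominance check is directly computable. Here's a small sketch (the list-of-probabilities representation and the function names are mine):

```python
def tail(pmf, n):
    """P(X >= n) for a pmf given as a list with pmf[k] = P(X = k)."""
    return sum(pmf[n:])

def dominated_by(x_pmf, y_pmf, tol=1e-12):
    """True iff X is dominated by Y: P(X >= n) <= P(Y >= n) for all n."""
    length = max(len(x_pmf), len(y_pmf))
    x = x_pmf + [0.0] * (length - len(x_pmf))
    y = y_pmf + [0.0] * (length - len(y_pmf))
    return all(tail(x, n) <= tail(y, n) + tol for n in range(length))
```

Note that this really is only a partial order: a fair coin on \(\{0, 2\}\) and a point mass at 1 are incomparable, since each has the strictly larger tail somewhere.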

For \(p \in [0,1]\), let \(L(p, A, B)\) be the random variable which consists of drawing from \(A\) with probability \(p\) and from \(B\) with probability \(1 - p\).

Theorem: If \(A, B \preceq C\) then \(L(p, A, B) \preceq C\). Similarly, if \(C \preceq A, B\) then \(C \preceq L(p, A, B)\).

Proof: \(P(L(p,A,B) \geq n) = p P(A \geq n) + (1 - p) P(B \geq n) \leq p P(C \geq n) + (1 - p) P(C \geq n) = P(C \geq n)\) as desired. Similarly the other way.
QED
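
Numerically, the proof is just this linearity of tails: the mixture's tail at each \(n\) is the corresponding convex combination of the two tails. A sketch (again finite-support pmfs, my own names):

```python
def tails(pmf):
    """List of P(X >= n) for n = 0 .. len(pmf) - 1."""
    total, out = 0.0, []
    for q in reversed(pmf):
        total += q
        out.append(total)
    return list(reversed(out))

def mixture(p, a_pmf, b_pmf):
    """pmf of L(p, A, B): draw from A with probability p, else from B."""
    length = max(len(a_pmf), len(b_pmf))
    a = a_pmf + [0.0] * (length - len(a_pmf))
    b = b_pmf + [0.0] * (length - len(b_pmf))
    return [p * x + (1 - p) * y for x, y in zip(a, b)]
```
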

Lemma: \(E(X) = \sum\limits_{n=1}^\infty P(X \geq n)\)
Proof: Let \(p_n = P(X = n)\).

\[\sum\limits_{n = 1}^\infty P(X \geq n) = \sum\limits_{n=1}^\infty \sum\limits_{m=n}^\infty p_m\]

A term \(p_k\) appears \(k\) times on the right hand side: it is present in the inner sum precisely when \(n \leq k\). So we can rearrange the sum (the terms are all non-negative, so this is valid without worrying about convergence issues) to get

\[\sum\limits_{n = 1}^\infty P(X \geq n) = \sum\limits_{n=1}^\infty n p_n = \sum\limits_{n=0}^\infty n p_n = E(X)\]
QED
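
This identity is easy to check numerically for finite-support distributions (a sketch with my own names):

```python
def expectation(pmf):
    """E(X) for a pmf given as a list with pmf[k] = P(X = k)."""
    return sum(k * p for k, p in enumerate(pmf))

def tail_sum(pmf):
    """Sum over n >= 1 of P(X >= n)."""
    return sum(sum(pmf[n:]) for n in range(1, len(pmf)))
```
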

Theorem: If \(X \prec Y\) then \(E(X) < E(Y)\).

Proof: Suppose \(X \prec Y\). Then \(P(X \geq n) < P(Y \geq n)\) for some \(n\): it is always \(\leq\), and if it were always equal then the distributions would be the same, which they are not by hypothesis. Therefore \(\sum P(X \geq n) < \sum P(Y \geq n)\), because all the other terms are \(\leq\). By the lemma this inequality is exactly \(E(X) < E(Y)\), as desired.
QED

Lemma: Let \(x_n\) be a monotonic decreasing real-valued sequence converging to 0 such that \(x_0 = 1\). Then there is a random variable \(X\) with \(P(X \geq n) = x_n\).

Proof: Let \(p_n = x_n - x_{n+1}\). Because \(x_n\) is monotonic decreasing we have \(p_n \geq 0\). Also, \(p_n \leq x_n \leq x_0 = 1\). Further, \(\sum\limits_{k=0}^n p_k = x_0 - x_1 + x_1 - x_2 + \ldots + x_n - x_{n+1} = 1 - x_{n+1}\), so \(\sum\limits_{k=0}^\infty p_k = \lim\limits_{n \to \infty} (1 - x_{n+1}) = 1\). So the \(p_n\) form a probability distribution. Let \(X\) be a random variable such that \(P(X = n) = p_n\). Then \(P(X \geq n) = \sum\limits_{k = n}^\infty p_k = \sum\limits_{k = n}^\infty (x_k - x_{k+1}) = x_n - x_{n+1} + x_{n+1} - x_{n+2} + \ldots = x_n\).
QED

Theorem: The set of random variables with \(\preceq\) forms a lattice. That is, for random variables \(X, Y\) there is a random variable \(X \wedge Y\) such that \(Z \preceq X, Y\) iff \(Z \preceq X \wedge Y\). Similarly there is \(X \vee Y\) such that \(X, Y \preceq Z\) iff \(X \vee Y \preceq Z\).

Proof: To get \(X \wedge Y\), take the sequence \(n \to \mathrm{min}(P(X \geq n), P(Y \geq n))\). This satisfies the conditions of the lemma, so we can find a random variable \(Z\) such that this is \(P(Z \geq n)\). By the definition of min this satisfies the desired properties. Similarly with max to get \(X \vee Y\).
QED
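
For finite-support distributions both constructions are mechanical: turn a tail sequence back into a pmf by differencing, and build \(X \wedge Y\) from the pointwise minimum of tails. A sketch (representation and names mine):

```python
def pmf_from_tails(xs):
    """Given a monotone decreasing sequence xs with xs[0] = 1 (and an
    implicit tail of 0 after the last entry), return p_n = x_n - x_{n+1}."""
    xs = list(xs) + [0.0]
    return [xs[n] - xs[n + 1] for n in range(len(xs) - 1)]

def meet(a_pmf, b_pmf):
    """X ^ Y: the greatest lower bound, built from pointwise-min tails."""
    length = max(len(a_pmf), len(b_pmf))
    a = a_pmf + [0.0] * (length - len(a_pmf))
    b = b_pmf + [0.0] * (length - len(b_pmf))
    ta = [sum(a[k:]) for k in range(length)]
    tb = [sum(b[k:]) for k in range(length)]
    return pmf_from_tails([min(x, y) for x, y in zip(ta, tb)])
```

The join is the same construction with max in place of min.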

This entry was posted in Numbers are hard.

More on infinitary decision strategies

I’ve mostly lost interest in further studying the infinitary version of the problem. The simple conclusion seems to be that the structure of the uncountable case is significantly more complicated than the structure of the set of measures on a sigma-algebra, so there’s probably not much more classification one can do. Here are some brief notes on examples:

Lemma: Every set admits a subset consistent strategy
Proof: Every set can be well-ordered. Let \(s(U)(V) = 1\) if \(\mathrm{min}(U) \in V\), else \(0\).

Theorem: Every measure \(m\) can be extended to a subset consistent strategy. That is, there is some subset consistent strategy with \(s(S) = m\).
Proof:

Let \(s\) be a subset consistent strategy on \(S\) (one exists by the lemma). Define \(s'(U)(V) = \frac{m(U \cap V)}{m(U)}\) if \(m(U) > 0\), else \(s'(U)(V) = s(U)(V)\).

That this works is a simple matter of case checking.
Let \(U \subseteq V \subseteq W\). We want to show \(s'(W)(U) = s'(W)(V) s'(V)(U)\).

If \(m(W) = 0\) then \(s' = s\) on all the sets in question, so this follows from \(s\) being subset consistent. Else if \(m(V) = 0\) then \(s'(W)(V) = s'(W)(U) = 0\), so both sides are 0. Else \(s'(W)(U) = \frac{m(U)}{m(W)} = \frac{m(V)}{m(W)} \frac{m(U)}{m(V)} = s'(W)(V) s'(V)(U)\) as desired.

QED

In fact every subset consistent strategy may be decomposed in nearly this way. The only caveat is that the subset consistent strategy \(s'\) ends up being on the set \(S' = \{x : m(\{x\}) \neq 0\}\).

As proof, just take \(m = s(S)\) and \(s' = s|_{S'}\).

However, somewhat self-evidently, this operation isn’t idempotent: because in the construction we could have taken \(s'\) to be just about anything we wanted, we can do this again and again.

Here’s an interesting example: Let \(A\) be a well ordered set. Let \(S = \bigcup I_a\) where \(I_a\) is a copy of the unit interval \([0,1]\). Additionally, fix some well-ordering of \(S\). Give this the \(\sigma\)-algebra generated by the measurable sets of each \(I_a\).

Lemma: Every measurable set in this \(\sigma\)-algebra intersects either countably or co-countably many of the \(I_a\).
Proof: The collection of sets with this property contains the generators and is closed under countable unions and complements.

Define \(s(U)(V) = \frac{m(U \cap I_a \cap V)}{m(U \cap I_a)}\), where \(m\) is Lebesgue measure on the intervals and \(a\) is minimal such that \(m(I_a \cap U) > 0\). If there is no such \(a\) then instead use the well-ordering’s strategy.

Claim: If \(A = \omega_2\) then given this \(s\) there is no well-ordered sequence of decompositions as above that gives \(s\).

Proof: Well, I haven’t really proved it to be honest. It has plausibility though. Here’s my handwavey reasoning:

Decomposing this strategy just knocks off the first interval \(I_a\) at each step, so you can’t decompose it in fewer than \(\omega_2\) steps. But there are \(\omega_1\) of the \(I_a\) which the \(\omega_1\)th step does not intersect, which is not co-countable.

QED

I’ve done some additional work on the partial orders on sets that such strategies define, but it doesn’t seem to go anywhere very fruitful. I’m going to consider this problem closed for now.

This entry was posted in Numbers are hard.

Using two level discourse as a tool of thought

This is a game I thought of to explore ideas. I haven’t tried it yet, so it might be horribly cumbersome in practice.

You will need:

  • A question which you wish to explore competing answers to
  • A partner to play with. I think this works better online than in person, but you can probably do it in person too
  • Some sort of versioned collaborative document system. As a programmer, I’d probably just use git, but google docs or something like etherpad should work too
  • Some means of communicating with your partner. If you’re face to face, that’s easy. If not, use an instant messaging client

You and your partner should take two sides of the question. They don’t have to be sides you agree with. Indeed, if you have a difference of opinion on the subject then you might want to consider taking the side that is the opposite of your opinion.

Your goal is not to debate this topic. This is important. You are not the ones debating.

Your goal is to create a written document in which two fictional characters debate this topic.

In order to achieve this goal, you will play a game. Decide which one of you goes first. They then play their opening move, which is to state their character’s name and belief. e.g. “My name is Dr Cornelius Frugalmuffin. I believe that people have a moral obligation to wear hats”. “My name is Joe Colinsworth. I think people should be free to leave their heads bare if they want to”.

Play now proceeds as follows. You take it in turns. On a player’s turn they may do the following:

  1. They may submit a draft response
  2. They may edit their current draft response
  3. They may commit to their current draft response, at which point play passes to the other player
  4. They may request to wind back time. They pick a previous response from the other player and request to reset to that point. The other player may accept or refuse. If they accept, all responses past that point are deleted, and play proceeds with the other player with that past response as their current draft (which they may just immediately commit to if they want)
  5. They may request to end the session. If the other player agrees, they make their closing statement. The other player then also has the opportunity to make their closing statement. The game now ends
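
The move structure above is essentially a tiny version-controlled session. As a rough sketch (all the names here are mine, not part of the game's description), the state can be modelled like so:

```python
class DebateSession:
    """Minimal model of the turn structure: drafts, commits, and wind-backs."""

    def __init__(self, first_player):
        self.turn = first_player
        self.committed = []          # list of (player, text) responses
        self.draft = None

    def submit_draft(self, text):
        self.draft = text            # moves 1 and 2: new draft / edit draft

    def commit(self, other_player):
        self.committed.append((self.turn, self.draft))
        self.draft = None
        self.turn = other_player     # move 3: play passes

    def wind_back(self, index, accepted):
        """Move 4: if the other player accepts, delete everything past the
        chosen response; its author resumes with it as their current draft."""
        if accepted:
            player, text = self.committed[index]
            self.committed = self.committed[:index]
            self.turn, self.draft = player, text
```
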

During this play, the two players should be talking on the back channel a lot. In particular the period between initial draft response and submission should be spent discussing the draft, and providing constructive criticism on your partner’s character’s argument.

Care should be taken to make sure that the back channel discussion is kept collaborative and the debate does not move there. This is very much a co-op game. The goal is not to “win” the debate, and a request to end the session is not a concession. The goal is to produce a debate that is as informative as possible and explores the ideas as thoroughly as possible. The back channel is a place for saying things like “I don’t understand this particular point” or “I think there might be a flaw in this argument, but it could be fixed in this way”, not “You’re wrong because”.

The request to wind back time is to give the impression that the characters in your dialogue have thought through the issues much more thoroughly than you have: Anything you learn over the course of the discussion can be sent backwards in time to your characters so that they had already thought of it. This ensures they can give the strongest argument possible for their case.

The back channel can also simply be used to ask your partner for advice. “What’s the argument your character is hoping I’m not going to notice here?” is a legitimate question. As is “You actually know more about the position I’m arguing for than I do. What should I read before making my response?”. Remember: You are not your characters. The goal is not to win. The goal is to produce an interesting and informative fictional dialogue.

This entry was posted in Uncategorized.

The moral argument for rationality

Before you read this post, I want you to do something for me.

Find yourself a hammer. If there’s not one readily convenient, it doesn’t matter too much. Anything readily wieldable and fairly heavy will do. I have a butternut squash near me. You could use one of those. If you really don’t have anything to hand and can’t be bothered to go find one, you can just use your fist I suppose.

Now, place your left (right if you’re a lefty) hand on the table in front of you, take the hammer-like object in your other hand and bring it down really hard on the hand you’ve placed on the table.

Done? OK. Read on.

You of course didn’t do this. If you did, I’m really sorry. You should probably take that as a lesson not to trust advice without thinking about it for yourself, or maybe just a lesson that I’m a bit of a dick, but I owe you a cookie. Or a hug or something. My bad.

The rest of you, though, why didn’t you do it?

Well, because it would have hurt.

You don’t need some complicated theory of inferential reasoning to tell you that hitting your hand with a hammer hurts. You’ve got a straightforward feedback process in which your body goes “OW” when heavy things impact you at high velocity. This isn’t news.

The thing that makes making predictions about the world difficult is the quality and strength of the evidence you can gather.

We conduct complicated double blind trials around medicine because it’s really hard to gather accurate and informative evidence around drug effectiveness – effects are statistical, subtle, and prone to confounders like the placebo effect.

We do not conduct complicated double blind trials around whether it’s better to jump out of an airplane with or without a parachute, because it’s really not hard to gather accurate and informative evidence about this. People who fall from high places without a parachute tend to die. People who fall from high places with one tend not to.

Issues which affect you personally have a direct evidence gathering built into them. You generally know how you feel, and you experience the results of things directly. You also have a lot of data points, because you’re on the job being you 24/7 with no holiday time.

Issues which affect other people however are much murkier. Unless you possess secret telepathic powers, you don’t have a direct hotline into their brain and you don’t know how they’re feeling. They might tell you, but by and large people are pretty well conditioned to not do that because it makes them vulnerable and because right after you’ve hurt someone is not the time when they’re feeling most inclined to trust you. You see less of any given one of them than you do of yourself, and they’re all different and confusing.

So determining activities that harm or help yourself is relatively easy, and determining activities that harm or help other people is relatively hard. In order to do the former, some simple common sense reasoning and learning from experience is more than sufficient. In order to do the latter, you need to do a reasonable amount of careful study and control for a lot of confounders.

I’m going to let you in on a spoiler: You hurt other people. This is most likely not because you’re a bad person, it’s just a thing that happens. Sometimes you do it purely by accident. Sometimes you do it because we live in an unjust society and we all implicitly support it in one way or another. Sometimes you even do it with the best of intentions.

Most of this you don’t notice because you don’t have that direct feedback – you can’t feel what it’s like to be that other person, so you don’t get a direct experience of the consequences of your actions on them. Some of it you deliberately don’t notice because you don’t want to.

Hurting yourself though? You pretty much know when you’re doing that. It’s not that we don’t do it, and it’s not that we never lie to ourselves about the fact that we are doing it, but most of the time when we do things that hurt ourselves we do so not out of ignorance but because we’ve made a conscious decision to do something painful. This isn’t always a good idea, but it is at least one made in relatively full possession of the facts.

By and large, we don’t like pain, and we’re reasonably good at avoiding it. As a result, I think it’s fair to say that the majority of us hurt other people at least as much as we hurt ourselves.

I think it’s also fair to say we don’t generally want to be hurting other people (setting aside people who enjoy being hurt and explicitly consent to it as a special case). If that’s not the case for you then… well. I’m not really sure what to suggest.

How do we stop hurting people? Or, at least, reduce how much we are hurting people.

The first step is to understand when you’re doing it. The second step is to be able to predict whether a set of actions will do it.

That is to say, these are the skills of gathering evidence about the world and making predictions on the basis of that.

These skills are often lumped under the heading of “rationality”, or “empiricism”.

They are useful for bettering your life, but as previously mentioned you’re already awash in a sea of evidence about what causes harm or good in your life. It’s not that these skills aren’t useful here, but you’re certainly a lot closer to the point of diminishing returns than you are in cases of scarcer evidence and more complicated situations. i.e. other people.

This gives what I regard as the moral argument in favour of rationality:

It is easy to go through life being unable to accurately predict the consequences of your actions because you’ve got a rough and ready set of heuristics that mostly keep you out of harm’s way. To some degree you’re even actively encouraged to do so – ignorance really can be bliss, and understanding the world around you and being able to predict the effect of your actions will not necessarily make your life better. It may even make your life worse (more on that in another post). The prime reason to learn this skill is not to make you understand the consequences of your actions for yourself, but to understand the consequences of your actions for other people. It is a necessary skill if you want to understand how you affect the world and how to make it a better place for other people to live in.

Importantly, it also gives what I regard as the moral caveat to rationality:

You are causing at least as much harm to other people as you are to yourself.

You are developing skills which are more useful at understanding and predicting that harm for other people than they are for yourself.

Therefore, one of the consequences of improved rationality should be that you should be learning more about how your actions affect and hurt other people and how to make their lives better than you are about how to improve your own life.

If you find this is not the case, then what you are doing is not rationality, it is merely masturbation. It may feel good, and as long as it doesn’t become an unhealthy obsession there’s certainly nothing wrong with it, but it’s not exactly very productive is it?

This entry was posted in life, rambling nonsense.