# A sketch-design for a type of non-profit organisation

I found myself asking the following question: Supposing a group of people wanted to get together to spend money towards furthering a particular goal, how might they go about organising themselves? After mulling this over I hit upon a design that I rather like. I suspect in practice it may have some problems, but it might be a nice starting point.

The specific case I wanted to solve is that people should be able to say “I am willing to spend up to this amount of money to further the goal” without actually having to spend that amount until specific projects come along. The group should then be able to approve or reject specific projects and fund them from what people have committed. An example I had in mind was funding the creation and development of particular types of open source projects.

Here is the basic system I came up with:

There is a reserve pool of money held by a trusted member of the organisation. This is expected to be small compared to the total amount of money that has been committed to the cause. All funding managed by the organisation goes via this reserve: if the organisation decides to fund a cause, members first pay into the reserve until it has enough money, and the reserve then pays out to the project in question.

### Joining the organisation

In order to join the organisation you must pay a small fixed up-front contribution to the reserve (I’d imagine this to be on the order of £5 – purely nominal) and declare a commitment: an amount of money you are prepared to pay towards the organisation’s aims.

### Growing the reserve

When more money needs to be spent than is available in the reserve, we must grow it. This is performed as follows:

1. We fix a time period in which this must be achieved. A week would be normal.
2. The amount that needs to be added to the reserve is calculated as a fraction of the total amount of money committed.
3. Each member of the organisation must pay that fraction of their commitment, rounded up to the nearest whole currency unit (so if the amount to grow the reserve by is 1% of the total commitment, someone committing £100 must pay £1, while someone committing £110 must pay £2, since £1.10 rounds up). Until they have paid this it counts as their debt to the reserve. Once they have paid, it is subtracted from their commitment (so if you committed £100 and pay £1 you now have £99 left committed).
4. If the time period elapses and we have not grown the reserve sufficiently due to non-payment from some members, we repeat the process for the remainder.
5. If the amount of money committed by people who have not failed to pay is now too small to grow the reserve sufficiently, the funding attempt fails.
6. Any non-payment from members remains as a debt which they should aim to pay into the reserve as soon as possible.
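The growth calculation in step 3 can be sketched as follows. `grow_reserve` is a hypothetical helper of my own naming; using `Fraction` for the growth fraction avoids floating-point surprises when rounding up.

```python
from fractions import Fraction
from math import ceil

def grow_reserve(commitments, fraction):
    """Given each member's remaining commitment and the fraction of the
    total commitment needed, return what each member must pay, rounded
    up to the nearest whole currency unit."""
    return {member: ceil(committed * fraction)
            for member, committed in commitments.items()}

# At 1% growth: £100 -> £1, while £110 -> £1.10, rounded up to £2.
payments = grow_reserve({"alice": 100, "bob": 110}, Fraction(1, 100))
```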

### Requests for funding

Anyone, whether a member or not, can request money from the organisation. A request consists of a specific proposal and the amount of money required for it. The organisation then votes on whether or not to accept the proposal.

1. Every member has a voting share. To calculate it, take the amount of money they have paid into the reserve and subtract the amount of money they have failed to pay into the reserve when required to do so. If the result is negative their voting share is zero; otherwise their voting share is the square root of the result. (Square root voting is a thing. It basically means people more invested in the system rightfully get more of a say in it, but prevents a small clique crowding out a large number of minority stakeholders.) Note that money paid into the reserve includes the joining fee, so all members start with a vote.
2. Every member may vote yes or no. An abstention is counted as a vote of no.
3. A proposal is approved if the yes votes total at least two thirds of the total voting share.
4. If the vote is successful, the reserve is grown to match the amount required and then the money is paid out. For the purposes of growing the reserve, your commitment is the larger of what you had declared when the proposal was made and your current commitment, to stop people from backing out because they don’t like a specific idea.
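A minimal sketch of the voting arithmetic above, assuming each member is represented as a `(paid_in, unpaid_debt, voted_yes)` tuple; the helper names are mine, not from any real implementation:

```python
from math import sqrt

def voting_share(paid_in, unpaid_debt):
    """Square-root voting: the share is the square root of the net
    contribution, floored at zero if the member is in net debt."""
    net = paid_in - unpaid_debt
    return sqrt(net) if net > 0 else 0.0

def approved(members):
    """members: iterable of (paid_in, unpaid_debt, voted_yes) tuples.
    A proposal passes if yes votes reach two thirds of the total
    voting share; abstentions count as no votes."""
    total = sum(voting_share(p, d) for p, d, _ in members)
    yes = sum(voting_share(p, d) for p, d, v in members if v)
    return yes >= total * 2 / 3

# Members who paid in £25, £9 and £4 have shares 5, 3 and 2.
# If the first two vote yes, yes carries 8 of 10 shares and passes.
```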

#### Minority payment exception

It may be the case that a vote failed, but there is enough money committed amongst those who voted yes that they could have paid for it themselves. In this case, they may fund it using only their own money. To do this:

1. Put aside a reserve which is a fraction of the main reserve equal to the fraction of the voting share that voted yes.
2. Grow this reserve from the yes-voters to meet the requirements, and pay out of that.
3. Any remaining money in it goes back in the original reserve.
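The three steps above can be sketched as follows. `pay_in` is a hypothetical callback that collects a shortfall from the yes-voters (in practice this would be the reserve-growing process restricted to them):

```python
def minority_fund(reserve, yes_share_fraction, amount, pay_in):
    """Fund a proposal from the yes-voters alone.

    reserve: the current main reserve.
    yes_share_fraction: fraction of the voting share that voted yes.
    amount: the cost of the proposal.
    pay_in: collects a shortfall from the yes-voters.
    Returns the new main reserve."""
    side = reserve * yes_share_fraction   # step 1: set aside a side reserve
    main = reserve - side
    if side < amount:
        side += pay_in(amount - side)     # step 2: grow it from yes-voters...
    side -= amount                        # ...and pay out of it
    return main + side                    # step 3: remainder rejoins the main reserve
```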

Why not use the minority payment exception everywhere? It’s a good question. I think it increases the variability of what the organisation can achieve quite a lot, and I’m not sure this is a good thing. It feels like having the minority payment option makes the organisation work less effectively as a whole. More importantly, it means that the default behaviour if you’re not really paying attention is to opt out of paying any money, which seems unfortunate. It might be a good idea anyway.

### Implementation

This requires a moderate amount of book-keeping, but it’s not too much to do by hand/with a spreadsheet and email. Software could help, but it’s probably not necessary at small scales.

Obviously I’ve no experience in running non-profits, so it may be that I’m ignoring human factors / vastly overestimating how easy it is to get people to actually pay money they’ve precommitted to paying.

I think it might be an interesting approach though, and I have some things I’d quite like to try it for. I don’t know if I can actually overcome the annoyance of moving money around to do that, but we’ll see.

# Binary tournaments

One of the problems with the tournament model I’ve been discussing is that it’s not a very realistic model of how tournaments are normally played.

A more normal tournament design is to split the set of players into two groups, let the two groups duke it out recursively and then play the winners.

You actually can model this in the tournament setup I’ve described (you draw the hierarchy then pick a pair with a lowest common ancestor), but it’s not very natural. This seems like it needs special treatment.

It also helps that there are far fewer such tournaments. You can canonically represent every such tournament as a list of players that you recursively divide down the middle, so there are at most $$O(n!)$$ such tournaments. Given the bounds on the set of all tournaments this is actually a huge improvement.
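To make the canonical list representation concrete, here is a sketch (not code from my library, just an illustration) of computing the winner distribution of such a bracket from an ordered list of players, assuming a victory matrix `A` where `A[i][j]` is the probability that `i` beats `j`:

```python
def bracket_win_probs(order, A):
    """Winner distribution of a binary tournament: split the ordered
    list of players down the middle, recurse on each half, then play
    the two sub-winners against each other.
    Returns a dict mapping player -> probability of winning the bracket."""
    if len(order) == 1:
        return {order[0]: 1.0}
    mid = len(order) // 2
    left = bracket_win_probs(order[:mid], A)
    right = bracket_win_probs(order[mid:], A)
    result = {}
    # i wins the bracket if i wins its half, some j wins the other
    # half, and i then beats j in the final.
    for i, p in left.items():
        result[i] = sum(p * q * A[i][j] for j, q in right.items())
    for j, q in right.items():
        result[j] = sum(q * p * A[j][i] for i, p in left.items())
    return result
```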

I’ve pushed some initial code for such tournaments. I don’t currently know what the best way to optimise them is though. For small sets of candidates we can simply brute force try all the tournaments, but there should be a more efficient way.

One notable difference is that you can’t rig the tournaments quite so thoroughly: e.g. the strategy for boosting a candidate previously was to make everyone else fight before they reached this candidate, whereas now every candidate fights $$O(\log n)$$ other candidates to be the winner. In many cases this isn’t a massive problem because you can just pair the candidate against much weaker candidates, but it still helps balance things out a bit.

# The geometry and optimisation of tournaments

So in the type of game we’re playing (part 1, part 2), we play a binary tournament.

Formally, we have a set $$\mathcal{C} = \{1, \ldots, n\}$$. A tournament on these candidates is a function $$f$$ on the subsets of $$\mathcal{C}$$ of size $$> 2$$ such that $$f(A)$$ is a probability distribution over two-element subsets of $$A$$. Let $$\mathcal{T}(X)$$ be the set of tournaments on $$X$$ (the set of tournaments on a set of fewer than two elements is defined but empty).

Given tournaments $$t_1, \ldots, t_n$$ we can form a convex combination of them by playing tournament $$i$$ with probability $$p_i$$ at each step. Some inspection shows that every tournament is a convex combination of (potentially rather a lot of) deterministic tournaments – that is, tournaments such that at each step they produce a single pair with probability 1.

How many such deterministic tournaments are there?

Well, there are $$n \choose k$$ subsets of size $$k$$. Each of these then has $$k \choose 2$$ possibilities to map to. So there are

$$\tau(n) = \prod\limits_{k=3}^n {k \choose 2}^{n \choose k}$$

This is O(pretty hardcore). $$\tau(3) = 3$$, which is fine, but $$\tau(4) = 486$$ and $$\tau(5) = 4591650240$$. I’ve not done precise bounds calculations, but it’s at least $$O(2^{2^n})$$.
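As a quick sanity check on these numbers, the formula above is easy to evaluate directly (Python 3.8+ for `math.comb` and `math.prod`):

```python
from math import comb, prod

def tau(n):
    """Number of deterministic tournaments on n candidates: the product
    over subset sizes k = 3..n of C(k, 2) ** C(n, k)."""
    return prod(comb(k, 2) ** comb(n, k) for k in range(3, n + 1))

# tau(3) == 3, tau(4) == 486, tau(5) == 4591650240
```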

Anyway, the point is that this space is bloody large. Although convex and linear minimization are things we can do relatively easily, this space is so infeasibly large that they stop being an option. You probably can run the simplex algorithm on a 4591650240-dimensional space, but it’s going to take a long time.

So why did I think I could do optimisations on it?

Well for some functions, you can. Unfortunately this turns out to be a subset of the functions I used the relevant algorithm for. Basically the question is whether you can optimise a tournament based on optimising its subtournaments. If you can then this is “easy” in the sense of only having to perform $$O(2^n)$$ optimisations on $$O(n)$$ dimensional spaces. Compared to our above super-exponential function this isn’t so bad at all.

Let $$t$$ be a tournament and let $$A \subseteq \mathcal{C}$$. Through a slight abuse of notation, define the restriction $$t|_A$$ to be the restriction of $$t$$ to subsets of $$A$$.

Let $$V$$ be a victory distribution. That is, $$V : \mathcal{C}^2 \to \mathbb{R}$$ with $$0 \leq V(i, j) \leq 1$$ and $$V(i, j) = 1 - V(j, i)$$. $$V(i, j)$$ is the probability that $$i$$ beats $$j$$.

We define the collapse of a tournament $$t$$ with respect to $$V$$, which we will write $$c(t, V)$$, to be the probability distribution on $$\mathcal{C}$$ you get by playing the tournament with this victory function and returning the winner: That is, at each stage from the remaining candidates you use the tournament to pick a pair and the victory function to decide which one of that pair drops out.
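A brute-force sketch of the collapse for the uniform (“even”) tournament, which picks each remaining pair with equal probability. This is illustrative only (the names are mine) and recurses over all subsets, so it is only feasible for small $$n$$:

```python
from functools import lru_cache
from itertools import combinations

def collapse(players, V):
    """Exact winner distribution c(t, V) for the uniform tournament:
    at each stage pick a pair uniformly at random, let the victory
    function V(i, j) = P(i beats j) decide who drops out, and recurse
    on the survivors until one player remains."""
    @lru_cache(maxsize=None)
    def go(remaining):
        if len(remaining) == 1:
            return {remaining[0]: 1.0}
        pairs = list(combinations(remaining, 2))
        dist = {x: 0.0 for x in remaining}
        for i, j in pairs:
            for loser, p in ((j, V(i, j)), (i, V(j, i))):
                survivors = tuple(x for x in remaining if x != loser)
                for winner, q in go(survivors).items():
                    dist[winner] += p * q / len(pairs)
        return dist
    return go(tuple(sorted(players)))
```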

Theorem: Let $$f : \bigcup\limits_{A \subseteq \mathcal{C}} \mathcal{T}(A) \to \mathbb{R}$$. Suppose there exists a victory function $$V$$ and a linear function $$g : \mathcal{L}(\mathcal{C}) \to \mathbb{R}$$ (where $$\mathcal{L}(A)$$ is the set of random variables taking values in $$A$$) such that $$f(t) = g(c(t, V))$$. That is, $$f$$ is $$g$$ applied to the collapse of the tournament. Then $$f|_{\mathcal{T}(\mathcal{C})}$$ is maximized by a tournament $$t$$ such that for every $$A \subseteq \mathcal{C}$$, $$t|_A$$ also maximizes $$f|_{\mathcal{T}(A)}$$. Further, every tournament maximizing $$f|_{\mathcal{T}(A)}$$ is the restriction of one which maximizes $$f|_{\mathcal{T}(\mathcal{C})}$$. Further, these maxima or minima are achieved by a deterministic tournament.

Proof:

This is because $$c(t|_A, V)$$ is a non-negative linear combination of $$\{ c(t|_{A \setminus \{x\}}, V) : x \in A\}$$ and thus, since $$g$$ is linear, $$f(t|_A)$$ is a non-negative linear combination of $$\{f(t|_{A \setminus \{x\}}) : x \in A\}$$. Thus any increase or decrease in these corresponds to an increase or decrease in $$f(t|_A)$$. Further, because $$f(t|_A)$$ is only a function of the $$f(t|_{A \setminus \{x\}})$$, any maxima/minima will work.

Why can we pick a deterministic tournament? Well because linear functions take their extreme values on extreme points, and the extreme points are the deterministic tournaments.

QED

In particular this works absolutely fine for finding the best and worst victory probabilities for a given candidate, because these are simple linear functions of the victory distribution (indeed they’re just coordinate functions). So those numbers I quoted last time are perfectly fine.

However, the entropy numbers I quoted… might be the true extremes, thanks to the extreme simplicity of the example I used, but in general adopting this approach to finding entropy extrema is completely bogus. Because entropy is concave, the entropy maximization approach will probably work pretty well (the entropy of a tournament is at least that of its subtournaments). It might even find the maximum entropy deterministic tournament, I’m not entirely sure, but it will not in general find the maximum.

# Let’s play a game

Edit: I’ve noticed that my optimisation functions, well, don’t actually optimise. They successfully produce a somewhat optimal value of the function they’re optimising and produce valid tournaments, but in order to produce a global optimum they make certain assumptions about the shape of the function they’re optimising that are not valid. So take any claims of optimality in this post with a pinch of salt.

The game we’re going to play isn’t very fun. But that’s OK, because it’s not actually going to be us playing it. This is a very simple game that is designed as a starting point for exploring some of the properties of the tournament rigging library I’m working on.

The game we’re going to play is good-card bad-card. Each player has a deck. This deck has two types of cards in it: Good and bad.

Play proceeds as follows: each player draws a card. If one of them draws a good card and the other draws a bad card, the player with the good card wins. Otherwise, if they both draw the same type of card, they shuffle their cards back into their respective decks and play another round. They do this until someone wins a round.

The interesting feature of the game comes from the fact that different players have different fractions of good cards in their decks (every player has at least one good card and at least one bad card, preventing the possibility that both of them will sit there endlessly drawing the same card). This means that in any given situation one player may have a clear advantage.

Suppose we have players $$0, \ldots, n$$, and suppose player $$i$$ has a fraction of good cards $$g_i$$. Then the probability of player $$i$$ winning a round against player $$j$$ is $$g_i - g_i g_j$$ (because this is the probability of $$i$$ drawing a good card minus the probability of them both drawing a good card). This means the probability of $$i$$ winning the whole game is $$\frac{g_i - g_i g_j}{g_i + g_j - 2 g_i g_j}$$ (because the probability of winning the game is proportional to the probability of winning any given round).
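The two formulas above translate directly into code (hypothetical helper names, assuming fractions of good cards as inputs):

```python
def round_win(gi, gj):
    # Probability that i wins a single round: i draws good, j draws bad.
    # gi * (1 - gj) expands to gi - gi * gj, as in the formula.
    return gi * (1 - gj)

def game_win(gi, gj):
    # Probability that i wins the whole game: condition on some round
    # having a winner, i.e. divide by P(i wins round) + P(j wins round).
    return round_win(gi, gj) / (round_win(gi, gj) + round_win(gj, gi))

# e.g. a 90% deck against an 80% deck wins about 69% of the time.
```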

Let’s do a worked example of what this looks like:

```
In [1]: import tournament as t

In [2]: import numpy as np

In [3]: x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

In [4]: Ax = t.good_card_bad_card(x)

In [5]: t.victory_probabilities(Ax, t.even_selector)
Out[5]: array([ 0.00128291,  0.00392595,  0.00865896,  0.01682629,  0.03099347,
        0.05650568,  0.10608786,  0.21727215,  0.55844674])
```

This is the probability of victory when each pair of contenders is picked at random from the set of remaining pairs. In this particular model of tournaments (where there’s no state to pair off people evenly) this seems the “fairest” and, unsurprisingly, the better your score the better your chances of winning. To almost a surprising degree really: The deck with 90% good cards is more than twice as likely to beat the deck with 80% good cards, despite the fact that in a head to head fight it only has a 69% chance of winning.

Two other… I won’t say obvious, but at least somewhat natural, choices are to maximize and minimize the entropy of the results. Maximizing the entropy could be regarded as keeping the competition exciting, minimizing could be regarded as trying to highlight the actually best candidate.

```
In [7]: t.victory_probabilities(Ax, t.max_entropy_selector)
Out[7]: array([ 0.10300318,  0.10353773,  0.08934311,  0.07525026,  0.06495158,
        0.05973063,  0.06176968,  0.08015912,  0.36225472])

In [8]: t.victory_probabilities(Ax, t.min_entropy_selector)
Out[8]: array([ 0.00162647,  0.00362818,  0.0052852 ,  0.00680589,  0.00863982,
        0.01165419,  0.01830958,  0.08015912,  0.86389154])
```

The max entropy tournament is… weird. It results in the lower scoring items being more likely to win than the mid scoring items! Why does that happen? Well, basically because even competitions are higher entropy than unbalanced ones (entropy is approximately “how much information does knowing the result of this event give me on average?”. When an event is a foregone conclusion knowing its outcome gives you very little information).

An interesting thing is how these relate to the tournaments which are rigged for or against the top candidate:

```
In [9]: t.victory_probabilities(Ax, t.boosting_selector(8))
Out[9]: array([ 0.00162647,  0.00362818,  0.0052852 ,  0.00680589,  0.00863982,
        0.01165419,  0.01830958,  0.08015912,  0.86389154])

In [10]: t.victory_probabilities(Ax, t.killing_selector(8))
Out[10]: array([ 0.00178168,  0.01591364,  0.01091917,  0.03665989,  0.06297012,
        0.11254326,  0.13593336,  0.26102417,  0.36225472])
```

Interesting to note is that the probability of the top candidate winning is the best possible under the minimum entropy tournament and the worst possible under the maximum entropy tournament. This isn’t terribly surprising given what they’re doing: In the minimum entropy tournament it pays to keep the top candidate around. In the max entropy tournament it pays to kill them off.

A thing in there I’m a little suspicious of is that the min entropy tournament appears to correspond exactly to the tournament designed to kill off the top candidate. This is suspicious because once the top candidate has been eliminated, that tournament falls back to even selection, which is not the minimum entropy solution in general. I think this is a bug in my entropy minimization/maximization code which I’m going to have to think about how to solve. The problem is likely that while it correctly chooses the single pair that optimizes the entropy of the child, that’s not necessarily the best way to optimize the overall entropy, because we can choose a distribution over tournaments. For minimization I think we want to always pick a single tournament, and for maximization I think we might want to pick a more complicated distribution over tournaments. So take this section with a grain of salt: these are still valid tournaments with low and high entropies respectively, but they may not be the global minima and maxima.

One thing that does seem to be correct is if we look at the entropies of all the tournaments designed to boost a single candidate, the entropy decreases as the score of the candidate increases:

```
In [11]: np.array([t.entropy(t.victory_probabilities(Ax, t.boosting_selector(i)))
                   for i in xrange(9)])
Out[11]: array([ 1.96741165,  1.89865646,  1.80417052,  1.69265673,  1.56505675,
        1.41703054,  1.23658575,  0.99217258,  0.5873777 ])
```

Let’s take a look at what the best and worst victory probabilities for each candidate are:

```
In [12]: np.array([t.victory_probabilities(Ax, t.boosting_selector(i))[i] for i in xrange(9)])
Out[12]: array([ 0.10300318,  0.18244198,  0.24852718,  0.30746763,  0.36347069,
        0.4205425 ,  0.48460246,  0.57025324,  0.86389154])

In [13]: np.array([t.victory_probabilities(Ax, t.killing_selector(i))[i] for i in xrange(9)])
Out[13]: array([ 9.35042740e-10,  3.05782789e-07,  8.93073705e-06,  1.02246579e-04,
        7.25760000e-04,  3.93070198e-03,  1.83095795e-02,  8.01591233e-02,
        3.62254715e-01])
```

So unsurprisingly these are still increasing with score. Further, your worst candidate can never be that likely to win, and your best candidate is never that likely to lose: The worst victory probability for the top candidate is still more than three times better than the best victory probability for the worst candidate.

I’m not really sure where I’m going with this, except to say “yep, for this very simple game tournaments mostly align with our intuition”. For this simple game with a straight linear ordering of the candidates in terms of quality, you can distort the results quite far, but not impossibly far.

Next I’m going to look at some more interesting games where you can have non-transitive victory probabilities. I’m not yet sure, but I suspect you can get much more interesting behaviour there.

# Consider the tournament

Due to reasons, I found myself thinking about the following problem:

We have a two-player game with players $$1, \ldots, n$$. Player $$i$$ beats player $$j$$ with probability $$A_{ij}$$ ( so $$A_{ij} = 1 – A_{ji}$$).

We wish to run a tournament with these players. The tournament is played as a sequence of games between pairs. The loser of each game drops out, and we choose another pair from the remainder until there is only one player left (assume that a player’s chance of winning any game they participate in is unaffected by the games they’ve already played).

My question is this: How much can we influence the result of the tournament by changing which pairs play against each other?

The short answer is “a lot”. There are clear upper and lower bounds on a given player winning (the upper bound on player $$i$$ winning is $$\max\limits_{j} A_{ij}$$ – they don’t play until the very end and then play against the player they are most likely to beat – while the lower bound is $$\prod\limits_{j} A_{ij}$$ – they play against every other player), but there’s a lot of variation in there and these bounds won’t be achievable for all tournaments.
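Both bounds are one-liners given the matrix $$A$$ (hypothetical helpers, not part of the module mentioned later):

```python
def best_case(i, A):
    """Upper bound on player i winning: i sits out until the final,
    then meets the opponent they are most likely to beat."""
    return max(A[i][j] for j in range(len(A)) if j != i)

def worst_case(i, A):
    """Lower bound on player i winning: i must beat every other
    player in turn."""
    p = 1.0
    for j in range(len(A)):
        if j != i:
            p *= A[i][j]
    return p
```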

So how to resolve this question? Well, I wrote some code. Have ye a nice little python module for calculating how to rig tournaments.

(Well, it doesn’t actually tell you how to rig tournaments, though it could easily be modified to do so. Instead it tells you the probability of victory under various strategies. Some of those strategies are “recursively maximize this function”).

I’m going to explore this more later, but if I write any more on this right now I’ll be late to work, so in the meantime this is just a “Here’s some code! If you’re interested in this question, feel free to play around with it”.