
How to write good software

I have a thesis on how to write good software that I would like to persuade you of. It’s not an easy process to follow – indeed, much to my shame, I have never successfully followed it – but I think that if more people tried to follow it, it would gradually become easier to achieve, and at the end of it we would have a much better software ecosystem.

It’s a bit of a complicated thesis though, so I’m going to have to ease you into it. Let’s start with two hopefully uncontroversial premises:

  1. Anything you do that you did not set out to achieve is achieved by accident
  2. You are more likely to achieve things if you do them deliberately than you are if you do them accidentally

In particular, if writing good software is not your goal then it is something that happens by accident. Thus if writing good software is not your goal, you are less likely to write good software.

This seems fairly plausible as a starting point.

But now I would like you to ask yourself: when was the last time that writing good software was your goal?

Thought of it? OK.

Now I’d like to ask you this:

Was it really your goal?

Consider the following three scenarios:

  1. I could spend time working on making our software more reliable for our existing users, or I could spend time exploring this cool new feature that adds way more valuable IP and will probably add like 2% to our acquisition price.
  2. I could spend time on making our software more reliable for our existing users, but our existing users are pretty much a captive audience, and if I add this feature we can capture 10% of the market from our competitor’s software.
  3. I could spend time on making our software more reliable for our existing users, but actually I’ve written just about all the papers I can on this idea, so if I want to have something to write about I’ll have to go do something else.

Which branch of these decisions do you think people are likely to take? Which branch do you think would produce better software?

Sure, sometimes adding that feature will produce better software because it will radically change the utility of your system, but if you’re making these decisions over and over again, you’re probably going to be producing some pretty awful software.

Sure you might want to produce better software, but ultimately that isn’t your goal. Or at least it’s not your organisation’s goal. You can subvert the organisation’s goal to a certain degree, but if you subvert it too far then you’re not doing your job and you get fired. There is room for a certain amount of individual subversion, but as a group it only goes so far before the corporate overmind realises that its drones are deviating… I mean, before the company realises that one of its teams is off-mission, and steps in to do something about it.

A long time ago, when I was young and naive and working for a company I shall not name, I had a conversation with a colleague I shall refer to as B. It went roughly as follows:

Me: We write really buggy software. We should probably not do that.

B: Why?

Me: Well, bugs are bad. They make users unhappy.

B: Great. That means they’ll pay us to fix the bugs.

To the best of my knowledge this was not an official company policy, but it was still remarkably educational in terms of how incentives work.

This is the thing: When your goal is to make money, you will generally speaking do the thing that causes you to make money. While the idea of a corporate obligation to maximize profits is nonsense, nevertheless generally speaking behaviours which cause you to make more money will be tolerated and behaviours which cause you to make less won’t. You can add other constraints, such as morality and good taste, but generally those constraints will be rubbed away at the edges by the structure of your incentives.

Further, money breeds money – by increasing your wealth you increase your capabilities, which increases your ability to make more money. So by constraining yourself you are in danger of ceding the playing field to someone who hasn’t – if they decide they want your share of the market they will usually out-compete you by being able to throw money at the problem.

Sometimes this is not harmful to the goal of producing good software. It can be the case that the best way to make money is to write good software.

But normally the best way to make money is to write good-ish software. You know that thing where getting 80% of the way there takes 80% of the work and then the remaining 20% takes the other 80% of the work? Well, it turns out that users are probably not going to pay twice as much for something which is only 25% better (100% / 80% = 125%). Especially now that we’ve been doing this for decades and have conditioned them to believe that this is just what software is like. And investors and acquiring companies? They literally couldn’t care less. You think they use your software? Nah. They just care about the IP, the talent, and the data you’ve built up. Turns out none of that relies on quality software.

“But David!”, you cry. “What about open source? Doesn’t open source solve these problems? Can’t we write quality open source software?”

So first off, it’s worth noting that regardless of whether we can write quality open source software, the reality seems to be that we don’t. Consider Linux on the desktop vs OS X or even Windows. Sure, I use it and sorta like it, but I don’t think I could keep a straight face if I tried to tell you it was higher quality.

But secondly… this is one of the major misconceptions of open source. You know what a successful large open source project looks like? It looks like one where a great deal of money has been spent on its development. Successful large scale software development needs a whole bunch of people spending a lot of time working on it, and it turns out that people need to eat. And it turns out that in order to eat you generally need money. So open source development succeeds not as a way for individuals to collaborate on making quality software, but as a way for corporations to collaborate on mutually making more profits. This makes the incentive structure of open source remarkably similar to the incentive structure of proprietary software.

There’s of course a second type of open source – the small single (or at most several) person project which does one very specific thing. Most commonly exemplified in the ecosystem of libraries that most programming languages have.

Given the average quality of those ecosystems, we’re still not producing much evidence that we can write good software this way.

The thing is, in order for this to result in good software, the author must:

  1. Have enough free time to work on it
  2. Care enough about the problem that they will devote as much of their free time to it as it needs
  3. Actually want to write quality software rather than just software which scratches their own itch
  4. Actually be good at writing quality software
  5. Understand the user base of their software well enough that their idea of quality lines up with their users’ ideas of quality

It definitely happens. There are some pretty good libraries out there maintained by one or two developers apiece. But these people are basically unicorns, and relying on an adequate supply of unicorns is probably a sign that your approach is in trouble.

And even if we could solve this problem it still wouldn’t be good enough, because most problems are too large to be solved by individuals and need teams working on them.

So how do you write good software? You get together teams of talented people, you give them the capability to work on it without worrying about where their next meal is coming from, and you give them the goal of producing good software. You do this without them having to worry about their project drying up due to lack of funding, or the source of their funding telling them yes it’s very nice that they want things to work but maybe could they make them work now?

So, now that we’ve smashed the capitalist structure of dominance… I mean, done away with the corrupting influence of financial incentives… that is, produced an environment in which people can focus on the task of writing quality software, are we done?

Oh, if only.

Let’s recall a line I slipped into a bullet point list up above. In order to write quality software you need to “Understand the user base of [your] software well enough that [your] idea of quality lines up with [your] users’ ideas of quality”.

How well do you think that currently happens?

The answer is that it happens in exactly one case: When your users are exactly like you. Sure, you can, do and should talk to your users and find out what they’re like, and in the course of doing so you may educate yourself into seeing through their eyes to some degree, but ultimately your vision of what quality software looks like is deeply tied in with your view of the world.

One of the core features of the feminist concept of privilege is that your view of the world is intrinsically shaped by who you are and what you’ve experienced. If you’ve not experienced a problem, or don’t know anyone who has experienced that problem, then you probably don’t see that problem as acutely as those who have (and may not see it at all). This is how we end up with software that only works in certain time zones, or doesn’t handle foreign languages, or imposes a real name policy, or asks for your gender as “Male or Female?”. The authors didn’t experience those problems, so didn’t think they mattered.

(This is also a function of financial incentives: e.g. If < 1% of our users are non-binary and we can save effort on data modelling and do a bunch of “charming” gendered things for ad targeting if we just ask male/female, which way does the path of more money lie? Unicode is hard, and if we’re not interested in the foreign market, which way does the money lie? But we’ve already solved that problem, right?)

As we’re fond of smugly declaring as an industry, software is eating the world. More and more things are getting automated, and more and more automation is going to be just a function of software.

What does this mean? It means that our user base is everyone. Even if they’re not using our software, they’re using things that are using our software, or they’re using things that will be “disrupted” by our software.

Good software is software that makes lives better. If your software is making things worse it doesn’t matter how solid its execution is. You may have written the best spyware engine in the world, but I’m not going to think well of you for it and I’m not going to call your software good.

So in order to write good software you need to understand your users. And your users are everyone.

So what do you do?

Well, I’ll tell you what you don’t do. You don’t assemble your team of software developers from a group that looks remarkably similar to the top end of the existing structures of social dominance. Because if you do then what you are developing is software that solves peoples’ problems in direct proportion to how well society already solves their problems.

Good software is not software which reinforces the entrenched power structure.

Look around your software company. If the team is anything like the ones I’ve worked in, it’s mostly white, mostly male, mostly middle class. If you’re lucky you’re working in a team which is at least not perpetuating this problem, but the problem is so endemic in our entire industry that right now software absolutely is mostly written by the people who are socially more or less at the top (and mostly in service to the people who are actually at the top).

So in order to produce good software we need to stop doing that. Good software is produced by teams who represent the people they’re building software for.

So now that we’ve formed a socialist utopia… I mean, removed the financial incentives that prevent writing good software, and smashed the patriarchy… I mean, solved the fundamental lack of representation in our industry, can we write good software?

Well, maybe. At the very least, now we can start to learn how.


Reevaluating some testing philosophy

Over the past year or so I’ve started to have serious doubts about some of my previous attitudes on testing. I still think it’s good, I still don’t particularly believe in TDD, but I also think some of my previous approaches and opinions are a bit misguided.

This is about one such doubt, which I encountered just today and which is causing me to re-evaluate some things. I previously strongly held the two following opinions:

  1. You should test to the public API rather than having your tests depending on internals.
  2. Randomized quickcheck style testing is awesome. Although you probably want to turn failing quickcheck tests into deterministic tests, and sometimes it’s worth writing tests that are hard to write in this manner, quickcheck will probably do a better job of testing your code than you will.

These two stances turn out to be in conflict. It’s not impossible to reconcile them, but it requires a significant amount of work. I think that amount of work might be worth doing because it makes your code better, but it’s worth keeping an eye on.

As I’ve mentioned previously, my testing strategy for intmap is as follows:

  1. I have a randomized test case generator that models the behaviour the library should have and generates test cases to check that it does (a minimal sketch of the idea follows this list)
  2. I have a set of deterministic tests that run before the randomized ones. The main benefit of these is they’re reproducible and a hell of a lot faster. They’re mostly extracted from failing randomized test cases.
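
Here is that minimal sketch: the library runs in lockstep with a plain dict, which acts as the model of correct behaviour. The binding shown (make_intmap, insert, lookup) is hypothetical, and the real generator covers far more operations than this.

    import random

    def check_against_model(make_intmap, steps=200):
        """Run random inserts and lookups against the real structure and
        a plain dict in lockstep, asserting that lookups always agree."""
        m = make_intmap()  # hypothetical constructor for the real library
        model = {}         # the obviously-correct reference implementation
        for _ in range(steps):
            key = random.randrange(1 << 16)
            if random.random() < 0.5:
                value = random.randrange(1 << 16)
                m.insert(key, value)
                model[key] = value
            else:
                # whatever the model says, the library must agree with
                assert m.lookup(key) == model.get(key)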

Today I was writing some exceedingly trivial performance tests so that I could do some profiling and testing of whether some of the optimisations I was performing were actually wins (at least in these benchmarks the answer is that some of them really are, some of them really aren’t). Then I wrote a new benchmark and it segfaulted. Given that the test coverage was supposed to be pretty comprehensive, and the test suite was passing, this was pretty disappointing.

How did this happen?

Well the proximate cause of the segfault was that my allocator code was super buggy because I’ve never written an allocator before and it turns out that writing allocators is harder than I thought. If you’re interested in the specifics, here is the commit that fixes it. But why didn’t the tests catch it?

Randomized testing essentially relies on two things in order to work.

  1. It has a not-tiny probability of triggering any given bug
  2. It runs enough times that that not-tiny probability is inflated into a significant probability of triggering it.

What does this end probability of triggering a bug look like? Let’s do some maths!

Define:

  • q(b) is the probability of a single run triggering bug b
  • t is the average amount of time it takes for a run
  • T is the total amount of time we have to devote to running randomized tests
  • p(b) is the probability of a set of test runs finding the bug.

If we have T time and each run takes t, then we make approximately \(\frac{T}{t}\) runs (this isn’t really right, but assume low variance in the length of each run). Then the probability of finding this bug is \(p(b) = 1 - (1 - q(b))^{\frac{T}{t}} \approx q(b) \frac{T}{t}\).

This formula basically shows the two reasons why testing only the public API is sometimes going to produce much worse results than testing internal APIs.

The first is simple: In most cases (and very much in this one), testing the public API is going to be slower than just testing the internal API. Why? Well, because it’s doing a lot of stuff that isn’t in the internal API. Not doing stuff is faster than doing stuff. If the difference isn’t huge, this doesn’t matter, but in my case I was doing a phenomenal amount of book-keeping and unrelated stuff, so the number of iterations I was performing was much lower than it would have been if I’d just been testing the allocator directly.
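
To put rough numbers on this, here is the formula as a quick python calculation; the values of q, t and T are entirely made up for illustration:

    def detection_probability(q, t, T):
        """p(b) = 1 - (1 - q)^(T/t): chance that T seconds of runs,
        each taking t seconds, trigger a bug hit with probability q."""
        return 1 - (1 - q) ** (T / t)

    # Made-up numbers: a bug that one run in ten thousand would trigger.
    print(detection_probability(1e-4, t=0.01, T=60))  # fast runs: ~0.45
    print(detection_probability(1e-4, t=0.10, T=60))  # slow runs: ~0.06

A tenfold difference in the cost of a run turns a bug you would probably have found into one you will probably miss.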

The second is somewhat more subtle: testing the public API may substantially reduce \(q(b)\). If it reduces it to 0, and your testing coverage is good enough that it can trigger any conceivable usage of the public API, who cares – a bug in the internal API that can never be triggered by the public API is a non-issue. The danger case is when it reduces it to something small enough that it probably won’t be caught in your testing, because things which aren’t caught in testing but are not impossible will almost certainly happen in production. In the most harmless case, your users will basically fuzz-test your API for you by throwing data you never expected at it; in the most harmful case, your users are actively adversarial and are looking for exploits.

How does this happen?

Essentially it happens because q(b) is intimately dependent on both b and the shape of your distribution. The space of all valid examples is effectively infinite (in reality it’s limited by computer memory, but it’s conceptually infinite), which means that it’s impossible to have a uniform distribution, which means that your distribution is going to be peaky – it’s going to cluster around certain smaller regions with high probability.

In fact, this peakiness is not just inevitable but it’s desirable, because some regions are going to be more buggy than others: If you’ve tested what happens on a few thousand random integers between 0 and a billion, testing a few thousand more random ones is probably not going to be very useful. But you probably want to make sure you test 0 and 1 too, because they’re boundary cases and thus more likely to trigger bugs.

So this is what you basically want to do with randomized testing: Arrange it so that you have lots of peaks in places that are more likely to trigger bugs. Most randomized testing does this by basically just generating tests that cluster around edge cases with relatively high probability.
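
A toy version of what such a peaky generator for integers might look like (the boundary values and probabilities here are purely illustrative):

    import random

    BOUNDARIES = [0, 1, -1, 2**31 - 1, -2**31]

    def biased_int():
        """Sample integers with deliberate peaks: boundary values often,
        small values mostly, the full range only occasionally."""
        r = random.random()
        if r < 0.3:
            return random.choice(BOUNDARIES)
        if r < 0.9:
            return random.randrange(-100, 100)
        return random.randrange(-2**63, 2**63)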

The problem is that edge cases in your public API don’t necessarily translate into edge cases in your private API. In my case, I was doing lots of intmap unions and intersections, which is really good for triggering edge cases in the basic intmap logic, but this was mostly just translating into really very dull uses of the allocator – it rarely created a new pool and mostly just shuffled stuff back and forth from the free list.

If I had been testing the allocator directly then I would have tuned a test generator to exercise it more thoroughly – by not restricting myself to the sort of allocation patterns I can easily generate from intmaps, I would have found these interesting bugs much sooner.

In the short-term I’ve solved this by simply writing some deterministic tests for exercising the allocator a bit better.

In the long-term though I think the solution is clear: the allocator needs to be treated in every way as if it were a public API. It may not really be public – its intent and optimisations are so tailored I’m not expecting it to be useful to anyone else – but any bugs lurking in it are going to eventually make their way into the public API, and if I don’t test it directly the hard-to-trigger ones are just going to lurk undiscovered until the worst possible moment.

Fortunately I’d already factored out the pool code into its own thing. I hadn’t done this for any especially compelling reasons – it’s just that the code was already getting quite long and I wanted to break it up into separate files – but it’s going to be very useful now. Because this is the sort of thing you need to do in order to reconcile my original two beliefs: factor any code you want to test on its own out into its own library. This is generally a good design principle anyway.

Does this mean that the two principles are compatible after all as long as you’re writing good code in the first place? Well… kinda. But only if you define “good code” as “code that doesn’t have any internal only APIs”. At this point the first principle is satisfied vacuously – you’re not testing your internal APIs because you don’t have any. I’m not sure that’s wrong, but it feels a bit extreme, and I think it only works because I’ve changed the definition of what an internal API looks like.


New project: IntMap implementation in C

So I’ve been hacking on a new project recently. It’s an implementation of “Fast Mergeable Integer Maps” (known to Haskell and Scala programmers as IntMap) in C. The implementation is available on Github under a 2-clause BSD license. I’ve not done any work on packaging it yet, mostly because I’ve no idea how to package a C library (I suspect the answer is something like sacrifice a goat to acquire knowledge of how autotools works).

It’s currently pretty minimal. The core of the API is there with support for lookup, insert, union and enumeration, but I’ve put some thought into how it looks, and what’s there should be pretty stable as an API.

Developing it has been… interesting. It’s rather shaken my confidence in my ability to write non-trivial C programs, truth be told. I’ve made pretty much every mistake I possibly could have – it’s mostly been trivial stuff (using the wrong variable, getting checks the wrong way around, etc) with a few interesting things (getting pointer addition wrong due to incorrect pointer types), but my testing process has been full of segfaults.

In the end I’m pretty confident I’ve caught most of the bugs in it, due to the testing process I’ve adopted (more on that later), but I’m a little worried about the error density – C isn’t a language that is forgiving of programmer error, and apparently I rely a lot more on that forgiveness in my normal programming than I’d previously been aware. Clearly, if I want to be writing C code on a regular basis, I’m going to have to up my testing game if nothing else.

Part of the problem here is that this library ends up emulating both garbage collection (through reference counting) and pattern matching (through unions and a type tag) in C, both of which are fiddly and error prone to do, and even in Haskell the algorithms used are conceptually clear but a little tricky to implement correctly. All this adds up to a pile of edge cases that are easy to get wrong, and apparently I wasn’t so good at not getting them wrong.

So I turned to my standard tool for getting data structures correct when there are lots of fiddly edge cases: QuickCheck style testing. Initially I did this using python bindings to intmap and hypothesis. The problem I found with doing so was that the result is really quite slow, so it was difficult to get thorough enough coverage. This was especially so under valgrind (python, it turns out, does not run well under valgrind). I could have solved this the same way I did in pyjq, by generating a C program for each test case, but that wasn’t going to interact well with hypothesis and doesn’t solve the slowness problem.

So what to do? Well, I thought about using quickcheck style testing in C – there is an existing C quickcheck implementation – but frankly that sounds pretty hellish.

So I ended up with a different approach. Rather than generating random data, I generate random programs which should all be valid uses of the library. The essential idea is to generate a collection of intmaps, with a parallel implementation in python that allows us to determine what should be in them. This then allows us to test a wide variety of assertions based on what we know should be in there.
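
As a stripped-down sketch of the idea (the C function names here are made up, and the real generated programs are far more involved): a python dict plays the role of the model, and lookups in the generated C are asserted against what the model says they must return.

    import random

    def generate_test_program(n_ops=1000):
        """Emit a C test program of random intmap operations, with
        expected lookup results computed by a dict model in python."""
        model = {}
        lines = ['#include <assert.h>',
                 '#include "intmap.h"',
                 'int main(void){',
                 '  intmap m = intmap_empty();']
        for _ in range(n_ops):
            key = random.randrange(256)
            if random.random() < 0.5:
                value = random.randrange(256)
                model[key] = value
                lines.append('  m = intmap_insert(m, %d, %d);' % (key, value))
            elif key in model:
                # the python model tells us what the C code must return
                lines.append('  assert(intmap_lookup(m, %d) == %d);'
                             % (key, model[key]))
        lines.extend(['  return 0;', '}'])
        return '\n'.join(lines)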

The generated programs aren’t pretty. Here’s an example. However even an individual one produces pretty good coverage, and randomly generating 1000 of them should be more than enough to find most bugs. I’m sure there are bits it misses (for example it’s not currently very good at testing all the branches of equality – it can in principle do so, but I think some of the edge cases are too low-probability), and it’s not impossible that there are still some subtle bugs lurking in there, but I’m an awful lot more confident about the quality of this code than I was before I wrote this program generator.

It’s currently still quite slow, but that’s because it’s doing a thousand runs of compiling 4000 line C programs (twice – once in debug, once not) and running each of those programs (twice – once under valgrind, once not). With all that work, it should hopefully be reasonably good at making up for my failings as a C programmer.


Experimenting with python bindings to jq

I’ve been hacking on a little side project this weekend. It’s still very much a work in progress, but I’d like to tell you a bit about the process of making it, because it’s always fun to show off your baby even if it’s an ugly one. Also, some of the implementation details may be of independent interest.

What is it? It’s a set of bindings for calling jq from python. I’m a huge fan of jq, I quite like python, and I thought I’d see if I could get my two friends to be buddies. Well, their friendship is still a pretty tentative one, but they seem to be getting along.

Unfortunately I discovered amidst writing this blog post that a similar thing already exists. It takes quite a different approach, and I think I prefer mine (although I confess the Cython version does look rather nicer than my C binding). At any rate, I learned a moderate amount about jq and writing more complicated C bindings by writing this, so I’m perfectly OK with having done the work anyway.

This is especially true because my actual initial interest was in seeing if I could use jq inside of postgres, but as I’m unfamiliar with both the jq internals and writing postgres C extensions I figured that something that would allow me to familiarise myself with one at a time might be helpful. I’ve been writing a bunch of python code that used ctypes to call C libraries I’ve written recently, so this seemed like a natural choice.

I initially prototyped this by just calling the executable. This worked a treat and took me about 10 minutes. Binding to the C library took… longer.

The problem is that libjq isn’t really designed as a library. It’s essentially the undocumented extracted internals of a command line program. As such, this is more or less how I’ve had to treat it. In particular my primary entry point essentially emulates the command line program.

There are three interesting files in my bindings:

  • pyjq/__init__.py contains the high level API for accessing jq from python. Basically you can define filters which transform iterables and that’s it.
  • pyjq/binding.py contains the lower level bindings to jq
  • pyjq/compat.c is where a lot of the real work is. I’ve essentially created a wrapper that emulates the jq command line utility’s main function.

The basic implementation strategy is this:

We define a jq_compat object which maintains a jq_state plus a bunch of other state necessary for the binding. We essentially define a complete wrapper API in C, and then write the lightest possible interaction layer with it in python.
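
The general shape of the python side looks something like this. jq_compat_write is the real method described below, but the library path and the other symbol names here are illustrative, not the binding’s actual API:

    import ctypes

    # Load the compiled wrapper; path and most names are made up.
    lib = ctypes.CDLL("./libpyjqcompat.so")
    lib.jq_compat_new.restype = ctypes.c_void_p
    lib.jq_compat_new.argtypes = [ctypes.c_char_p]
    lib.jq_compat_write.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
    lib.jq_compat_read_output.restype = ctypes.c_char_p
    lib.jq_compat_read_output.argtypes = [ctypes.c_void_p]

    class Compat:
        """Thin python shim over the C wrapper object."""
        def __init__(self, program):
            # compile the jq program and set up the jq_state
            self.handle = lib.jq_compat_new(program.encode("utf-8"))

        def write(self, json_text):
            # feed one complete JSON value into the parser
            lib.jq_compat_write(self.handle, json_text.encode("utf-8"))

        def read_output(self):
            # drain whatever the filter has emitted so far
            return lib.jq_compat_read_output(self.handle).decode("utf-8")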

Among the things the compat object is responsible for is maintaining two growable buffers, output and error. These are (slightly patched) vpools which are used in a way that looks suspiciously like stdout and stderr. When the binding emits data it writes lines of JSON to its output buffer. When errors occur it writes error messages to its error buffer (It also sets a flag saying that an error has occurred).

There isn’t a vpool for stdin – I initially had one but then realised that it was completely unnecessary and took it out. In the jq CLI what essentially happens is that it loops over stdin in blocks and feeds them to a parser instance, which may then write to stdout and stderr. In the binding we invert the control of this: we maintain a parser on the compat object which we feed with the method jq_compat_write (unlike in the jq CLI, we always write complete blocks to it). This then causes it to immediately parse input values, which get passed to the jq_state. We then check to see if this causes the state to emit any objects and if so write them to the output buffer.

On the python side what happens is we call jq.write(some_object). This gets serialised to JSON (I’d like to provide a more compact representation for passing between the python and C sides of the binding, but given that jq is primarily about JSON and the python JSON library is convenient, this seemed a sensible choice) and fed to jq_compat_write, with any results the filter emits ending up in the output buffer, separated by newlines.

From the python side, when we want to iterate, we first read everything pending from the compat’s output buffer. We split this by lines, parse each line as JSON, and put the results in a FIFO queue implemented as a simple rotating array (this is so that we can resume iteration if we break midway through).
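
The rotating array isn’t the interesting part, but for concreteness the shape of the idea is roughly this (a sketch, not the binding’s actual code):

    import json

    class RingFIFO:
        """FIFO over a rotating array: popping advances a head index
        rather than shifting elements, so iteration can stop and
        resume cheaply."""
        def __init__(self):
            self.items = []
            self.head = 0

        def push_output(self, text):
            # each pending line from the output buffer is one JSON value
            for line in text.splitlines():
                self.items.append(json.loads(line))

        def pop(self):
            item = self.items[self.head]
            self.head += 1
            if self.head > len(self.items) // 2:
                # compact occasionally so memory doesn't grow forever
                self.items = self.items[self.head:]
                self.head = 0
            return item

        def __bool__(self):
            return self.head < len(self.items)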

That’s pretty much it from an implementation point of view, but there’s one clever (and slightly silly) thing that I’d like to mention: the bindings allow you to intercept calls to the C library and generate a C program from them. This is used by the test framework. First the python tests run. As part of this they generate a C program which makes the same sequence of C calls. This is then compiled and run under valgrind. Why? Well, mostly because it makes it much easier to isolate errors – I’ve never had much luck with running python under valgrind, even with suppressions, and this circumvents that. It also helps to confirm whether the problem is caused by something crossing the python/C boundary.
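
A sketch of how such an interception layer might look, assuming a ctypes-style library object whose calls take only ints and strings (the real thing has to handle return values and richer argument types):

    class RecordingLib:
        """Wrap a ctypes-style library so each call is both executed
        and logged as a line of C."""
        def __init__(self, lib):
            self.lib = lib
            self.c_lines = []

        def __getattr__(self, name):
            fn = getattr(self.lib, name)
            def call(*args):
                rendered = ", ".join('"%s"' % a if isinstance(a, str)
                                     else str(a) for a in args)
                self.c_lines.append("  %s(%s);" % (name, rendered))
                return fn(*args)
            return call

        def as_c_program(self):
            # wrap the recorded calls in a main() to run under valgrind
            body = "\n".join(self.c_lines)
            return ('#include "compat.h"\nint main(void){\n%s\n'
                    '  return 0;\n}\n' % body)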

I don’t know if this is a project I’m going to continue with – I don’t know if it’s especially useful, even setting aside the existing bindings – but it’s been an interesting experiment.


A sketch-design for a type of non-profit organisation

I found myself asking the following question: Supposing a group of people wanted to get together to spend money towards furthering a particular goal, how might they go about organising themselves? After some mulling this over I hit upon a design that I rather like. I suspect in practice it may have some problems, but it might be a nice starting point.

The specific case I wanted to solve is that people should be able to say “I am willing to spend up to this amount of money to further the goal” but not actually have to spend that amount of money until specific projects come along. Specific projects should then be able to be approved or denied by the group and funded from what people have committed. An example I had in mind was funding the creation and development of particular types of open source projects.

Here is the basic system I came up with:

There is a reserve pool of money held by a trusted member of the organisation. This is expected to be small compared to the total amount of money that has been committed to the cause. All funding managed by the organisation goes via this reserve – if the organisation decides to fund a cause, members first pay into the reserve until it holds enough money, and the reserve then pays out to the project in question.

Joining the organisation

In order to join the organisation you must pay a small fixed up front contribution to the reserve (I’d imagine this to be on the order of £5 – purely nominal) and declare a commitment of an amount of money you are prepared to pay towards the organisation’s aims.

Growing the reserve

When more money than is available in the reserve needs to be spent we must grow it. This is performed as follows:

  1. We fix a time period in which this must be achieved. A week would be normal.
  2. The amount that needs to be added to the reserve is calculated as a fraction of the total amount of money committed.
  3. Each member of the organisation must pay that fraction of their commitment, rounded up to the nearest whole currency unit (so if the amount to grow the reserve by is 1% of the total commitment, someone committing £100 must pay £1, while someone committing £110 must pay £2 – see the sketch after this list). Until they have paid this it counts as their debt to the reserve. Once they have paid this it is subtracted from their commitment (so if we committed £100 and pay £1 we now have £99 left committed).
  4. If the time period elapses and we have not grown the reserve sufficiently due to non-payment from some members, we repeat the process for the remainder.
  5. If the amount of money committed from people who have not failed to pay is now too small to grow the reserve sufficiently, the attempt fails and the proposal cannot be funded.
  6. Any non-payment from members remains as a debt which they should aim to pay into the reserve as soon as possible.
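
To make the rounding rule in step 3 concrete, here it is as a small python sketch (the member names and numbers are made up):

    import math

    def reserve_payments(commitments, fraction):
        """Each member pays the given fraction of their outstanding
        commitment, rounded up to the nearest whole currency unit."""
        return {member: math.ceil(c * fraction)
                for member, c in commitments.items()}

    # The example from step 3: growing the reserve by 1% of commitments.
    print(reserve_payments({"alice": 100, "bob": 110}, fraction=0.01))
    # {'alice': 1, 'bob': 2}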

Requests for funding

Anyone, a member or not, can request money from the organisation. This consists of a specific proposal and an amount of money they would require for it. The organisation then votes on whether or not to accept the proposal.

Votes are performed as follows:

  1. Every member has a voting share. To calculate the voting share you first calculate the amount of money they have paid into the reserve, then you subtract the amount of money they have failed to pay into the reserve when required to do so. If the result is negative their voting share is zero, else their voting share is the square root of the result (square root voting is a thing – it basically means people more invested in the system rightfully get more of a say in it, but prevents a small clique crowding out a large number of minority stakeholders). Note that money paid into the reserve includes the joining fee, so all members start with a vote. A worked example of this arithmetic follows the list.
  2. Every member may choose whether or not to vote yes. An abstention is considered a vote no.
  3. A proposal is approved if the yes votes total two thirds of the total voting share.
  4. If the vote is successful, the reserve is grown to match the amount required and then the money is paid out. For the purposes of growing the reserve, your commitment is the larger of what you had declared when the proposal was made and your current commitment; this stops people from backing out because they don’t like a specific idea.

Minority payment exception

It may be the case that a vote failed but there is enough money committed amongst those who voted yes that they could have paid for it themselves. In this case, they will fund it using only their funds. To do this:

  1. Put aside a reserve which is a fraction of the main reserve equal to the fraction of the voting share that voted yes (a continuation of the worked numbers follows this list).
  2. Grow this reserve from the yes-voters to meet the requirements, and pay out of that.
  3. Any remaining money in it goes back in the original reserve.
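
Continuing the made-up numbers from the voting example above: if that vote had instead failed, the side reserve would start at the yes side’s fraction of the main reserve, and only the yes voters would pay to grow it.

    # Yes voters held 15 of ~22.24 voting shares; with a (made-up)
    # main reserve of 500, the side reserve starts at that fraction.
    yes_share, total_share = 15.0, 22.24
    main_reserve = 500
    side_reserve = main_reserve * yes_share / total_share
    print(round(side_reserve, 2))  # 337.23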

Why not use the minority payment exception everywhere? It’s a good question. I think it increases the variability of what the organisation can achieve quite a lot, and I’m not sure this is a good thing. It feels like having the minority payment option means that the organisation works less effectively as a whole. More importantly, it means that the default behaviour if you’re not really paying attention is to opt out of paying any money, which seems unfortunate. It might be a good idea anyway.

Implementation

This requires a moderate amount of book-keeping, but it’s not too much to do by hand/with a spreadsheet and email. Software could help, but it’s probably not necessary at small scales.

Obviously I’ve no experience in running non-profits, so it may be that I’m ignoring human factors / vastly overestimating how easy it is to get people to actually pay money they’ve precommitted to paying.

I think it might be an interesting approach though, and I have some things I’d quite like to try it for. I don’t know if I can actually overcome the annoyance of moving money around to do that, but we’ll see.
