# Shaping the World

I gave a keynote at PyCon UK recently – it was mostly about the book “Seeing Like A State” and what software developers can learn from it about our effect on the world.

I’ve been meaning to edit it up into a blog post, and totally failing to get around to it, so in lieu of that, here’s my almost entirely unedited script – it’s not that close to the version I actually got up on stage and said, because I saw 800 people looking at me and panicked and all the words went out of my head (apparently this was not at all obvious to people), but the general themes of the two are the same and neither is strictly better than the other – if you prefer text like a sensible person, read this post. If you prefer video, the talk is supposedly pretty good based on the number of people who have said nice things to me about it (I haven’t been able to bear to watch it yet).

The original slides are available here (warning: Don’t load on mobile data. They’re kinda huge). I’ve inserted a couple of the slide images into the post where the words don’t make sense without the accompanying image, but otherwise decided not to clutter the text with images (read: I was too lazy).

Hi, I’m David MacIver. I’m here to talk to you today about the ways which we, as software developers, shape the world, whether we want to or not.

This is a talk about consequences. Most, maybe all, of you are good people. The Python community is great, but I’d be saying that anywhere. Most people are basically good, even though it doesn’t look that way sometimes. But unless you know about what effect your actions have, all the good intentions in the world won’t help you, and good people can still make the world a worse place. I’m going to show you some of the ways that I think we’re currently doing that.

The tool I’m going to use to do this is cultural anthropology: The study of differences and similarities between different cultures and societies. I’m not a cultural anthropologist. I’ve never even taken a class on it, I’ve just read a couple of books. But I wish I’d read those books earlier, and I’d like to share with you some of the important lessons for software development that I’ve drawn from them.

In particular I’d like to talk to you about the work of James C. Scott, and his book “Seeing like a state”. Seeing like a state is about the failure modes of totalitarian regimes, and other attempts to order human societies, which are surprisingly similar to some of the failure modes of software projects. I do recommend reading the book. If you’re like me and not that used to social science writing, it’s a bit of a heavy read, but it’s worth doing. But for now, I’ll highlight what I think are the important points.

Binary Tree by Derrick Coetzee

Before I talk about totalitarian states, I’d like to talk about trees. If you’re a computer scientist, or have had an unfortunate developer job interview recently, a tree is probably something like this. It has branches and leaves, and not much else.

If you’re anyone else, a tree is rather different. It’s a large living organism. It has leaves and branches, sure, but it also has a lot of other context and content. It provides shade, maybe fruit, it has a complex root system. It’s the center of its own little ecosystem, providing shelter and food for birds, insects, and other animals. Compared to the computer scientist’s view of a tree it’s almost infinitely complicated.

But there’s another simplifying view of a tree we could have taken, which is that of the professional forester. A tree isn’t a complex living organism, it’s just potential wood. The context is no longer relevant; all we really care about is the numbers – it costs this much to produce this amount of this grade of wood and, ultimately, this amount of money when you sell the wood.

This is a very profitable view of a tree, but it runs into some difficulties. If you look at a forest, it’s complicated. You’ve got lots of different types of trees. Some of them are useful, some of them are not – not all wood is really saleable, some trees are new and still need time to grow, trees are not lined up with each other so you have to navigate around ones you didn’t want. As well as the difficulty of harvesting, this also creates difficulty measuring – even counting the trees is hard because of this complexity, let alone more detailed accounting of when and what type of wood will be ready, so how can you possibly predict how much wood you’re going to harvest and thus plan around what profit you’re going to make? Particularly a couple of hundred years ago when wood was the basis of a huge proportion of the national economy, this was a big deal. We have a simple view of the outcomes we want, but the complex nature of reality fights back at our attempts to achieve that. So what are we going to do?

Well, we simplify the forest. If the difficulty in achieving our simple goals is that reality is too complicated, we make the reality simpler. As we cut down the forest, we replant it with easy to manage trees in easy to manage lines. We divide it into regions where all of the trees are of the same age. Now we have a relatively constant amount of wood per unit of area, and we can simply just log an entire region at once, and now our profits become predictable and, most importantly, high.

James Scott talks about this sort of thing as “legibility”. The unmanaged forest is illegible – we literally cannot read it, because it has far more complexity than we can possibly hope to handle – while, in contrast, the managed forest is legible – we’ve reshaped its world to be expressible in a small number of variables – basically just the land area, and the number of regions we’ve divided it into. The illegible world is unmanageable, while the legible world is manageable, and we can control it by adjusting a small number of parameters.

In a technical sense, legibility lets us turn our control over reality into optimisation problems. We have some small number of variables, and an outcome we want to optimise for, so we simply reshape the world by finding the values of those variables that maximize that outcome – our profits. And this works great – we have our new simple refined world, and we maximize our profit. Everyone is happy.

Oh, sure, there are all those other people who were using the forest who might not be entirely happy. The illegible natural forest contains fruit for gathering, brush to collect for firewood, animals for hunting, and a dozen other uses all of which are missing from our legible managed forest. Why? Well because those didn’t affect our profit. The natural behaviour of optimisation processes is to destroy everything in their path that isn’t deliberately preserved or directly required for their outcome. If the other use cases didn’t result in profit for us, they’re at best distractions or at worst impediments. Either way we get rid of them. But those only matter to the little people, so who cares? We’re doing great, and we’re making lots of money.

At least, for about eighty years, at which point all of the trees start dying. This really happened. These days we’re a bit better at forest management, and have figured out more of which complexity is necessary and which we can safely ignore, but in early scientific forestry, about 200 years ago in Germany, they learned the hard way that a lot of things they had thought weren’t important really were. There was an entire complex ecological cycle that they’d ignored, and they got away with it for about 80 years because they had a lot of high quality soil left over from that period that they could basically strip mine for a while. But the health of the forest deteriorated over time as the soil got worse, and eventually the trees were unhealthy enough that they started getting sick. And because all of the trees were the same, when one got sick it spread like wildfire to the others. They called it Waldsterben – forest death.

The problem that the German scientific foresters ran into is that complex, natural, systems are often robust in ways that simple, optimised systems are not. They’ve evolved over time, with lots of fiddly little details that have occurred locally to adapt to and patch over problems. Much of that illegibility turns out not to be accidental complexity, but instead the adaptation that was required to make the system work at all. That’s not to say all complexity is necessary, or that there isn’t a simpler system that also works, but if the complexity is there, chances are we can’t just remove it without replacing it with something else and assume the system will keep working, even if it might look like it does for a while.

This isn’t actually a talk about trees, but it is a talk about complexity, and about simplification. And it’s a talk about what happens when we apply this kind of simplification process to people. Because it turns out that people are even more complicated than trees, and we have a long history of trying to fix that, to take complex, messy systems of people and produce nice, simple, well behaved social orders that follow straightforward rules.

This is what James Scott calls Authoritarian High-Modernism – the desire to force people to fit into some rational vision of the world. Often this is done for entirely virtuous reasons – many authoritarian high-modernist projects are utopian in nature – we want everyone to be happy and well fed and fulfilled in their lives. Often they are less virtuous – totalitarian regimes love forcing people into their desired mould. But virtuous or not, they often fail in the same way that early scientific forestry did. Seeing like a state has a bunch of good examples of this. I won’t go into them in detail, but here’s a few.

Portland Street, Southampton, England, by Gary Burt

An amusing example is buildings like this. Have you seen these? Do you know why there are these bricked up windows? Well it’s because of window taxes. A while back, income tax was very unpopular. Depending on who you ask, maybe it still is, but it was even more so back then. But the government wanted to extract money from its citizens. What could they do? Well, they could tax where people live by size – rich people live in bigger buildings – but houses are often irregularly shaped, so measuring the size of the house is hard. There is, however, a nice, simple, convenient proxy for it – the number of windows. So this is where window taxes come from – you take the complex, messy reality of wealth, pick a simple proxy for it, then a simple proxy for that, and you end up taxing the number of windows. Of course what happens is that people brick up their windows to save on taxes. And then suffer health problems from lack of natural light and proper ventilation in their lives, which is less funny, but so it goes.

Another very classic example that also comes from taxation is the early history of the Cadastral, or land-use, map. We want to tax land-use, so we need to know who owns the land. So we create these detailed land-use maps which say who owns what, and we tax them accordingly. This seems very straightforward, right? But in a traditional village this is nonsense. Most land isn’t owned by any single person – there are complex systems of shared usage rights. You might have commons on which anyone can graze their animals, but where certain fruit trees are owned, but everyone has the rights to use fallen fruit. It’s not that there aren’t notions of ownership per se, but they’re very fine grained and contextual, and they shift according to a complex mix of circumstance and need. The state doesn’t care. These complex shared ownerships are illegible, so we force people to conform instead to the legible idea of single people or families owning each piece of land. This is where a lot of modern notions of ownership come from by the way – the state created them so they could collect more tax.

And of course we have the Soviet Union’s program of farm collectivization, which has the state pushing things in entirely the opposite direction. People were operating small family owned farms, which were adapted to their local conditions and what grew well where they were. A lot of it was subsistence farming, particularly in lean times – when you had excess, you sold it. When you didn’t, you lived off the land. This was hard to manage if you’re a state that wants to appropriate large quantities of food to feed your army and decide who is deserving and who gets what. So they forcibly moved everyone to work on large, collective farms which grew what the state wanted, typically in large fields of monocultures that ignored the local conditions. From a perspective of producing enough food, this worked terribly. The large, collectivized farms produced less food less reliably than the more distributed, adapted, local farms. The result was famine which killed millions. But from the point of view of making the food supply legible, and allowing the state to control it, the system worked great, and the Soviets weren’t exactly shy about killing millions of people, so the collectivization program was largely considered a success by them, though it did eventually slow and stop before they converted every farm.

But there’s another, more modern, example of all of these patterns. We have met the authoritarians and they are us. Tech may not look much like a state, even ignoring its strongly libertarian bent, but it has many of the same properties and problems, and every tech company is engaged in much the same goal as these states were: Making the world legible in order to increase profit.

Every company does this to some degree, but software is intrinsically a force for legibility. A piece of software has some representation of the part of the world that it interacts with, boiling it down to the small number of variables that it needs to deal with. We don’t necessarily make people conform to that vision, but we don’t have to – as we saw with the windows, people will shape themselves in response to the incentives we give them, as long as we are able to reward compliance with our vision or punish deviance from it.

When you hear tech companies talk about disruption, legibility is at the heart of what we’re doing. We talk about efficiency – taking these slow, inefficient, legacy industries and replacing them with our new, sleek, streamlined versions based on software. But that efficiency comes mostly from legibility – it’s not that we’ve got some magic wand that makes everything better, it’s that we’ve reduced the world to the small subset of it that we think of as the important bits, and discarded the old, illegible, reality as unimportant.

And that legibility we impose often maps very badly to the actual complexity of the world. You only have to look at the endless stream of “falsehoods programmers believe” articles to get a sense of how much of the world’s complexity we’re ignoring. It’s not just programmers of course – if anything the rest of the company is typically worse – but we’re still pretty bad. We believe falsehoods about names, but also gender, addresses, time, and many more.

This probably still feels like it’s not a huge problem. Companies are still not states. We’re not forcing things on anyone, right? If you don’t use our software, nobody is going to kick down your door and make you. Much of the role of the state is to hold a monopoly on the legitimate use of physical force, and we don’t have access to that. We like to pretend this makes some sort of moral difference. We’re just giving people things that they want, not forcing them to obey us.

Unfortunately, that is a fundamental misunderstanding of the nature of power. Mickey Mouse, despite his history of complicity in US racism, has never held a gun to anyone’s head and forced them to do his bidding, outside of a cartoon anyway. Nevertheless he is almost single-handedly responsible for reshaping US copyright law, and by extension copyright law across most of the world. When Mickey Mouse is in danger of going out of copyright, US copyright law mysteriously extends the length of time after the creator’s death that works stay in copyright. We now live in a period of eternal copyright, largely on the strength of the fact that kids like Mickey Mouse.

This is what’s called Soft Power. Conventional ideas of power are derived from coercion – you make someone do what you want – while soft power is power that you derive instead from appeal – People want to do what you want. There are a variety of routes to soft power, but there’s one that has been particularly effective for colonising forces, the early state, and software companies. It goes like this.

First you make them want what you have, then you make them need it.

The trick is basically to ease people in – you give them a hook that makes your stuff appealing, and then once they’re used to it they can’t do without it. Either because it makes their life so much better, or because in the new shape of the world doing without it would make their life so much worse. These aren’t the same thing. There are some common patterns for this, but there are three approaches that have seen a lot of success that I’d like to highlight.

The first is that you create an addiction. You sell them alcohol, or you sell them heroin. The first one’s free – just a sampler, a gift of friendship. But hey, that was pretty good. Why not have just a little bit more… Modern tech companies are very good at this. There’s a whole other talk you could give about addictive behaviours in modern software design. But, for example, I bet a lot of you find yourselves compulsively checking Twitter. You might not want to – you might even want to quit it entirely – but the habit is there. I’m certainly in this boat. That’s an addictive behaviour right there, and perhaps it wasn’t deliberately created, but it sure looks like it was.

The second strategy is that you can sell them guns. Arms dealing is great for creating dependency! You get to create an arms race by offering them to both sides, each side buys it for fear that the other one will, and now they have to keep buying from you because you’re the only one who can supply them bullets. Selling advertising and social media strategies to companies works a lot like this.

The third is you can sell them sugar. It’s cheap and delicious! And is probably quite bad for you and certainly takes over your diet, crowding out other more nutritious options. Look at companies who do predatory pricing, like Uber. It’s great – so much cheaper than existing taxis, and way more convenient than public transport, right? Pity they’re going to hike the prices way up when they’ve driven the competition into the ground and want to stop hemorrhaging money.

And we’re going to keep doing this, because this is the logic of the market. If people don’t want and need our product, they’re not going to use it, we’re not going to make money, and our company will fail and be replaced by one with no such qualms. The choice is not whether or not to exert soft power, it’s how and to what end.

I’m making this all sound very bleak, as if the things I’m talking about were uniformly bad. They’re not. Soft power is just influence, and it’s what happens every day as we interact with people. It’s an inevitable part of human life. Legibility is just an intrinsic part of how we come to understand and manipulate the world, and is at the core of most of the technological advancements of the last couple of centuries. Legibility is why we have only a small number of standardised weights and measures instead of a different notion of a pound or a foot for every village.

Without some sort of legible view of the world, nothing resembling modern civilization would be possible and, while modern civilization is not without its faults, on balance I’m much happier for it existing than not.

But civilizations fall as well as rise, and things that seemed like they were a great idea in the short term often end in forest death and famine. Sometimes it turns out that what we were disrupting was our life support system.

And on that cheerily apocalyptic note, I’d like to conclude with some free advice on how we can maybe try to do a bit better on that balancing act. It’s not going to single handedly save the world, but it might make the little corners of it that we’re responsible for better.

My first piece of free advice is this: Richard Stallman was right. Proprietary software is a harbinger of the end times, and an enemy of human flourishing. … don’t worry, I don’t actually expect you to follow this one. Astute observers will notice that I’m actually running Windows on the computer I’m using to show these slides, so I’m certainly not going to demand that you go out and install Linux, excuse me, GNU/Linux, and commit to a world of 100% free software all the time. But I don’t think this point of view is wrong either. As long as the software we use is not under our control, we are being forced to conform to someone else’s idea of the legible world. If we want to empower users, we can only do that with software they can control. Unfortunately I don’t really know how to get there from here, but a good start would be to be better about funding open source.

In contrast, my second piece of advice is one that I really do want you all to follow. Do user research, listen to what people say, and inform your design decisions based on it. If you’re going to be forming a simplified model of the world, at least base it on what’s important to the people who are going to be using your software.

And finally, here’s the middle ground advice that I’d really like you to think about. Stop relying on ads. As the saying goes, if your users aren’t paying for it, they’re not the customer, they’re the product. The product isn’t a tree, it’s planks. It’s not a person, it’s data. Ads and adtech are one of the most powerful forces for creating a legible society, because they are fundamentally reliant on turning a complex world of people and their interactions into simple lists of numbers, then optimising those numbers to make money. If we don’t want our own human shaped version of forest death, we need to figure out what important complexity we’re destroying, and we need to stop doing that.

And that is all I have to say to you today. I won’t be taking questions, but I will be around for the rest of the conference if you want to come talk to me about any of this. Thank you very much.


# Python Coverage could be fast

Ned Batchelder’s coverage.py is a foundation of the Python testing ecosystem. It is solid, well maintained, and does its job extremely well. I think literally every Python project that cares about testing should be using it.

But it’s not without its faults. Specifically, its performance can be quite bad. On some workloads it’s absolutely fine, but on others you can see anything up to an order of magnitude slow down (and this is just on CPython. On pypy it can be even worse).

Recently, after some rather questionable late night hacking (and a significant amount of fixing not late at night), I finally made good on my promise that Hypothesis would eventually use coverage information and shipped Hypothesis 3.29.0 which added a dependency on Coverage and turned it on for every Hypothesis based test.

This hasn’t been an entirely smooth process – some of that is for reasons that are my fault, and some of it is that users are now running into these performance problems.

The right place to fix this is obviously in Coverage rather than Hypothesis itself, so I’ve been looking into this recently. I’ve already made one patch which gives branch coverage a 50% speedup in a loop-based microbenchmark and about a 20% speedup on one real world example I tried it on.

The idea of the patch is very straightforward (though apparently I’m unusual in thinking that “Here I wrote you a hash table!!!” is a straightforward patch). Coverage creates a lot of Python objects in a very hot part of the code, so this caches them off an integer key so that most of the time it can avoid creating those objects and significantly speed things up as a result.
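To give a sense of the shape of the change, here is a rough sketch of the idea in Python. The real patch lives in Coverage’s C tracer, and the names below are illustrative rather than Coverage’s own.

# A rough sketch of the caching idea, not Coverage's actual code: reuse the
# objects built for a given pair of line numbers instead of allocating fresh
# ones on every trace event.
_arc_cache = {}

def cached_arc(from_line, to_line):
    key = from_line * 1000003 + to_line  # crude integer key for the pair
    try:
        return _arc_cache[key]
    except KeyError:
        arc = (from_line, to_line)
        _arc_cache[key] = arc
        return arc

The win comes from the hot path usually being a single hash table lookup rather than an allocation.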

Unfortunately that’s probably it for now. My priorities for the immediate future are PhD, paid work, and conference prep, which means that I certainly don’t have any time in the next month and probably not the next couple of months (this could be fixed by making this paid work. I may do a kickstarter or something for that, but in the meantime if any interested companies wanted to fund this work I’d be more than happy to discuss it…).

So I thought I’d write down my notes before I forget. These are both for future-me and for anyone interested who feels motivated to work on this problem in the meantime.

### Initial Benchmarking

I haven’t done super formal benchmarking, but I set up pytest-benchmark with some basic benchmarks that just ran a loop adding numbers.
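Roughly this style of thing, as a sketch rather than the exact suite I used (pytest-benchmark supplies the benchmark fixture and handles the statistics):

# A sketch of the kind of microbenchmark I mean, not the exact code I used.
def add_numbers(n):
    total = 0
    for i in range(n):
        total += i
    return total

def test_add_numbers(benchmark):
    # pytest-benchmark's fixture runs the function repeatedly and records stats.
    result = benchmark(add_numbers, 10000)
    assert result == sum(range(10000))

Running the same benchmark under no coverage, line coverage, and branch coverage gives the three numbers I wanted to compare.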

The benchmarking functionality itself was pretty great but I didn’t find it easy to compare benchmarks in the way that I wanted – in particular I had the same benchmark which was run in three different ways (no coverage, line coverage, branch coverage) and I wanted to break those down for side-by-side comparison, but I couldn’t find any functionality to do so (I admit I didn’t look very hard). It was nice having the statistics handled though and I will almost certainly want to sink some time into getting a decent pytest-benchmark suite for coverage if I work on this further.

### Real World Benchmarking

The real world benchmark I used was Alex Groce’s tstl, because his usage profile is similar to mine (there’s a lot of overlap between what Hypothesis and TSTL do), and he had an existing example that was seeing an order of magnitude slow down. This is the example that gets a 20% speedup from my above patch.

The problem can be reproduced as follows:

git clone https://github.com/agroce/tstl.git
cd tstl
virtualenv v
source v/bin/activate
pip install .
cd examples/AVL
tstl avlnodisp.tstl
tstl_rt --seed=0 --timeout=30
tstl_rt --seed=0 --timeout=30 --noCover

The thing to compare is the total number of test operations run in the outputs from each of the tstl_rt commands. For me I see “12192 TOTAL TEST OPERATIONS” with coverage, and “96938 TOTAL TEST OPERATIONS” without, so it runs about 8 times as many operations in the same time frame with coverage turned off (this is without my patch. With my patch I get 14665 under coverage, so about 20% more).

### Profiling Coverage

I confess I didn’t figure out how to profile coverage until after I made the above patch. I had a benchmark I was measuring, and just based on inspection I was more than 90% certain that the above would help, so I decided to just give it a go and validate my suspicion, and it turned out I was right.

But after I’d picked the most obvious low hanging fruit I figured it would be silly to try to proceed further without getting profiling set up, so I poked around. I spent a little time trying to get google-perf-tools working with Python and failing, but eventually figured out that I could do it with perf and it works great (modulo quite a lot of hacking and data munging).

The basic idea with perf is that you run your program under “perf record” and it gives you raw output data. You can then do analysis on this to find out about your program’s performance.

The first thing to do to use perf is that you need to make sure that everything is compiled with debug symbols. This includes both your extension and Python itself.

To get a Python with debug symbols I used pyenv’s python-build plugin:

export PYTHON_CFLAGS='-pg'
~/.pyenv/plugins/python-build/bin/python-build 2.7.13 ~/debug-python2

This builds a version of Python 2 (TSTL doesn’t work under Python 3) with the “-pg” flag to gcc which includes debug symbols. I also modified setup.py for coverage to include extra_compile_args=['-pg'] (it should be possible to do this with an environment variable, but I didn’t try).

Once I had that, running under perf was straightforward:

perf record tstl_rt --seed=0 --timeout=30

This creates a file called perf.data that you can analyze. I did not find perf report, the default way of analyzing it, super helpful, so I used CPU Flame Graphs.

I was only interested in the performance for calls below CTracer_trace, and I didn’t find the way it was spread out in the SVG (there were a lot of bits) very helpful, so I ended up aggregating the data through some um very sophisticated data analysis tools as follows:

perf script > out.perf && \
~/scratch/FlameGraph/stackcollapse-perf.pl out.perf | \
grep CTracer | sed 's/.\+;CTracer_trace/CTracer_trace/' | \
sort | \
python sum.py > out2.folded
~/scratch/FlameGraph/flamegraph.pl out2.folded > out.svg

sum.py is the following very basic code:

from __future__ import print_function

import sys

if __name__ == '__main__':
    prev = None
    for l in sys.stdin:
        u, v = l.split()
        v = int(v)
        if prev is None:
            prev = u
            count = v
        elif prev == u:
            count += v
        else:
            print(prev, count)
            prev = u
            count = v
    print(prev, count)



(the data munging earlier creates duplicated entries, so this merges them together).

WordPress won’t let me upload the generated SVG “For Security Reasons” (that do not apparently preclude running WordPress itself), so here’s a gist of it, and here’s one from before my patch was applied (right click and view image in a new tab to get a usable interactive version of it).

### pypy

PyPy performance for coverage is more complicated. My understanding is that there are roughly three classes of problems here:

1. coverage itself is not as well optimised as it could be (same problem as CPython)
2. Using settrace interferes with the JIT
3. The baseline speed of pypy operations is much faster so coverage is a much higher overhead by comparison.

The first is the only one I could deal with, and I think it probably will benefit significantly from whatever changes I make to the C extension also being ported over to the pure Python version (pypy doesn’t use the C extension tracer because it’s even slower than the pure python one on pypy), but I’m not sure that will be enough – I suspect there are also going to be more changes to pypy internals required for this, and I don’t know enough about those to say how difficult or easy they are.

### The Limits of What Is Possible

Python coverage is never going to run at the speed of Python without coverage, especially on pypy.

You can see the limits of how much faster it could be by running with an empty trace function (note: Whether you are using a C or Python level trace function makes a big difference. sys.settrace is ruinously slow even with an empty function).
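For the Python-level case, here’s a quick sketch of the sort of measurement I mean (the numbers will vary a lot by workload):

# A quick sketch of measuring an empty Python-level trace function's overhead.
# (Measuring the C-level case needs an extension that calls PyEval_SetTrace.)
import sys
import time

def empty_trace(frame, event, arg):
    return empty_trace  # keep tracing line events in nested frames

def workload():
    total = 0
    for i in range(10 ** 6):
        total += i
    return total

start = time.time()
workload()
baseline = time.time() - start

sys.settrace(empty_trace)
start = time.time()
workload()
traced = time.time() - start
sys.settrace(None)

print("baseline: %.3fs, empty trace: %.3fs" % (baseline, traced))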

The difference varies significantly depending on your code though – with the TSTL workload above I see a 50% slow down with an empty C level trace function. With more microbenchmark style workloads with few functions and lots of looping I see almost no speed loss.

So my suspicion is that for CPython at least we can get coverage to reliably be within a factor of two of running without it, and maybe reliably within a factor of 1.5.

### What next

For now my plan is to shepherd the patch from the beginning of this post and otherwise ignore this problem for the next couple of months.

Based on the profiling I did above most of the time is currently being spent in PyDict_SetItem, so I think the obvious next line of attack when I do start working on this is to replace the file_data objects in coverage which currently use Python dictionaries keyed off Python values with some sort of specialized data structure (possibly another hash table, possibly something better optimized for write heavy workloads). Longer term I think the goal should be to move all calls back into Python land out of the trace function and just normalize at the end of tracing.
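To illustrate the shape of that longer term idea, here’s a hedged sketch – this is not Coverage’s actual API or data layout, just the general pattern: the hot trace function only appends cheap raw records, and the conversion into per-file data happens once, after tracing stops.

# Sketch only: accumulate raw records cheaply in the hot path, normalise later.
raw_events = []

def trace(frame, event, arg):
    if event == 'line':
        # Only touch objects that already exist; no dict-of-dicts updates here.
        raw_events.append((frame.f_code, frame.f_lineno))
    return trace

def normalise(events):
    data = {}
    for code, lineno in events:
        data.setdefault(code.co_filename, set()).add(lineno)
    return data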

Even the more modest goal is a rather more invasive change than I wanted to start with, hence the above rather more conservative patch, but there’s nothing conceptually difficult to it – it just involves a bunch of slog and basic engineering work followed by some experimenting with clever data structure designs.

Once I’ve made it through the next month or two I’ll start seeing about getting some work on this funded. I’ve already suggested the notion to an existing customer who I know is having problems with coverage performance, but if they don’t want to fund it (which would be totally understandable) I’ll look further afield.

My fall back plan is a Kickstarter for this, but honestly I think some motivated company who is suffering from this should just think about picking up the tab.

I’ve been doing a bunch of funded work on Hypothesis for Stripe and Smarkets (here and here so far, with more to come) and it’s been going great – it’s worked out well for me, them, and Hypothesis users at large, and I don’t see why other projects shouldn’t benefit from the same (I’d encourage paying people who are actually maintainers of those projects by default, but many of them including Ned have full time jobs and I’m more flexibly available).

We’re not talking about a lot of money – a couple of weeks of development work (and, say, a 10-20% extra consultancy fee for Ned) should be enough to see some serious performance improvements, and as well as making your developers much happier, you’ll also earn some serious community good will (which is great when it comes to hiring new developers). That’s probably less than a month of your normal meeting schedule costs you.

This is a problem that affects a large majority of people who care about testing Python, which should include most commercial Python users, so if that’s you why not get in touch?


# The other half of binary search

Binary search is one of those classic algorithms that most people who know about algorithms at all will know how to do (and many will even be able to implement it correctly! Probably fewer than think they can though – it took me a long time to go from thinking I could implement binary search correctly to actually being able to do so).

Some of this is because the way people think about binary search is somewhat flawed. It’s often treated as being about searching sorted array data, when that’s really only one application of it. So let’s start with a review of the right way to think about binary search.

We have two integers $$a$$ and $$b$$ (probably non-negative, but it doesn’t matter), with $$a < b$$. We also have some function $$f$$ defined on integers $$a \leq i \leq b$$, with $$f(a) \neq f(b)$$. We want to find $$c$$ with $$a \leq c < b$$ such that $$f(c) \neq f(c + 1)$$.

i.e. we’re looking to find a single exact point at which a function changes value. For functions that are monotonic (that is, either non-increasing or non-decreasing), this point will be unique, but in general it may not be.

To recover the normal idea of binary search, suppose we have some array $$xs$$ of length $$n$$. We want to find the smallest insertion point for some value $$v$$.

To do this, we can use the function $$f(i)$$ that returns whether $$xs[i] < v$$. Either this function is constantly true (in which case every element is < v and v should be inserted at the end), constantly false (in which case v should be inserted at the beginning), or the index i with $$f(i) \neq f(i + 1)$$ is the point after which $$v$$ should be inserted.

This also helps clarify the logic for writing a binary search:

def binary_search(f, lo, hi):
    # Invariant: f(hi) != f(lo)
    while lo + 1 < hi:
        assert f(lo) != f(hi)
        mid = (lo + hi) // 2
        if f(mid) == f(lo):
            lo = mid
        else:
            hi = mid
    return lo


Every iteration we cut the interval in half – the loop condition guarantees the gap between lo and hi is at least two, so the midpoint is strictly between them and this must reduce the length. If $$f$$ gives the same value to the midpoint as to lo, it must be our new lower bound, if not it must be our new upper bound (note that generically in this case we might not have $$f(mid) = f(hi)$$, though in the typical case where $$f$$ only takes two values we will).

Anyway, all of this is beside the point of this post, it's just scene setting.

Because the point of this post is this: Is this actually optimal?

Generically, yes it is. If we consider the functions $$f_k(i) = i < k$$, each value we examine can only cut out half of these functions, so we must ask at least $$\log_2(b - a)$$ questions, so binary search is optimal. But that's the generic case. In a lot of typical cases we have something else going for us: Often we expect change to be quite frequent, or at least to be very close to the lower bound. For example, suppose we were binary searching for a small value in a sorted list. Chances are good it's going to be a lot closer to the left hand side than the right, but we're going to do a full $$\log_2(n)$$ calls every single time. We can solve this by starting the binary search with an exponential probe - we try small values, growing the gap by a factor of two each time, until we find one that gives a different value. This then gives us a (hopefully smaller) upper bound, and a lower bound somewhat closer to that.

def exponential_probe(f, lo, hi):
    gap = 1
    while lo + gap < hi:
        if f(lo + gap) == f(lo):
            lo += gap
            gap *= 2
        else:
            return lo, lo + gap
    return lo, hi


We can then put these together to give a better search algorithm, by using the exponential probe as the new upper bound for our binary search:

def find_change_point(f, lo, hi):
    assert f(lo) != f(hi)
    return binary_search(f, *exponential_probe(f, lo, hi))


When the value found is near or after the middle, this will end up being more expensive by a factor of about two - we have to do an extra $$\log_2(n)$$ calls to probe up to the midpoint - but when the heuristic wins it potentially wins big - it will often take the $$\log_2(n)$$ factor (which although not huge can easily be in the 10-20 range for reasonable sized gaps) and turn it into 1 or 2. Complexity wise, this will run in $$O(\log(k - lo))$$, where $$k$$ is the value returned, rather than the original $$O(\log(hi - lo))$$.

This idea isn't as intrinsically valuable as binary search, because it doesn't really improve the constant factors or the response to random data, but it's still very useful in a lot of real world applications. I first came across it in the context of timsort, which uses this to find a good merge point when merging two sublists in its merge step.
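As a concrete usage example, here's a sketch of the sorted-list case from the start of the post (the helper name and the edge-case handling are mine):

# A sketch of using find_change_point to get the smallest insertion point for
# v in a sorted list xs. The helper name and edge cases are mine.
def insertion_point(xs, v):
    if not xs or xs[0] >= v:
        return 0          # f would be constantly false
    if xs[-1] < v:
        return len(xs)    # f would be constantly true
    return find_change_point(lambda i: xs[i] < v, 0, len(xs) - 1) + 1

assert insertion_point([1, 2, 4, 8], 3) == 2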

Edit to note: It was pointed out to me on Twitter that I'm relying on python's bigints to avoid the overflow problem that binary search will have if you implement it on fixed sized integers. I did know this at one point, but I confess I had forgotten. The above code works fine in Python, but if int is fixed size you want the following slightly less clear versions:

def midpoint(lo, hi):
    if lo <= 0 and hi >= 0:
        return (lo + hi) // 2
    else:
        return lo + (hi - lo) // 2

def binary_search(f, lo, hi):
    # Invariant: f(hi) != f(lo)
    while lo + 1 < hi:
        assert f(lo) != f(hi)
        mid = midpoint(lo, hi)
        if f(mid) == f(lo):
            lo = mid
        else:
            hi = mid
    return lo

def exponential_probe(f, lo, hi):
    gap = 1
    midway = midpoint(lo, hi)
    while True:
        if f(lo + gap) == f(lo):
            lo += gap
            if lo >= midway:
                break
            else:
                gap *= 2
        else:
            hi = lo + gap
            break
    return lo, hi


These avoid calculating any intermediate integers which overflow in the midpoint calculation:

• If $$lo \leq 0$$ and $$hi \geq 0$$ then $$lo \leq hi + lo \leq hi$$, so is representable.
• If $$lo \geq 0$$ then $$0 \leq hi - lo \leq hi$$, so is representable.

The reason we need the two different cases is that e.g. if $$lo$$ were INT_MIN and $$hi$$ were INT_MAX, then $$hi - lo$$ would overflow but $$lo + hi$$ would be fine. Conversely if $$lo$$ were INT_MAX - 1 and $$hi$$ were INT_MAX, $$hi - lo$$ would be fine but $$hi + lo$$ would overflow.

The following should then be a branch free way of doing the same:

def midpoint(lo, hi):
    large_part = lo // 2 + hi // 2
    small_part = ((lo & 1) + (hi & 1)) // 2
    return large_part + small_part


We calculate (x + y) // 2 as x // 2 + y // 2, and then we fix up the rounding error this causes by calculating the midpoint of the low bits correctly. The intermediate parts don't overflow because we know the first sum fits in $$[lo, hi]$$, and the second fits in $$[0, 1]$$. The final sum also fits in $$[lo, hi]$$ so also doesn't overflow.

I haven't verified this part too carefully, but Hypothesis tells me it at least works for Python's big integers, and I think it should still work for normal C integers.
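For completeness, here's a sketch of the kind of Hypothesis test I mean (the bounds mimic 64-bit integers; the naive formula is safe to compare against here because Python's integers don't overflow):

# A sketch of the property test: the branch-free midpoint should agree with
# the naive formula everywhere in a 64-bit-ish range.
from hypothesis import given
import hypothesis.strategies as st

ints = st.integers(min_value=-2 ** 63, max_value=2 ** 63 - 1)

@given(ints, ints)
def test_midpoint_matches_naive(lo, hi):
    lo, hi = min(lo, hi), max(lo, hi)
    assert midpoint(lo, hi) == (lo + hi) // 2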


# Linear time (ish) test case reduction

A problem I’m quite interested in, primarily from my work on Hypothesis, is test case reduction: Taking an example that produces a bug, and producing a smaller version that triggers the same bug.

Typically a “test-case” here means a file that when fed to a program triggers a crash or some other wrong behaviour. In the abstract, I usually think of sequences as the prototypical target for test case reduction. You can view a file as a sequence in a number of usefully different ways – e.g. as bytes, as lines, etc. and they all are amenable to broadly similar algorithms.

The seminal work on test-case reduction is Zeller and Hildebrandt’s “Simplifying and Isolating Failure-Inducing Input”, in which they introduce delta debugging. Delta-debugging is essentially an optimisation to the greedy algorithm which removes one item at a time from the sequence and sees if that still triggers the bug. It repeatedly applies this greedy algorithm to increasingly fine partitions of the test case, until the partition is maximally fine.

Unfortunately the greedy algorithm as described in their paper, and as widely implemented, contains an accidentally quadratic bug. This problem is not present in the reference implementation, but it is present in many implementations of test case reduction found in the wild, including Berkeley delta and QuickCheck. Hypothesis gets this right, and has for so long that I forgot until quite recently that this wasn’t more widely known.

To see the problem, let’s look at a concrete implementation. In Python, the normal implementation of the greedy algorithm looks something like this:

def greedy(test_case, predicate):
    while True:
        for i in range(len(test_case)):
            attempt = list(test_case)
            del attempt[i]
            if predicate(attempt):
                test_case = attempt
                break
        else:
            break
    return test_case

We try deleting each index in turn. If that succeeds, we start again. If not, we’re done: No single item can be deleted from the list.

But consider what happens if our list is, say, the numbers from 1 through 10, and we succeed at deleting a number from the list if and only if it’s even.

When we run this algorithm we try the following sequence of deletes:

• delete 1 (fail)
• delete 2 (succeed)
• delete 1 (fail)
• delete 3 (fail)
• delete 4 (succeed)
• delete 1 (fail)

Every time we succeed in deleting an element, we start again from scratch. As a result, we have a classic accidentally quadratic bug.

This is easy to avoid though. Instead of starting again from scratch, we continue at our current index:

def greedy(test_case, predicate):
    deleted = True
    while deleted:
        deleted = False
        i = 0
        while i < len(test_case):
            attempt = list(test_case)
            del attempt[i]
            if predicate(attempt):
                test_case = attempt
                deleted = True
            else:
                i += 1
    return test_case

At each stage we either successfully reduce the size of the test case by 1 or we advance the loop counter by one, so this loop makes progress. It stops in the same case as before (when we make it all the way through with no deletions), so it still achieves the same minimality guarantee that we can’t delete any single element.
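If you want to watch this happen, here’s a quick sketch of the 1 through 10 example from above, phrased as a predicate on the remaining list, with a counter so you can compare the number of predicate calls made by the two versions:

# A quick sketch: count predicate calls for the evenness example above.
# "Still interesting" means all of the odd numbers are still present.
calls = [0]

def predicate(xs):
    calls[0] += 1
    return {1, 3, 5, 7, 9} <= set(xs)

print(greedy(list(range(1, 11)), predicate))  # [1, 3, 5, 7, 9]
print(calls[0], "predicate calls")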

The reason this is only linear time “ish” is that you might need to make up to n iterations of the loop (this is also true with the original algorithm) because deleting an element might unlock another element. Consider e.g. the predicate that our sequence must consist of the numbers 1, …, k for some k – we can only ever delete the last element, so we must make k passes through the list. Additionally, if we consider the opposite where it must be the numbers from k to n for some k, this algorithm is quadratic while the original one is linear!

I believe the quadratic case to be essential, and it’s certainly essential if you consider only algorithms that are allowed to delete at most one element at a time (just always make the last one they try in any given sequence succeed), but anecdotally most test cases found in the wild don’t have nearly this high a degree of dependency among them, and indexes that previously failed to delete tend to continue to fail to delete.

A model with a strong version of this as its core assumption shows up the complexity difference: Suppose you have $$k$$ indices out of $$n$$ which are essential, and every other index can be deleted. Then this algorithm will always run in $$n + k$$ steps ($$n$$ steps through the list the first time, $$k$$ steps at the end to verify), while the classic greedy algorithm will always run in $$O(k^2 + n)$$ steps.

Although this model is overly simplistic, anecdotally examples found in the wild tend to have something like this which acts as an “envelope” of the test case – there’s some large easily deletable set and a small essential core. Once you’re down to the core you have to do more work to find deletions, but getting down to the core is often the time consuming part of the process.

As a result I would be very surprised to find any instances in the wild where switching to this version of the algorithm was a net loss.


# A worked example of designing an unusual data structure

Due to reasons, I found myself in need of a data structure supporting a slightly unusual combination of operations. Developing it involved a fairly straightforward process of refinement, and a number of interesting tricks, so I thought it might be informative to people to walk through (a somewhat stylised version of) the design.

The data structure is a particular type of random sampler, starting from a shared array of values (possibly containing duplicates). Values are hashable and comparable for equality.

It needs to support the following operations:

1. Initialise from a random number generator and a shared immutable array of values so that it holds all those values.
2. Sample an element uniformly at random from the remaining values, or raise an error if there are none.
3. Unconditionally (i.e. without checking whether it’s present) remove all instances of a value from the list.

The actual data structure I want is a bit more complicated than that, but those are enough to demonstrate the basic principles.

What’s surprising is that you can do all of these operations in amortised O(1). This includes the initialisation from a list of n values!

The idea behind designing this is to start with the most natural data structure that doesn’t achieve these bounds and then try to refine it until it does. That data structure is a resizable array. You can sample uniformly by just picking an index into the array. You can delete by doing a scan and removing the elements that are equal to the value. This means you have to be able to mutate the array, so initialising it requires copying.

Which means it’s time for some code.

First let’s write a test suite for this data structure:

from collections import Counter

from hypothesis.stateful import RuleBasedStateMachine, rule, precondition
import hypothesis.strategies as st

from sampler import Sampler


class FakeRandom(object):
    def __init__(self, data):
        self.__data = data

    def randint(self, m, n):
        return self.__data.draw(st.integers(m, n), label="randint(%d, %d)" % (
            m, n
        ))


class SamplerRules(RuleBasedStateMachine):
    def __init__(self):
        super(SamplerRules, self).__init__()
        self.__initialized = False

    @precondition(lambda self: not self.__initialized)
    @rule(
        values=st.lists(st.integers()).map(tuple),
        data=st.data()
    )
    def initialize(self, values, data):
        self.__initial_values = values
        self.__sampler = Sampler(values, FakeRandom(data))
        self.__counts = Counter(values)
        self.__initialized = True

    @precondition(lambda self: self.__initialized)
    @rule()
    def sample(self):
        if sum(self.__counts.values()) != 0:
            v = self.__sampler.sample()
            assert self.__counts[v] != 0
        else:
            try:
                self.__sampler.sample()
                assert False, "Should have raised"
            except IndexError:
                pass

    @precondition(lambda self: self.__initialized)
    @rule(data=st.data())
    def remove(self, data):
        v = data.draw(st.sampled_from(self.__initial_values))
        self.__sampler.remove(v)
        self.__counts[v] = 0

TestSampler = SamplerRules.TestCase

This uses Hypothesis’s rule based stateful testing to completely describe the valid range of behaviour of the data structure. There are a number of interesting and possibly non-obvious details in there, but this is a data structures post rather than a Hypothesis post, so I’m just going to gloss over them and invite you to peruse the tests in more detail at your leisure if you’re interested.

Now let’s look at an implementation of this, using the approach described above:

class Sampler(object):
    def __init__(self, values, random):
        self.__values = list(values)
        self.__random = random

    def sample(self):
        if not self.__values:
            raise IndexError("Cannot sample from empty list")
        i = self.__random.randint(0, len(self.__values) - 1)
        return self.__values[i]

    def remove(self, value):
        self.__values = [v for v in self.__values if v != value]

The test suite passes, so we’ve successfully implemented the operations (or our bugs are too subtle for Hypothesis to find in a couple seconds). Hurrah.

But we’ve not really achieved our goals: Sampling is O(1), sure, but remove and initialisation are both O(n). How can we fix that?

The idea is to incrementally patch up this data structure by finding the things that make it O(n) and seeing if we can defer the cost for each element until we actually definitely need to incur that cost to get the correct result.

Let’s start by fixing removal.

The first key observation is that if we don’t care about the order of values in a list (which we don’t because we only access it through random sampling), we can remove the element present at an index in O(1) by popping the element that is at the end of the list and overwriting the index with that value (if it wasn’t the last index). This is the approach normally taken if you want to implement random sampling without replacement, but in our use case we’ve separated removal from sampling so it’s not quite so easy.
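In code, the swap-and-pop trick looks something like this (a small sketch; the helper name is mine):

# A small sketch of removing the element at index i in O(1) when order
# doesn't matter: pop the last element and use it to overwrite index i.
def remove_index(values, i):
    last = values.pop()
    if i != len(values):
        values[i] = last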

The problem is that we don’t know where (or even if) the value we want to delete is in our array, so we still have to do an O(n) scan to find it.

One solution to this problem (which is an entirely valid one) is to have a mapping of values to the indexes they are found in. This is a little tricky to get right with duplicates, but it’s an entirely workable solution. It makes it much harder to do our O(1) initialize later though, so we’ll not go down this route.

Instead the idea is to defer the deletion until we know of an index for it, which we can do during our sampling! We keep a count of how many times a value has been deleted and, if we end up sampling it and the count is non-zero, we remove it from the list and decrement the count by one.

This means that we potentially pay an additional O(deletions) cost each time we sample, but these costs are “queued up” from the previous delete calls, and once incurred do not repeat, so this doesn’t break our claim of O(1) amortised time – the costs we pay on sampling are just one-off deferred costs from earlier.

Here’s some code:

class Sampler(object):
    def __init__(self, values, random):
        self.__values = list(values)
        self.__random = random
        self.__deletions = set()

    def sample(self):
        while True:
            if not self.__values:
                raise IndexError("Cannot sample from empty list")
            i = self.__random.randint(0, len(self.__values) - 1)
            v = self.__values[i]
            if v in self.__deletions:
                replacement = self.__values.pop()
                if i != len(self.__values):
                    self.__values[i] = replacement
            else:
                return v

    def remove(self, value):
        self.__deletions.add(value)

So now we’re almost done. All we have to do is figure out some way to create a mutable version of our immutable list in O(1).

This sounds impossible but turns out to be surprisingly easy.

The idea is to create a mask in front of our immutable sequence, which tracks a length and a mapping of indices to values. Whenever we write to the mutable “copy” we write to the mask. Whenever we read from the copy, we first check that it’s in bounds and if it is we read from the mask. If the index is not present in the mask we read from the original sequence.

The result is essentially a sort of very fine grained copy on write – we never have to look at the whole sequence, only the bits that we are reading from, so we never have to do O(n) work.
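Pulled out of the sampler, the masking trick looks roughly like this (a sketch; the class name and exact interface are mine):

# A rough sketch of the masking trick on its own; names are mine, not from the
# final Sampler below, which inlines the same idea.
class MaskedList(object):
    def __init__(self, underlying):
        self.__underlying = underlying  # never copied or modified
        self.__mask = {}
        self.__length = len(underlying)

    def __len__(self):
        return self.__length

    def __getitem__(self, i):
        if not (0 <= i < self.__length):
            raise IndexError(i)
        try:
            return self.__mask[i]
        except KeyError:
            return self.__underlying[i]

    def __setitem__(self, i, value):
        if not (0 <= i < self.__length):
            raise IndexError(i)
        self.__mask[i] = value

    def pop(self):
        # Read the last element (mask first, then the underlying array),
        # then shrink the masked length by one.
        result = self[self.__length - 1]
        self.__mask.pop(self.__length - 1, None)
        self.__length -= 1
        return result

The final version below folds this same idea directly into the sampler.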

Here’s some code:

from collections import Counter


class Sampler(object):
    def __init__(self, values, random):
        self.__values = values

        self.__length = len(values)
        self.__mask = {}
        self.__random = random
        self.__deletions = set()

    def sample(self):
        while True:
            if not self.__length:
                raise IndexError("Cannot sample from empty list")
            i = self.__random.randint(0, self.__length - 1)
            try:
                v = self.__mask[i]
            except KeyError:
                v = self.__values[i]
            if v in self.__deletions:
                j = self.__length - 1
                try:
                    replacement = self.__mask.pop(j)
                except KeyError:
                    replacement = self.__values[j]
                self.__length = j
                if i != j:
                    self.__mask[i] = replacement
            else:
                return v

    def remove(self, value):
        self.__deletions.add(value)

And that’s it, we’re done!

There are more optimisations we could do here – e.g. the masking trick is relatively expensive, so it might make sense to switch back to a mutable array once we’ve masked off the entirety of the array, e.g. using a representation akin to the pypy dict implementation and throwing away the hash table part when the value array is of full length.

But that would only improve the constants (you can’t get better than O(1) asymptotically!), so I’d be loath to take on the increased complexity until I saw a real world workload where this was the bottleneck (which I’m expecting to at some point if this idea bears fruit, but I don’t yet know if it will). We’ve got the asymptotics I wanted, so let’s stop there while the implementation is fairly simple.

I’ve yet to actually use this in practice, but I’m still really pleased with the design of this thing.  Starting from a fairly naive initial implementation, we’ve used some fairly generic tricks to patch up what started out as O(n) costs and turn them O(1). As well as everything dropping out nicely, a lot of these techniques are probably reusable for other things (the masking trick in particular is highly generic).

Update 09/4/2017: An earlier version of this claimed that this solution allowed you to remove a single instance of a value from the list. I’ve updated it to a version that removes all values from a list, due to a comment below correctly pointing out that that approach biases the distribution. Fortunately for me in my original use case the values are all distinct anyway so the distinction doesn’t matter, but I’ve now updated the post and the code to remove all instances of the value from the list.

Do you like data structures? Of course you do! Who doesn’t like data structures? Would you like more data structures? Naturally. So why not sign up for my Patreon and tell me so, so you can get more exciting blog posts like this! You’ll get access to drafts of upcoming blog posts, a slightly increased blogging rate from me, and the warm fuzzy feeling of supporting someone whose writing you enjoy.
