On Efficiency

Epistemic status: I’m reasonably confident this is how the world works, but I doubt I’m saying anything very new.

Suppose you’re writing some software and you need it to run as fast as possible. What happens?

Well, often what happens is that the software becomes much more complex. Code that has been heavily optimised for performance is typically much less readable and comprehensible than the naive, simple version – a myriad of special cases and tricks are deployed in order to squeeze out every drop of performance.

That means it’s also more likely to be buggy. Complex software is harder to test and reason about, so there are many more places for bugs to hide.
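
To make that concrete, here’s a toy sketch in Python (the function names and numbers are mine, purely for illustration, not from any real codebase): two ways of counting the set bits in an integer. The second is the classic 32-bit bit-twiddling trick; it runs faster in a low-level language, and you can no longer tell at a glance that it’s correct.

    def popcount_simple(x: int) -> int:
        """Slow but obviously correct for any non-negative integer."""
        count = 0
        while x:
            count += x & 1
            x >>= 1
        return count

    def popcount_tricky(x: int) -> int:
        """Branch-free version: sum bit counts in parallel, two bits at
        a time, then four, then eight. Faster, but much harder to verify,
        and it silently assumes x fits in 32 bits – exactly the sort of
        place a bug hides."""
        x = x - ((x >> 1) & 0x55555555)
        x = (x & 0x33333333) + ((x >> 2) & 0x33333333)
        x = (x + (x >> 4)) & 0x0F0F0F0F
        return ((x * 0x01010101) & 0xFFFFFFFF) >> 24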

Many of these bugs are likely to be security bugs – you omit checks because they slow things down, or you rewrite the code in an unsafe language like C or C++ to squeeze out the maximum performance. In doing so, many of the things that normally guard the security of your software end up getting lost.

It also takes longer and costs more to develop. Optimisation is a hard problem that requires skill, effort and endless experimentation to get right, and it means throwing away many of the conveniences that allow you to develop software quickly.

None of this is to say that optimising software for performance is bad. Sometimes you need performance and there’s nothing else for it. The point is simply that in optimising for performance, other things suffer.

That’s not to say that all of these things have to suffer. You can optimise for some combination of performance and correctness, but you’ll probably still get slightly worse performance and slightly worse correctness than if you optimised for either on their own, and you’ll definitely get much higher cost.
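
Here’s a minimal sketch of that trade-off (a toy model with made-up cost functions, not real software): two costs with different optima, where the best compromise is strictly worse on each axis than optimising for that axis alone.

    def perf_cost(x):
        # Toy stand-in for "how slow is it?": best at x = 0.
        return x ** 2

    def correctness_cost(x):
        # Toy stand-in for "how buggy is it?": best at x = 1.
        return (x - 1.0) ** 2

    def argmin(cost, lo=-1.0, hi=2.0, steps=30001):
        # Brute-force grid search: fine for a one-dimensional toy problem.
        xs = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
        return min(xs, key=cost)

    best_perf = argmin(perf_cost)                    # ~0.0
    best_correct = argmin(correctness_cost)          # ~1.0
    compromise = argmin(lambda x: perf_cost(x) + correctness_cost(x))  # ~0.5

    print(perf_cost(best_perf), correctness_cost(best_correct))  # ~0, ~0
    print(perf_cost(compromise), correctness_cost(compromise))   # ~0.25, ~0.25

The compromise is a genuine optimum of the combined target; it just isn’t the optimum of either factor on its own.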

When optimising for anything, some rules apply:

  1. You cannot generically optimise. Optimisation is always towards a particular target.
  2. You cannot optimise for more than one thing.
  3. You can optimise for “one thing” that is a combination of multiple things.
  4. You can also optimise for one thing given some constraints on other things.
  5. Adding more factors to your optimisation makes the problem harder and causes all of the factors to do less well.
  6. Adding more constraints to your optimisation makes the problem harder and makes the optimisation target do less well.
  7. The harder the optimisation problem, the more the factors not included in it will tend to suffer.

Better optimisation processes can decrease the amount of “wasted” effort you’re expending, but even a perfect optimisation process can only move effort around. If you’re optimising for something by focusing effort on it, that effort has to come from somewhere, and that somewhere will tend to suffer. Sometimes work is genuinely wasted, but often it’s only wasted in the sense that you decided you didn’t care about what it was doing.

Less optimised processes are often more robust, because they have a lot of slack that gives room for informal structures and individual decision making. As you optimise, and make the process more legible in aid of that optimisation, that slack goes away. Many of the things you didn’t even know your system was doing go away with it, and the system fails disastrously when one of those things turns out to have been vital.

You can see this writ large in society too.

Consider the internet of things. People are optimising for profit, so they focus on things that affect that: shininess, cost, time to market, etc.

Security? Security doesn’t affect profit much, because people don’t really care about it until well after they’ve given you money. Which is why the internet of things has become a great source of botnets.

This is a problem with market-based economies in general. In economic terms, these neglected factors are externalities – costs that we declare to be somebody else’s problem so that we don’t have to pay for them.

Much is made of the efficient-market hypothesis – the idea that markets have strong predictive power by aggregating everyone’s information. This is then often used to conclude that markets are the optimal resource allocation mechanism.

Depending on where you are in the political space, people often then conclude either “And that’s why free-market economies rule and politics should stop interfering with the market” or “And that’s why the efficient market hypothesis is false”, or some more complex variant of one of the two.

I think some form of the efficient market hypothesis is probably true, and markets are probably a pretty good optimiser for the resource allocation problem. I doubt very much that they’re perfect, but they’re probably better than most of the alternatives we have.

But remember the rules? You can’t optimise without knowing what you’re optimising for, and you can’t optimise for more than one thing.

What markets optimise for in their default state is, more or less, some money-weighted average of people’s preferences.
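
To be concrete about what “money-weighted” means, here’s a tiny hypothetical sketch (the names and numbers are invented): two people with opposite preferences, one with ten times the budget of the other.

    # Each person's preferred split of spending across two products.
    preferences = {
        "alice": {"A": 0.9, "B": 0.1},
        "bob":   {"A": 0.1, "B": 0.9},
    }
    budgets = {"alice": 100.0, "bob": 10.0}  # unequal budgets

    total = sum(budgets.values())
    demand = {
        product: sum(budgets[p] * preferences[p][product] for p in budgets) / total
        for product in ("A", "B")
    }
    print(demand)  # {'A': ~0.83, 'B': ~0.17}

Both people’s preferences are being counted; one of them just weighs ten times as much.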

A lot of the critique from those of us on the left is really objecting to that “money-weighted” part, which is fair enough (or at least, fair enough when we’ve got an income distribution as unequal as we currently do), but I’d like to focus on the second part: the preferences themselves. Pretend we’ve solved the first with a universal basic income or something if you need to.

The big problem with optimising for people’s preferences is that people’s preferences are often bad.

I don’t mean this in some general objective moral sense. I mean that, if we got more information and sat down and thought about it properly, we would often come to a different conclusion than the one we reach naturally.

But we’re faced with hundreds of small decisions a day, and we can’t possibly take the time to sit down properly and think about all of them. So we mostly rely on heuristics we can apply quickly instead.

And often those are the preferences that the market ends up optimising for.

This IoT lightbulb costs £15 while this one costs £20? No obvious difference? Well, obviously I’ll buy the £15 one. Who doesn’t like saving money?

Turns out the extra cost of the £20 one was to pay for the manufacturer’s security budget, to make sure their workers got living wages and good safety conditions, and so on.

Unfortunately the market optimised away from all that, because those terms weren’t in the optimisation target and cost was. Maybe if people had been fully aware of those factors some of them would have switched to the more expensive lightbulb, but most people aren’t aware, and many of those who are would still decide they didn’t care that much.

And this is more or less inevitable. It is neither practical nor desirable to expect people to think through the long-term global implications of every decision they make (though it sure would be nice if people were at least a bit better at this), because being fully aware of the consequences of each decision turns currently inconsequential decisions into high-effort ones. Given the number of decisions we have to make, that adds up very quickly.

We can fix this to some degree by adding more terms to the optimisation problem – e.g. government spending on projects, liability laws for what happens when you have security problems, etc. We can similarly fix this to some degree by adding constraints – adding security regulations, workplace safety regulations, minimum wages, etc.

We can’t fix this by hoping the market will self-correct, because the market isn’t wrong. It’s working as intended.

So, by their nature, all attempts to fix this will make the market less efficient. People will pay more for the same things, because if you add new terms or constraints then your optimisation target suffers.
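
Returning to the toy model from earlier, a constraint (standing in for regulation) does exactly this – again, the functions and numbers are invented for illustration:

    def cost(x):
        # The market's target, e.g. price: lowest at x = 0.
        return x ** 2

    def harm(x):
        # An externality the market ignores, e.g. insecurity: lowest at x = 1.
        return (x - 1.0) ** 2

    xs = [i / 10000 for i in range(-10000, 20001)]  # grid from -1 to 2

    unregulated = min(xs, key=cost)                                # ~0.0, harm ~1.0
    regulated = min((x for x in xs if harm(x) <= 0.25), key=cost)  # ~0.5, harm ~0.25
    print(cost(unregulated), cost(regulated))  # ~0.0 vs ~0.25: the target suffers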

Free-market enthusiasts tend to use that as an argument against regulation, and to some extent it is one – the market is probably better at optimising than you are, and the effects of these changes can be hard to predict. Often the market will route around your rules, because doing something you didn’t think of is easier than doing what you want. So what you will often get is a system that is less efficient and still doesn’t do what you want.

But it’s also an argument for regulation. Yes it makes markets less efficient, but when the problem is that markets are optimising too heavily for the wrong thing, making them less efficient is the point. Regulating market behaviour may be hard, but fixing people’s preferences is for all intents and purposes impossible.

We need to treat the market with caution, and to be prepared to experiment before settling on a solution. I’m not at all sure today’s regulatory environments are anywhere near up to the task, but as long as we are ruthlessly optimising for a goal that ignores important factors, those factors will suffer, and in the long run we will all suffer the consequences.

We could also seek another way. If efficiency is the problem and not the solution, perhaps we need to step back from the optimiser altogether and find a less market-dominated approach. More and better democracy would perhaps be a good starting point, but if anything we seem to find it harder to fix democratic institutions than economic ones, so I’m not terribly optimistic.

I’d like to end this post with a simple proposal to fix the problem of resource allocation and magically solve economics and democracy, but unfortunately I can’t. These problems are hard – I often refer to the issue of how to coordinate decision making in large groups of people as the fundamental problem of civilisation, and I’m not optimistic about us ever solving it.

But I’d at least like us to start having better arguments about it, and I think the starting point is to better understand the nature of the problem and to admit that none of our existing solutions solve it perfectly.
