Author Archives: david

Programmer at Large: Why didn’t they see this coming?

This is the latest chapter in my web serial, Programmer at Large. The first chapter is here and you can read the whole archives here or on the Archive of Our Own mirror. This chapter is also mirrored at Archive of Our Own.

I spent another few ksecs triaging random interesting bugs. It wasn’t the best use of my time, but it was helping build up a picture of the state space around where the problem was occurring, and even if I didn’t find anything directly relevant it was still a useful clean-up task.

It wasn’t very surprising what a mess this all was, given how many different lineages we had systems and parts here for, and how long we’d spent shoring things up and adding fail-safes for the fail-safes for the fail-safes rather than risking changes to vital systems. Still, I hadn’t explored the plumbing system this broadly in a while, and it was definitely disheartening.

I was staring in dismay at some visual programming language. It didn’t render at all well on a HUD, so I had had to find one of the larger pods with a wall-screen to even start to make sense of it.

I was increasingly convinced it hadn’t been worth bothering. The program was about a gigabyte in size (I thought most of that was some sort of standard library, but I wasn’t entirely certain) and literally all it did was decide whether some valves should be open or closed based on the temperature differential on either side and how that was changing over time.

So, even though I was slightly dreading it, I was very relieved when I got the notification from Kimiko that they were able to talk now if I still wanted to.

The pod I was in was easily big enough for five people, so I invited them to come join me.

They looked… off when they came in. The HUD cues said “hesitant, nervous”, which was odd. I was about to ask them what was wrong, but they preempted me.

“So is this the conversation where you tell me you don’t want to be friends with a pervert?”

I started. That was not the opening I expected.

“Uh, no? I’m not expecting it to be anyway. I just wanted to ask some questions.”

They still seemed wary.

“OK… what sort of questions?”

It took me a moment to figure out how on the ground they’d even worked out the context for this conversation, but it eventually hit me – if I could do the social graph evolution analysis, so could they, and it would make sense to set up some alerts so it doesn’t blindside you…

“I mostly just want to know what’s going on with Brian attacking you! Why do you just let it happen? It’s obviously off charter! And what on the ground is up with this?!”

I manifested the sex graph into a shared space and dismissed the warning my HUD was giving me about tact. I knew I wasn’t being tactful, but I was frustrated and just wanted someone to tell me what was going on.

Anyway. The HUD said I’d confused rather than offended them.

“You… really don’t know what’s going on at all?”

“If I did I wouldn’t be asking! I don’t have this science-fiction ability to read minds that everybody else seems to!”

They sighed.

“I suppose this means you’ve gone and reported this?”

They waved their hand at the graph.

“No… I probably should have, but it seemed like something I shouldn’t touch without understanding, so I thought I’d ask you to explain first.”

They huffed a relieved noise.

“OK. Good. Thank you. It wouldn’t have done anything terrible, but it’s annoying for everyone involved to have to deal with.”

They paused for a couple of seconds.

“OK. So, explanations. You understand this is about sex, right?”

“Brian didn’t exactly let me miss that fact.”

“Right. And that isn’t a problem for you?”

I shrugged.

“I’m not completely OK with it, but it’s not a big deal. It’s like… you having bad taste in music or something. I don’t approve of your choices but I also mostly don’t actually care. Does that make sense?”

They barked out a laugh.

“That’s certainly one way to look at it I guess. I can work with that. So the first thing to understand here is that you’re weird.”


I mean it’s true, but that was still quite harsh.

They gestured an apology.

“Sorry, what I mean is that you’re unusual in both your attitude and the fact that you don’t know about this already. I’m not sure how you missed it, frankly.”

They called up a bunch more graphs and visualisations. The short version is that most people felt much more strongly about this than I did, and while I wasn’t the last person to know about it there probably weren’t more than single digits of other people who had also missed it.

I nodded slowly. I could probably guess how I’d missed it – there was almost certainly some context or clue I’d missed that would have prompted someone to tell me about it before now. Also, given my relative lack of socialisation, it was likely that Kimiko was the first person from the group I’d properly talked to. I checked my HUD and it confirmed – I’d apparently met two of the others in passing but no more than that.

“OK. So if I’d reported it, the social unity people would have just told me they knew already?”

“There are a bunch of procedures they have to go through, and they would have had to make a show of taking the report seriously, but basically yes. Even without reports the automated systems keep flagging our group as needing attention, but as long as we don’t cross any of the hard thresholds they’re not required to take action.”

“But… OK, they’re not required, but isn’t it still their job to do something? Why hasn’t anything been done about this? If everyone knows there’s a problem surely we have to fix it?”

They sighed.

“And what would you do to fix it?”


There were a couple of natural things to do, but the most obvious and the one that would almost certainly get implemented would be to simply kick them all off the ship at the next appropriate planet.

It wouldn’t be a death sentence for them – we’d leave them with plenty of money in the local economy and set them up with a perfectly good local infrastructure. They’d have each other. They’d still be crew… but they would be grounded, probably forever. I can hardly imagine anything worse. It was why I worked so hard to fit in myself.

I swallowed.

“OK. I get why you don’t want that, but what’s stopping them? It’s obvious Brian has it in for you, and I can’t imagine they’re the only one, so why are you still here?”

“Because we’re protected by the charter. The same section that guarantees anonymity of sexual acts also guarantees freedom from persecution on the basis of them.”

“It sure doesn’t look like you’re free from being persecuted…”

“And we could make that case. At which point we’re officially a minority interest group, and the people who want the charter changed have enough to make the case that our protection should be removed.”

“This seems really stupid.”

They shrugged.

“Welcome to life as an edge case.”

“No, I mean… why didn’t they see this coming? It seems… really obvious that this would happen. Why would they design the system like this?”

“Officially, politics. They had enough support to start a normatively-asexual ship when forking, but not enough support to remove the sexual protection clauses from the charter, so that’s what they went with.”

“OK. And unofficially?”

“Well… some of us think they just wanted to see what would happen.”

This entry was posted in Fiction, Programmer at Large.

On Efficiency

Epistemic status: I’m reasonably confident this is how the world works, but I doubt I’m saying anything very new.

Suppose you’re writing some software and you need it to run as fast as possible. What happens?

Well, often what happens is that the software becomes much more complex. Code that has been heavily optimised for performance is typically much less readable and comprehensible than the naive, simple version – a myriad of special cases and tricks are deployed in order to squeeze out every drop of performance.

That means it’s also more likely to be buggy. Complex software is harder to test and reason about, so there are many more places for bugs to hide.

Many of these bugs are likely to be security bugs – you omit checks that slow things down, or you rewrite the code in an unsafe language like C or C++ to squeeze out the maximum performance. In doing so, many of the things that normally guard the security of your software get lost.

It also takes longer and costs more to develop. Optimisation is a hard problem that requires skill and effort and endless experimentation to get right, as well as throwing away many of the conveniences that allow you to develop software quickly.

None of this is to say that optimising software for performance is bad. Sometimes you need performance and there’s nothing else for it. The point is simply that in optimising for performance, other things suffer.

That’s not to say that all of these things have to suffer. You can optimise for some combination of performance and correctness, but you’ll probably still get slightly worse performance and slightly worse correctness than if you had optimised for either one on its own, and you’ll definitely get much higher cost.

When optimising for anything, some rules apply:

  1. You cannot generically optimise. Optimisation is always towards a particular target.
  2. You cannot optimise for more than one thing.
  3. You can optimise for “one thing” that is a combination of multiple things.
  4. You can also optimise for one thing given some constraints on other things.
  5. Adding more factors to your optimisation makes the problem harder and causes all of the factors to do less well.
  6. Adding more constraints to your optimisation makes the problem harder and makes the optimisation target do less well.
  7. The harder the optimisation problem, the more the factors not included in it will tend to suffer.
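
As a toy illustration of rules 2 through 6 (this sketch is mine, not from the original post, and the objective functions and numbers are entirely invented): tune a single knob for one target, then for a weighted combination of two targets, then under a constraint, and watch the primary target suffer each time.

```python
# Toy illustration of the optimisation rules above. Both objectives are
# invented: "performance" peaks at x = 0.9, "correctness" at x = 0.2,
# and we tune a single knob x in [0, 1] by brute-force grid search.

def performance(x):
    return 1.0 - (x - 0.9) ** 2

def correctness(x):
    return 1.0 - (x - 0.2) ** 2

xs = [i / 1000 for i in range(1001)]  # the grid of candidate settings

# Optimise performance alone (rule 1: a single, explicit target).
best_perf = max(xs, key=performance)

# Optimise a 50/50 combination (rule 3). Both factors now end up short
# of their individual optima (rule 5).
best_combo = max(xs, key=lambda x: 0.5 * performance(x) + 0.5 * correctness(x))

# Optimise performance subject to a correctness floor (rules 4 and 6):
# the constraint drags the optimisation target down.
best_constrained = max((x for x in xs if correctness(x) >= 0.8), key=performance)

print(best_perf)         # 0.9   - the unconstrained optimum
print(best_combo)        # 0.55  - stuck halfway between the two peaks
print(best_constrained)  # 0.647 - pulled away from 0.9 by the constraint
```

The specific numbers don’t matter; the shape does. Every extra term or constraint moves the solution away from any single factor’s optimum, which is rules 5 and 6 in miniature.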

Better optimisation processes can decrease the amount of “wasted” effort you’re expending, but even a perfect optimisation process can only move effort around. If you’re optimising for something by focusing effort on it, that effort has to come from somewhere, and that somewhere will tend to suffer – sometimes work is genuinely wasted, but often it’s only wasted in the sense that you decided you didn’t care about what it was doing.

Less optimised processes are often more robust, because they have a lot of slack that gives room for informal structures and individual decision making. As you optimise, and make the process more legible in aid of that optimisation, that slack goes away; many of the things you didn’t even know your system was doing disappear with it, and the system fails disastrously because one of those things turns out to have been vital.

You can see this writ large in society too.

Consider the internet of things. People are optimising for profit, so they focus on things that affect that: shininess, cost, time to market, etc.

Security? Security doesn’t affect profit much, because people don’t really care about it until well after they’ve given you money. Which is why the internet of things has become a great source of botnets.

This is a problem in general with market-based economies. In economics terms, these extra factors are externalities – things that we declare to be somebody else’s problem that we don’t have to pay for.

Much is made of the efficient-market hypothesis – the idea that markets have strong predictive power by aggregating everyone’s information. This is then often used to conclude that markets are the optimal resource allocation mechanism.

Depending on where they sit in the political space, people often then conclude either “And that’s why free-market economies rule and politics should stop interfering with the market” or “And that’s why the efficient-market hypothesis is false”, or some more complex variant of one of the two.

I think some form of the efficient market hypothesis is probably true, and markets are probably a pretty good optimiser for the resource allocation problem. I doubt very much that they’re perfect, but they’re probably better than most of the alternatives we have.

But remember the rules? You can’t optimise without knowing what you’re optimising for, and you can’t optimise for more than one thing.

What markets optimise for in their default state is, more or less, some money-weighted average of people’s preferences.
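
To make “money weighted” concrete, here is a toy sketch of my own (every name and number is invented, and real markets aggregate preferences far less directly than this): two goods and three people, comparing an equal-weight aggregation of preferences with a money-weighted one.

```python
# Toy sketch of a money-weighted average of preferences. Everything here
# is invented: three people each split their spending between goods A and B.

preferences = {           # (fraction on good A, fraction on good B)
    "p1": (0.9, 0.1),
    "p2": (0.8, 0.2),
    "p3": (0.1, 0.9),
}
income = {"p1": 10, "p2": 10, "p3": 80}  # sharply unequal incomes

def allocation(weights):
    """Weighted average preference for good A, with the remainder for B."""
    total = sum(weights.values())
    a = sum(weights[p] * preferences[p][0] for p in preferences) / total
    return a, 1 - a

equal = allocation({p: 1 for p in preferences})  # one person, one vote
market = allocation(income)                      # money-weighted

print(equal)   # roughly (0.6, 0.4): two of the three people prefer A
print(market)  # (0.25, 0.75): the richest person's preference dominates
```

Two of the three people strongly prefer good A, but under money weighting the allocation follows the one rich person instead – which is exactly the part of the aggregation the next paragraph objects to.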

A lot of the critique from those of us on the left is really objecting to that “money weighted” part, which is fair enough (or at least, fair enough when we’ve got an income distribution as unequal as we currently do), but I’d like to focus on the second part. Pretend we’ve solved the first with a universal basic income or something if you need to.

The big problem with optimising for people’s preferences is that people’s preferences are often bad.

I don’t mean this in some general objective moral sense. I mean this in the sense that often if we got more information and sat down and thought about it properly we would come to a different conclusion than we would naturally.

But we’re faced with hundreds of small decisions a day, and we can’t possibly take the time to sit down properly and think about all of them. So we mostly rely on heuristics we can apply quickly instead.

And often those are the preferences that the market ends up optimising for.

This IoT lightbulb costs £15 while this one costs £20? No obvious difference? Well, obviously I’ll buy the £15 one. Who doesn’t like saving money?

Turns out the extra cost of the £20 one went to pay for their security budget, to make sure their workers got living wages and good safety conditions, and so on.

Unfortunately the market optimised away from that, because those terms weren’t in the optimisation target and cost was. Maybe if people had been fully aware of those factors some of them would have switched to the more expensive lightbulb, but most people aren’t aware, and many of those who are would still decide they didn’t care that much.

And this is more or less inevitable. It is neither practical nor desirable to expect people to think through the long-term global implications of every decision they make (though it sure would be nice if people were at least a bit better at doing this), because expecting them to be fully aware of the consequences of each decision takes currently inconsequential decisions and makes them high effort. Given the number of decisions we have to make, this adds up very quickly.

We can fix this to some degree by adding more terms to the optimisation problem – e.g. government spending on projects, liability laws for what happens when you have security problems, etc. We can similarly fix this to some degree by adding constraints – adding security regulations, workplace safety regulations, minimum wages, etc.

We can’t fix this by hoping the market will self-correct, because the market isn’t wrong. It’s working as intended.

So, by their nature, all attempts to fix this will make the market less efficient. People will pay more for the same things, because if you add new terms or constraints then your optimisation target suffers.

Free-market enthusiasts tend to use that as an argument against regulation, and to some extent it is – the market is probably better at optimising than you are, and the effects of these changes can be hard to predict. Often it will route around them because it’s easier to do something you didn’t think of than it is to do what you want. So what you will often get is a system that is less efficient and still doesn’t do what you want.

But it’s also an argument for regulation. Yes it makes markets less efficient, but when the problem is that markets are optimising too heavily for the wrong thing, making them less efficient is the point. Regulating market behaviour may be hard, but fixing people’s preferences is for all intents and purposes impossible.

We need to treat the market with caution, and to be prepared to experiment before settling on a solution. I’m not at all sure today’s regulatory environments are even close to up to the task, but as long as we are ruthlessly optimising for a goal that ignores important factors, those factors will suffer and then in the long run we will all suffer the consequences.

We could also seek another way. If efficiency is the problem and not the solution, perhaps we need to step back from the optimiser altogether and find a less market-dominated approach. More and better democracy would perhaps be a good starting point, but if anything we seem to find it harder to fix democratic institutions than economic ones, so I’m not terribly optimistic.

I’d like to end this post with a simple proposal to fix the problem of resource allocation and magically solve economics and democracy, but unfortunately I can’t. These problems are hard – I often refer to the issue of how to coordinate decision making in large groups of people as the fundamental problem of civilisation, and I’m not optimistic about us ever solving it.

But I’d at least like us to start having better arguments about it, and I think the starting point is to better understand the nature of the problem and to admit that none of our existing solutions solves it perfectly.

This entry was posted in rambling nonsense.

There are no hidden rules

Have you ever experienced the feeling that the people around you all know a bunch of secret rules? Moreover, they expect that you know them too and will punish you for not following them, but still won’t ever actually tell you what the rules are? Or sometimes they do tell you what the rules are but it later turns out that they were lying or they forgot to tell you a huge list of exceptions and just assumed that you understood those.

Yeah, me too.

(If you haven’t experienced this feeling, this post is probably not for you. You’re of course welcome to keep reading it anyway if you wish, but you might find it a bit obvious).

A friend was asking about how to figure out what these rules are the other day, specifically in the context of the workplace. I gave an answer, but the more I thought about my answer the more I realised that I hadn’t ever really internalised it or thought through the broader implications.

The answer is this: There are no hidden rules. This entire feeling is an illusion.

The feeling of confusion is real, and is caused by a real problem, but the problem can not be accurately described as there being “hidden rules”.

Why? First, we’ll need a brief digression on the nature of rules.

Rules mostly come in two forms: prescriptive and descriptive. A prescriptive rule is explicit and enforced. A descriptive rule just describes what is observed to happen.

Or to put it another way, when people are observed to violate a prescriptive rule, the people violating it are at fault. When people violate a descriptive rule, the rule is at fault.

When we’re worried about hidden rules, we’re worrying about prescriptive ones – we think that there is some secret rule book that we are bad for not following. It is more or less impossible that this could be the case.

In order for such a rule book to exist, people would have to be getting together behind our backs and conspiring to come up with and enforce these rules, and then not telling us about them. I am reasonably sure that this is not happening.

There is still a rough shared consensus, but it arises more organically than that. People come up with their own ideas of how things work by observing others and talking to them. Sometimes they have explicit conversations about how things are actually supposed to work, but it’s relatively rare, and over time a rough sort of knowledge builds up about how people are supposed to behave. But it is at best a rough consensus – different people start with different conceptions of it, and these mutate and change as they move from person to person.

The result is that everyone is operating with a different – sometimes subtly different, sometimes wildly different – set of assumptions about how people should behave, and the resulting mess does not resemble any sort of unified prescriptive set of rules that people follow. Instead of a single rule book, there’s a vague amorphous mass of roughly acceptable behaviours that contains a mix of vague consensus and massive individual divergence.

So if we want some rules that tell us how to behave, we’re left with descriptive rules – we need to somehow take all of the complex and varied space of human behaviour and boil down aspects of it to a simple set of formulae.

I don’t know if anyone’s mentioned this before, but it turns out that “solving literally the entire domain of social science” is a hard problem.

So there is no rule book. How do people even function?

Well, they mostly just get on with things. They do some complex mix of what they want, what they feel they should, and what they think they can get away with. They reward behaviours that they like, they punish behaviours that they don’t like.

The result is a system where the range of permitted behaviour is less a fixed rule set and more a function of the people you’re interacting with, how they interact with each other, their individual wants and preferences, and a massive amount of context.

Is something permitted? Well, it depends. Who doesn’t want you to do it, who does want you to do it, and what are the relevant balances of power between those people? Or are you individually powerful enough that you just don’t really have to care in this instance and can do it anyway?

And this mostly works. People generally want to get along, so the group roughly stabilises itself into a functioning pattern that is more or less compatible with everyone’s behaviour, and self-corrects when someone goes too far out of line.

There will probably end up being some broad areas of acceptable and unacceptable ranges of behaviour – interactions that have been done often enough that they’re largely understood and people know how to act in them – but if you ask people what they are, they’re still going to have to figure out “the rule” based on observation: It is still a task of description.

Note that this is true even when people think they already know the rule. They just did it earlier.

People are good at not noticing the special cases to the things they think are true, so if you ask someone for the rule and they tell you, you’ll often spot exceptions. If you ask about those exceptions you’ll usually hear “Oh but that was a special case/justified because of this reason”. Often that reason also has exceptions. It is very unlikely that the person you are asking consciously “knew” all these exceptions before you asked, they just took them in stride as they happened and didn’t notice that they contradicted what they thought was true.

And yet, it sure seems like they have some sort of special insight that we lack, doesn’t it?

Well, they do, but that insight isn’t a rule book; it’s more awareness of the social context and better intuition about how other people are going to behave, usually based on the idea that others will behave in a broadly similar manner to how they would. If you’re coming into an unfamiliar cultural context (e.g. due to transition or immigration, or even just joining a company with a very different business culture) there is probably also a lot of shared history that you lack, and assumptions that have built up over time from that history.

But social science is still hard, and better data and intuition aren’t enough to change that, so that doesn’t actually mean they’re going to be right, they just might be a bit more accurate than we are.

Or they might be less accurate. Often when people try to make the implicit explicit they end up whitewashing reality – they give the nice version of the rule that they want to be true, which ignores a lot of the messy special cases they feel bad about. e.g. people will tend to tell you that lying is bad and of course you should be honest. Except for those little white lies and special cases that are required for social niceties. And naturally you exaggerate your performance to look good because everyone else is. Oh and when people say this what they really mean is… etc.

Even if they believe the rule is that you should always tell the truth, the practice does not follow and the more you tease them out on it the more they admit that the “real” rule has endless special cases and caveats and exceptions and actually once you consider all of the things that “don’t count”, people are lying to each other all the time.

The point is that “hidden rules that everybody knows” is not a useful mental model of what people are doing, because they don’t have access to any hidden rules either. They don’t have any sort of general theory or specific knowledge of what’s going on, and as a result any attempt to get them to tell you what the rule is will be futile.

Does this all sound very bleak and unhelpful to you? It does to me, but I don’t actually think it is.

I think instead it’s bleak and helpful.

The most helpful thing about it is that you can just stop looking for the true hidden rules. It’s a waste of time. There aren’t any.

That isn’t to say rules can’t still be useful, just that it’s important to understand their nature before applying them.

You can think in terms of rules for you if that helps – things that make it easier for you to navigate the social situation if you follow them. As long as you understand that these are yours and under your control, and don’t expect other people to follow them, this can be very helpful.

You can try to come up with descriptive rules for how people will behave, but don’t sink too much time into making them accurate – it’s better to have an easy rule that you understand isn’t perfect than a slightly more accurate complex one that is much harder to work with and still isn’t perfect.

Often these will be quite different from the rules that people tell you if you ask. It can still be worth asking – the answers are useful data and might be worth a try – but you shouldn’t treat the answers as automatically correct.

These can be particularly useful when coming into a new context. When you’re in an unfamiliar situation, having these sorts of approximations to the shared cultural assumptions can be extremely useful for getting a basic handle on the expected behaviour until you manage to acquire a more intuitive sense of it. If you can find other people who have made a similar context switch, talking to them and asking what they think the rules are is often even more helpful.

This isn’t the scenario I usually find myself in, so my advice here is necessarily a bit vague. For me the problem, especially historically, is more often that I feel this way in scenarios that I have plenty of cultural context for but remain more than a bit mysterious anyway. I think this is a pretty typical experience for people with autism or who otherwise have minds and personalities sufficiently different from the norm that predicting that other people will behave more or less like you doesn’t really work.

And rules will only get you so far here. The thing to look at when you really want to understand how to navigate a group isn’t rules, it’s people. What they want, how they interact, and how those desires reinforce and conflict with each other.

This is unfortunate, in that people are much harder to understand than rules, but it does have the advantage that it might actually work.

This entry was posted in life.

New Fiction

I ended up writing a new story about Vicky Frankenstein. It’s called Pillow Talk, and is almost entirely Vicky and Ada talking about their relationship.

I’m not sure I was ever expecting to write a story about lesbian romance (well, Ada is bisexual. I don’t know if that’s canonically true, but having her be a lesbian would be somewhat surprising given her historical record. Also she predates modern labels for sexuality and thus might choose to self-describe differently). I lack a number of qualifying areas of knowledge for doing it well, but it seems to have turned out OK anyway.

Vicky continues to be extremely fun to write, and I’ve already ended up starting on a third Vicky story, the topic of which is largely inspired by this tweet (the original Vicky story was partly caused by a joke tweet too. It seems to be a theme).

This may end up eating into Programmer at Large time, though nominally I’m scheduled to write another one for this week some time. I am absolutely intending to finish Programmer at Large, but I seem to have made the whole thing bleaker than I originally intended, which combines poorly with the fact that I’ve been quite busy for the last month.

This entry was posted in Fiction.

Shutting down my Patreon

Earlier this year I started a Patreon as an experiment. The logic was that I like writing and I like getting paid, and I wanted to see if I could get these two great flavours to combine.

Long story short, I didn’t.

I set a goal for $500/month by the end of the year or I’d shut it down. It’s now obvious that I’m not going to get anywhere close to that, and I’m a big believer in stopping when you know failure is inevitable rather than actually waiting to fail.

But also, I’m finding that I’m not doing a good job of managing it (apologies to all my patrons – this is entirely my fault), and that it makes me feel worse about the process of writing. A lot of this is because I find it weirdly stressful, which is nobody’s fault but mine, but also not something I can do much about.

So the result is a system that stresses me out and will probably earn me less than one day’s worth of work over the course of the year. This is a bad deal any way you look at it.

So I’m going to stop. No hard feelings on my part, it’s just an experiment that didn’t work out. If anything, I’m extremely grateful to all of the people who supported me along the way. I apologise to any of you who are disappointed by this.

I’ll keep blogging here, obviously, but I’m going to put the rate back down to probably more in the region of 1/week (at least, that’s what I’m going to set the Beeminder goal back to).

I will leave the Patreon page up for one week to give people time to see this on the feed as well and grab anything they want from the private archives, but will then close it before anyone is billed for this month.

This entry was posted in Admin.