
Maybe I’m doing it wrong

There’s a joke I sometimes tell at family gatherings.

I have this vision of my ancestors of Clan MacIver. We’re preparing for battle. A line of hairy bearded Scotsmen on the top of a ridge. We face the enemy, raise our axes, and as one we charge, shouting our fierce battle cry.

“NO. YOU’RE DOING IT WRONG!”

This joke is of course wildly inaccurate. Setting aside the ludicrous cultural stereotyping, I think the people in our family the you’re-doing-it-wrong tendency comes from most strongly are on the maternal line (my mother and my father’s mother). My father, and his father before him, are/were definitely perfectionists, but in a more self-directed way. Still, annoyance at others’ mistakes and their failure to understand The Right Way Of Doing Things is definitely a family thing.

And, as you’ve probably noticed, I exemplify it somewhat. I don’t think I always know the right way of doing things, but boy do I have opinions. And I will often rant angrily about them: here, on Twitter, on IRC, in the pub, if you happen to stop me on the street to ask for directions, etc.

And for some reason it’s taken me 30 years to start to wonder… is this useful?

As far as I can tell, about 80% of our industry, and I suspect that number is generously low, is suffering from at least one of Dunning-Kruger or Impostor Syndrome (I’m of course in the remaining 20%. Yes. For sure. After all, I wouldn’t self-describe as being either of those, so I must be in the 20%, right?).

So when I shout angrily about how people are shit at writing software, who is listening?

On the one hand we’ve got the overconfident types. They might be listening, but of course they’re already doing everything right, so while they may treat my anger as a hilarious spectator sport, it doesn’t really apply to them does it? Really I’m just giving them another tool to beat people with.

On the other hand, we’ve got the people who are already insecure about their own skills. They’re afraid they’re shit at software development, and they’ll take angry criticism as validating that this is the case. I know from past experience (learning to drive) that having someone shout at you about how bad you are at the thing you know you’re bad at is a good way to completely destroy what little confidence you have left and ruin any chances of getting good at it by making you give up and go do something else.

So when I shout at people, who is listening, and what does it do to them?

There’s certainly a place for anger, and it’s a good tool for getting people to listen long enough for the mind control to take effect. But when it’s a post about making yourself better at a useful skill, who would I rather derive the benefit from it? And even if I do want the larger community to better itself, shouldn’t I be more careful about the people caught in the crossfire?

This post is full of questions, and I don’t really know the answers. I dropped out of my mind-reading classes because the instructor shouted at me too much, so I don’t actually know how my shouting affects people. But it’s a possibility that I’m worried enough about that I’m going to keep an eye on it. There probably aren’t going to be any overnight changes, but there may be a gradual shift in tone.

This entry was posted in Uncategorized.

How to submit a decent bug report

This accidentally shoe-horned its way into another post I was writing. It didn’t quite fit there, so I thought I’d pull it out into its own thing. It is as a result fairly brief.

This is, in my opinion, the correct format for submitting a bug. Deviations are possible, but if you are unsure whether or not you should follow this format, you should probably follow this format. If you are unable to conform to it because you don’t have enough information, or don’t have the time to acquire all of it, you can submit a truncated form. Bear in mind, though, that the maintainer probably has less time than you do, because they’re having to deal with all the other truncated bug reports people have submitted, and is likely completely unable to obtain the information that you could obtain with a little extra effort.

Subject:

When I <DO STUFF>, <THING HAPPENS> instead of <THING THAT SHOULD HAPPEN> (variants permitted)

Body:

I did this thing. (Be reasonably detailed. A sequence of precise steps is helpful here. The shorter you can make it the better, but make it short by finding a short sequence of steps to reproduce, not by eliding details.)

This is what happened.

What I expected to happen was this.

(If appropriate) I have attached a file which recreates the problem.

(If available) Here is the exact error output from the program.

(If you have got everything else right first) Here is what I think might be happening.
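Put together, a full report following this template might look something like this (the project, function, and version details below are entirely invented for illustration):

```
Subject: When I sort an empty list with frobsort, I get an IndexError
instead of an empty list

I did this:
  1. Installed frobsort 1.2.3 on Python 3.11.
  2. Ran: python -c "import frobsort; print(frobsort.sort([]))"

This is what happened: the program crashed with
  IndexError: list index out of range
(full output attached as traceback.txt)

What I expected to happen: frobsort.sort([]) returns [], since the
documentation says sort() accepts any list.

I have attached a one-line script (repro.py) which recreates the problem.

(Speculation, deliberately last) Possibly the pivot selection reads the
first element without checking for empty input.
```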

Salient features

  • The bug report contains enough information to reproduce the bug. Granted, some bugs cannot be reliably reproduced. In that case, say “when I do this, this sometimes happens”. Most bugs do not fall into this category. I’d say even most bugs you think fall into this category don’t fall into this category.
  • The bug report explains why the experienced behaviour is not what you expected. This allows the maintainer to easily distinguish whether the bug is in the software or in your understanding of the software.
  • The main portion of the bug report is purely factual. It does not speculate as to causes. It says “this happened; I expected this to happen instead”. You are welcome to speculate as to causes after you have done this, but front-load the information the maintainer actually needs. They are better placed to figure out the cause than you are. If you start with speculation then at best you are making them work harder to find out what the problem in your bug report is, and at worst you are sending them down a false trail.

Why should you do this? Well, because it will make the maintainer’s life easier of course, and out of respect for your fellow human beings you should always strive to do that.

Not a good enough reason? OK, fine. Try this one:

People who develop software are doing many different things. They’re working on new features, responding to bugs, they probably have other things on the go. Unless you are literally their top priority (and even if you think you should be, you probably aren’t), or they have the patience of a saint, they will spend a finite amount of time working on your bug until it’s fixed or they give up. This means you want to control three things:

  1. What that amount of time is
  2. The percentage of that amount of time spent working on the bug rather than on your bug report
  3. The probability that this process will terminate in “the bug is fixed” rather than “they give up”

How do you do this?

You do the first one by inclining them kindly towards you. You can do this in a single bug report by making it as useful as possible. A history of good bug reports will do it even more effectively.

A good bug report will also maximize the amount of time spent on the bug rather than the report because it is clear, does not elide details, and does not make you hunt amongst a sea of speculation for the salient features.

A good bug report will also maximize the chances of success: The clearer and simpler you can make the problem, the easier it is for the maintainer to identify what’s actually going wrong and fix it.

Also, as a bonus, developers can usually distinguish between a bug report that is likely to waste their time and a bug report that is likely to be useful fairly quickly. If nothing else, bug reports likely to waste their time come from people who have previously tended to waste their time. When prioritising, which do you think they are likely to look at first?

So. This is how to submit a good bug report, and why to submit a good bug report. I bequeath this knowledge to you. Use it wisely.

This entry was posted in Uncategorized.

Notes on aspirational mental models

So I wrote a thing for the Lumi blog explaining how the new personalised topic timelines work.

I’ve really enjoyed implementing this chunk of work. Part of this is that I get to play with recommender systems and scoring algorithms and other fun stuff like that. Part of it is that I’ve been enjoying the process of developing it. It’s been a while since I’ve come into an unfamiliar code base of this complexity and been given the mandate to make sweeping changes to it. The last time would have been about 4-5 years ago at Trampoline Systems, when I had only about 2-3 years of dev experience under my belt and was much less sure what on earth I was doing. So as well as doing the work I’ve been watching myself do the work: observing how I’m applying the tools of thought I’ve built and sharpened in the meantime, and seeing if there’s anything interesting emerging that I want to pull out of the process and reify into something I can use again.

Well I spotted an interesting one. It’s something I’ve definitely found myself doing in the past, but I’d previously done it in a reversed order that caused me not to notice: When I design a system I usually start with an abstract model which I think is powerful enough to do what I want and flexible enough that I can modify it when I find that it doesn’t. So I start with a mental model of how the system is going to work and then add patches to it as I implement it and encounter how the real world is messy and complicates my nice abstract design.

Sometimes it turns out that the abstract design is flawed and I just have to replace it. Most of the time, though, I find it basically works except for details, and those details are ones you can ignore unless you’re actually looking right at them. So I basically store it as “here’s how to think about it; here are some details that deviate from that”. It’s essentially what I said previously about building mental models as variations on mental models I’m already familiar with, except that the mental model I’m patching against is for something that never really existed.

What I’ve caught myself doing with the timeline work is reversing this process. Thinking back, this is definitely something I’ve done for a while but I’ve not been quite so overt about it and it feels like it’s worth drawing attention to.

Essentially the idea is I’ve built a mental model of how I want the timelines to work rather than how they actually work. Oh, I’ve got a mental model of the grim reality too, but I keep it alongside the shiny pretty one of how it’s supposed to work. I haven’t just pulled this out of thin air, I’m pretty sure it’s quite close to the intentions of the original authors. Certainly I talked to them heavily in building it. Bits of it though are definitely wholly of my creation.

How do I use this mental model? Well, part of it is just as a vehicle for understanding – well-abstracted models are easier to reason about. But I’m not that bad at reasoning about the detailed, realistic model either.

I think the main use of this model though is that it’s a vehicle for change. When I’m trying to reason about how to improve things or make a new feature, I first design it against the aspirational mental model of how I want things to work. I then check it against the real model of how things actually work. Sometimes the result is that actually my idealised version is completely impractical and I stick it on the shelf of ideas that don’t currently work but I should remember to look at again later. Sometimes the result is that it will work fine with no real difficulties and I just implement it.

Often though the result is something more interesting: It will almost work, but there’s this detail where the real and the idealised versions of how things work don’t quite line up.

This can then be resolved in one of two ways. One is that this has exposed a flaw in the aspirational mental model and I have to update it (this doesn’t happen too often). The other is that I fix the code to look more like the aspirational mental model.

And this is really what I mean when I say that the model is aspirational. It’s not exactly a design document. It’s more of a tool to guide refactoring. It lets me simultaneously have the “system I would design if I could do a complete rewrite” in my head and not do that complete rewrite but still get about 60% of the benefits of doing so by fixing it whenever the fact that I haven’t gets in the way.

I’m still observing the effects of this and trying to decide whether it’s a genuinely good idea or whether I’m over-fitting, but so far it seems pretty good.

This entry was posted in Open sourcing my brain, Uncategorized.

Warning: This blog has secret mind control powers

I’d like to talk about writing ethics. I do some things which are obviously manipulative when writing this blog. I don’t know how transparent they are (if you’re trained in rhetoric they’re probably charmingly naive, though they might still work anyway), and to be honest I often forget that I’m even doing them – they’ve just become second nature – but they’re there in almost every post that is designed as a persuasive piece.

This post comes off the back of a Twitter conversation with Jay about my use of “we” in “I am angry too“, and how it was essentially a rhetorical trick to claim legitimacy and get people on board with my point by identifying with me. It’s a fair cop. We then had a good discussion about our use of rhetorical tricks like this and whether it was OK. Here is something she wrote about this issue a while back.

I think it’s an important question. I believe I know my answer to it, but it’s still worth thinking about the issue.

But first, a digression.

In order to understand the ethics of this, and in order to understand why I do things like this, we first need to go meta and ask another important question: What is the purpose of this blog?

I’ve long maintained that the sole intended audience of this blog is me and that the rest of you are just along for the ride. Once this might have been true, but it’s been a transparent lie for some time now. Although many of my posts fall into this category (let’s be honest, none of you care about my efforts at game design), a substantial proportion of my writing is obviously designed to be persuasive. I’m not usually trying to persuade myself here, so clearly there must be another intended audience and another intended purpose.

Thinking about it a bit more, the purpose of this blog becomes clear: The purpose of this blog is, of course, to invade your brain and take over your thoughts.

I don’t mean I’m trying to use it to turn you into mindless zombie drones obedient to my every whim or anything. That would be silly. I mean really, the very idea.

I have way better means of achieving that than mere writing.

No. What I want is not to make you obey every command I give you. What I want is for you to autonomously make the decisions I would have commanded you to make without my having to bother.

You think I’m joking. That’s OK. I get away with a lot because people think I’m joking. But, for once, allow me to persuade you that I am in fact deadly serious.

In order to do that we will have to step up a level again.

What is the purpose of persuasive writing?

Um. It’s to persuade, isn’t it?

Actually, no.

You see, if you were persuaded by a piece of writing, that means either you didn’t really care very much about the subject matter in the first place or you already 80% agreed with it. It is essentially impossible for someone else to change your mind about an issue you feel strongly about.

Convincing fence sitters and people who are already mostly on side is certainly still useful. It bolsters your numbers and gets you more aligned along a single purpose. But these are fringe benefits. They’re useful, but they’re not really the main point. What you want is to get people who disagree with you to change their minds.

Pity it’s impossible, huh?

Fortunately, it turns out that even though you can’t persuade people to change their minds, you can do the next best thing: you can make them persuade themselves to change their minds.

How does this work?

It’s simple really: you make them listen to your point. You make them remember your point. If you can, you even make them believe it for an instant – not so it sticks, but so that later, when they come to the scenario you described and have cause to check what they believe about it, they remember that they could have believed differently in this case. And then maybe they question whether they should.

You see, when you wrote that memorable piece, you may not have persuaded them, but you planted a seed in their brain. If it stuck then when your point is relevant to a situation they find themselves in, that seed will grow. It has found a chink in the foundation of their refusal to believe you and it can take root there, growing as each new experience feeds it. Eventually those foundations will crack and they will discover that maybe they agree with you after all.

Will that seed flourish? Probably not. But plant enough seeds and it hardly matters. That one may not have stuck, but another one might. The seed you planted in their friend might, and they might have another go. It’s not inevitable, but if you’re good at it, if you’re persistent at it, and if you’re simply lucky enough, eventually you will have a forest.

Why am I doing this? Why is it so important that I plant ideas in other people’s brains and persuade them around to my own point of view?

Simple, really. It’s because I want to take over the world.

Oh I don’t want power. Frankly, power sounds immensely tedious. What I want is much more subtle: I want the giant global brain to look like me.

You see, the world is broken. It’s broken in big ways, it’s broken in small ways. Some people don’t log stack traces with their exceptions, some people kill others for having a sexuality they disapprove of.

I am not very good at fixing the world. Some of this is just a lack of power – I’m just one person, not especially influential, I’m comfortable but not exactly rich, so the amount of actual ability to change the world I have is relatively limited. A lot of it is personal failings: I’m not great at taking action. I tend not to overcome the initial hump required to motivate myself to do so, I tend to paralyse myself with indecision over which cause to commit myself to. A variety of other issues you don’t care about.

But one thing I can do is influence other people to change the world for me. I mean sure, my audience isn’t exactly huge – I’d guesstimate that most of my serious posts get about 100 readers. Some of my more popular ones get more traffic than that, but I’ve seen the stats and most of you aren’t reading them. For example, my post about gendered anger has had about 1400 unique visitors so far, but of those three quarters have left the site in under a minute. Still, that’s 300+ readers. I don’t think hoping I’ve planted one or two seeds that will flourish is too over optimistic.

Is all this effort really worth it just to change a handful of minds? Sure. It took me maybe an hour to write that post. If all that results from that hour is that I’ve made a couple of people’s lives substantially better, job well done.

But that’s not all. You see, if I change those handful of minds, they have now become vectors for my zombie plague. They will talk to other people about this, maybe try to persuade them about it, maybe just link to my original post. I don’t care how they do it. The point is that by changing your mind you have not just changed your mind, you have implicitly committed to changing other people’s. There’s no obligation, of course – it’s not like you have to become evangelical about every idea you become convinced of – but that doesn’t matter. It will happen anyway. You will talk to people, you will discuss ideas, and sooner or later those ideas will spread like a plague and eventually the world will be in the thrall of my all-powerful grasp.

Err. I’m sorry. I got distracted there. What were we talking about?

Oh yes. The ethics of rhetoric.

Let me present you with a scenario. You are facing an opponent in a friendly duel. You each have a short stick. Your goal is to be the first to poke your opponent with the stick.

Now suppose I offer you a stick that is twice as long as your opponent’s. Do you accept?

Of course you don’t! You’re a decent human being. You believe in fairness. You don’t want to cheat do you?

Now suppose I tell you that if you lose the game I will kill all your family.

Oh you want the stick now? Cool. Here you go.

There’s this notion that using rhetorical tricks is somehow unfair, or cheating. This seems to be particularly common amongst the scientifically minded and the left.

Personally I think this is wrong and is a fundamental misunderstanding of how language works and what it’s for. But that doesn’t matter.

What matters is simply this: Fairness is that thing you do when you’re playing games. It’s what you do when what you’re doing is unimportant.

I’m not saying everything I write about is earth shatteringly important and whatever means I choose to employ are justified. This is patently and obviously false.

But what I am saying is that I think that what I’m writing about is important enough that I’ve already chosen to invade your brain and take over your thoughts. On top of that, what’s a few subversively chosen pronouns between us?

Enjoy your new trees.

This entry was posted in Uncategorized.

Valid arguments from my opponent believes something

(The title is a reference to this post by Scott Alexander, though the actual content of this post less so.)

I’ve been talking with Pozorvlak and Paul Crowley about this on Twitter. I wanted somewhere slightly longer than 140 characters to express my argument, so here you go.

My thesis is this: given that someone holds known, extremely false beliefs (homeopathy, young-earth creationism, global-warming denial, opposition to social healthcare, etc.), the fact that they believe something should in many cases be taken as weak evidence against it.

Certainly not strong evidence against it, and certainly not enough to override other more compelling evidence (if they tell me the sky is blue I will nevertheless continue to believe that the sky is blue despite their having provided me with weak evidence against it), but evidence nevertheless.
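The “weak evidence” framing can be made precise with a toy Bayesian sketch (the numbers and the `posterior` helper below are my own, purely illustrative): model the unreliable source as one whose assertion of a claim carries a likelihood ratio slightly below 1, then update on it.

```python
def posterior(prior, p_assert_given_true, p_assert_given_false):
    """Bayes update on the event 'the source asserts H'.

    The likelihood ratio p_assert_given_true / p_assert_given_false
    below 1 encodes 'their asserting H is weak evidence against H'.
    """
    num = p_assert_given_true * prior
    denom = num + p_assert_given_false * (1 - prior)
    return num / denom

# Illustrative numbers: a source slightly *more* likely to assert H
# when H is false. Starting from an even prior, belief drops a little:
print(posterior(0.5, 0.4, 0.5))    # ~0.444: weak evidence against H

# But strong independent evidence dominates ("the sky is blue"):
print(posterior(0.999, 0.4, 0.5))  # ~0.9987: belief barely moves
```

The second call is the “sky is blue” case above: the same weak negative update barely dents a belief backed by overwhelming independent evidence.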

Why?

Two reasons. First: they have demonstrated that their judgment is suspect. Therefore their beliefs should at most be taken as extremely weak positive evidence for something, depending on how much judgment forming that belief requires (I am more likely to believe them saying “it was quite warm yesterday” than “the current pattern of warming is part of a natural cycle”, for example).

Secondly: They have a process for producing false beliefs and eliminating true ones. Thus one false belief tends to seed many.

What is this process? Well, there are two actually, which both feed on each other.

The first is simple: people want to have consistent belief systems and want to be able to argue for their beliefs. Therefore if they believe one false thing, they will look for facts to justify it. Through a process of motivated reasoning and cherry-picking of data they will happily find all sorts of “facts” that support their opinions. So a belief held by such a person may well have been selected by this process in order to support more overtly false beliefs. Similarly, true beliefs are likely to be rejected where they disagree with false ones, so as well as being more likely to believe false things, such people are less likely to believe true ones.

Secondly: people with false beliefs tend to congregate. Sometimes around the specific area of their false belief, sometimes around related areas, sometimes just because they’re “anti-establishment” and want to show some solidarity. Unfortunately this means that our people with false beliefs, who as previously mentioned have poor judgment in discerning truth, are awash in a lovely warm bath of false memes. Some of these will catch, and they’ll get a whole new set of false beliefs to hold, which will then feed back into their process of building a self-consistent belief system and generate new ones. Wash, rinse, repeat.

This process tends to focus on areas related to their existing false beliefs, so you should definitely treat proximity as a signal for how heavily to weight another belief as evidence, but the social aspect means it’s not confined exclusively to that.

In general I would probably not apply this heuristic too aggressively – it’s easier to just not believe what they say one way or the other – but I find it sometimes useful to bear in mind.

It’s also important not to apply this heuristic to people you merely disagree with, or just think are a bit dim. It’s entirely possible for reasonable people to disagree and it is unproductive to react to this by disagreeing with everything your opponent is saying. This heuristic is reserved for people whose beliefs are so out there that you cannot see a reasonable way to believe them.

This entry was posted in Uncategorized.