Category Archives: life

A two person agreement protocol

The problem is this: you have two people and many options. Each of them has preferences amongst those options. How do you decide which is the best one?

I mean, sure, you could talk to each other. But that’s way too straightforward. Also it introduces some weird biases (e.g. the person who is better at persuasion is more likely to get their way). Wouldn’t it be nice if there were a simple system which let you come to a mutually acceptable agreement?

One option for this is yootling, but it would be nice if we could do this with democracy instead of economics. But how do you democracy with only two people?

I should say, I didn’t come up with this idea. It was a question posed to me by James Aylett while we were on /dev/fort, and he also suggested the solution. The idea stuck with me, I decided to think through the implications, and I realised that this was actually a very nice way of doing things.

The solution is to apply a standard voting system called Majority Judgement, which turns out to reduce to a pleasantly simple form in the two person case. It works as follows:

  1. For each of the options, each person writes down a grade. In classic majority judgement there are six grades (Reject, Poor, Acceptable, Good, Very Good, Excellent). Personally I like to drop the “Very Good” grade, leaving five. It doesn’t matter excessively though. You can even cut it down to three if you like (Bad, OK, Good).
  2. Score each option with the smallest grade either of you assigned. So if I rated something good and you rated it bad, it’s bad. Drop everything that has a strictly worse score than some other option (so if we have two options which got scored as good, drop everything that didn’t score good).
  3. If we only have one option left, that one wins. We’ve picked the most mutually acceptable option.
  4. If we have more than one option left, repeat the same thing amongst the remaining options, only this time use the highest grade either of you assigned.
  5. If we still have more than one option, there’s nothing to choose amongst them. Either fall back to that talking thing or pick at random.

I think this sounds more complicated than it really is. Let’s do a worked example. We want to go for a drink. Our options are:

  • Land o’ beer
  • Gin gin gin
  • Ye Olde Hipster
  • Whisky for you

We each score these:

Watering hole    Me          You
Land o’ Beer     Reject      Excellent
Gin Gin Gin      Excellent   Bad
Ye Olde Hipster  Acceptable  Good
Whisky For You   Excellent   Acceptable

This gives us the following scores:

Watering hole    Round 1     Round 2
Land o’ Beer     Reject      Excellent
Gin Gin Gin      Bad         Excellent
Ye Olde Hipster  Acceptable  Good
Whisky For You   Acceptable  Excellent

So in the first round Land o’ Beer drops out (because I hate it) and Gin Gin Gin drops out (because you hate it). This leaves “Ye Olde Hipster” and “Whisky For You”, because we both find them at least acceptable. In the second round, we are choosing only between these two options. I rated Whisky For You Excellent, whereas you consider Ye Olde Hipster merely Good, so we go for Whisky For You. Yay, Whisky.
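For the concretely minded, the whole two-person procedure fits in a few lines of Python. This is just a sketch: the grade names come from the worked example, and the function and variable names are my own invention, not anything official.

```python
# Grades in increasing order of niceness, as in the worked example.
GRADES = ["Reject", "Bad", "Acceptable", "Good", "Excellent"]
RANK = {grade: i for i, grade in enumerate(GRADES)}

def decide(ratings):
    """ratings maps each option to a (my grade, your grade) pair.
    Returns the list of winners; more than one means a genuine tie."""
    # Round 1: score each option by the *lower* of the two grades,
    # then keep only the options with the best such score.
    low = {opt: min(RANK[mine], RANK[yours])
           for opt, (mine, yours) in ratings.items()}
    survivors = [opt for opt, score in low.items()
                 if score == max(low.values())]
    if len(survivors) == 1:
        return survivors
    # Round 2: break the tie using the *higher* of the two grades.
    high = {opt: max(RANK[g] for g in ratings[opt]) for opt in survivors}
    return [opt for opt, score in high.items()
            if score == max(high.values())]

bars = {
    "Land o' Beer": ("Reject", "Excellent"),
    "Gin Gin Gin": ("Excellent", "Bad"),
    "Ye Olde Hipster": ("Acceptable", "Good"),
    "Whisky For You": ("Excellent", "Acceptable"),
}
print(decide(bars))  # ['Whisky For You']
```

Running it on the worked example reproduces the result above: the first round eliminates Land o’ Beer and Gin Gin Gin, and the second picks Whisky For You.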

I’ve not actually tried this in practice, but I suspect it might work quite well. And it generalises naturally to more people, given that it’s actually a voting system designed for more people. (One caveat: when generalising to more than two people you don’t take the smallest score, you take the middlemost, rounding towards the lower grade. So e.g. if there were three of us and we rated something as Bad, Good and Excellent respectively, the rounds would go Good, Bad, Excellent.)
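The many-person caveat can be sketched in code too. Assuming grades are encoded as numbers where bigger is better, here is a hypothetical helper (my own naming; not a full majority judgement implementation) that produces the sequence of grades the rounds consider, by repeatedly removing the middlemost grade and rounding towards the lower one:

```python
def mj_key(ranks):
    """ranks: one grade per person for a single option, as numbers
    where bigger is better. Returns the grades in the order the
    rounds consider them; compare options by this list
    lexicographically, and the largest wins."""
    remaining = sorted(ranks)
    key = []
    while remaining:
        # Middlemost index, rounding towards the lower grade
        # when an even number of grades remain.
        key.append(remaining.pop((len(remaining) - 1) // 2))
    return key

# Three people rate an option Bad=1, Good=3, Excellent=4:
# the rounds go Good, Bad, Excellent, as described above.
print(mj_key([1, 3, 4]))  # [3, 1, 4]
```

With two people this reduces exactly to the earlier procedure: the first grade popped is the smaller of the two, and the second is the larger.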

This entry was posted in life, voting.

A personal history of interest in programming

Apparently some guy steeped in silicon valley startup culture has made some ignorant comment about how everyone who is good at programming started from a young age. This is my surprised face.

If you don’t know who I’m talking about, don’t worry about it. He’s not important. This is a pretty common narrative though, and I thought it might be worth holding up my personal journey to being interested in programming as a piece of anecdata against it.

I’m basically the kind of obsessive programmer these criteria are supposed to select for. I’ve spent my Christmas holidays working on persistent functional data structures in C, and I spend a huge amount of time and thought on the practice and theory. I’ve also got all the privilege such criteria implicitly select for – I’m white, cis male, and painfully middle class.

You don’t need any of these things to be a good programmer. I mean, you sure as hell don’t need the pile o’ privilege I’ve got, but you also don’t need the obsession. It can be helpful, but it can also be harmful – you put a group of obsessive people together and often what you end up with is hilarious tunnel vision which loses sight of the real problem. I mention these things not because I think they’re important, but to demonstrate that even if you buy into the dominant narrative I am a counter-example. I look and act like a “real” programmer.

Want to know when I got started programming? Well, that’s hard to say exactly. I did little bits and pieces of it as a kid (more on that later), but I wasn’t really very interested in it.

Want to know when I got interested in programming? That’s much easier. 2007. I was 23, about 10 years older than I’m “supposed” to have been interested in programming in order to be good at it.

How did this happen?

Well, I’ve always liked computers. My family have had computers since I was quite young (see “privilege” for details) – I remember having a Windows 3.1 computer before I moved to the UK, so looking at the release date I must have had one almost as soon as it came out. Before that we had a DOS computer.

I really enjoyed these, but not really for programming. I played games and used the word processor. We had some games written in qbasic and I vaguely poked at the code to see if I could make changes but I didn’t really know what I was doing and it was pretty much just editing at random and seeing what happened. I got bored quickly and didn’t pursue it further.

Years later at school we did some Logo. It sure was fun drawing pretty pictures on the screen, and the little turtle was adorable.

Later yet we got the internet. This was pretty amazing! You could talk to people! Who were far away! And write stuff and people would read it! Also there were multi-player games, those were pretty cool. I spent a lot of time on various MUDs. I thought about learning to program so I could make a MUD or to build add-ons to one. I think I briefly even started contributing to a friend’s nascent MUD, but it all seemed like hard work and again I got bored pretty quickly.

Eventually it came time to choose my A-levels. I thought about doing a computing A-level, but my chemistry teacher persuaded me that I could learn to computer at any time (he was right) and that I should do chemistry instead (oh ye gods was he wrong). So I managed to go another couple years without ever learning to program.

At some point during this time I created a website. That isn’t to say I learned HTML and CSS mind you. I used some sort of WYSIWYG website creation tool from Mozilla and uploaded it to my btinternet account. I can’t really remember what I put there – I know I published some really terrible transhumanist fiction on it, but there were a bunch of other things too. I do remember the look though: White text on some sort of horrendous patterned background. Classy web design it was not.

At some point our btinternet account got cancelled and I lost the website (no backups of course). When I looked a few years ago bits of it were still available through the internet archive, but I honestly don’t remember the URL at this point.

I got to university and somehow found myself in a maths degree (which is another story). I now had my own computer which my parents had bought me (*cough* privilege *cough*)!

It had windows ME on it. :-(

After much frustration with ME, my friend Tariq Khokhar persuaded me to give this Linux thing a try in preference to, erm, “acquiring” a copy of windows 2000 and using that instead.

This worked out pretty well. I’m not going to say it was a flawless experience, but as a cocky teenager I was totally up for doing low-effort things that made me feel much cleverer than I actually was, and running this frustrating OS on the desktop certainly achieved that. This was back in 2001. 12 years later, I’m still being frustrated by this decision but find all the other options equally frustrating.

Then, finally, I was forced to actually learn to program.

Cambridge has (had?) as part of its maths degree a thing called “CATAM”. Basically they gave you your choice of three problems and you had to go learn to computer and then do them. I don’t actually remember what my CATAM projects were – I know the first one was something to do with matrix maths, but I’m not really sure what.

You could do them in whatever language you liked – they vaguely encouraged C (sadists), but they didn’t really mind if you did it in something else – I know at least one person did it in Excel.

I briefly looked into programming in C, but Tariq persuaded me that I would much rather do it in ML, which is what the computer scientists were learning for their intro to CS course. I went along to a few of their lectures, skimmed some of the notes, and bought a copy of ML for the Working Programmer and learned enough from it to solve the CATAM problems.

To be clear: at this point programming to me meant that I wrote up some code in a text editor (gedit I think; I don’t remember when I started using vim) and then cut and pasted it into the REPL (I think I was using Moscow ML because it was the one the course recommended), at which point I would copy the answer out of the REPL and put it in my coursework.

That CATAM project finished, I did fine, and I immediately stopped programming again. I mean why would I keep it up? As far as I was concerned this was just a thing I learned to do to solve some maths problems. None of the maths problems I had to solve needed programming, so I wasn’t going to be programming.

I picked it up again for my second CATAM project the next year, but I didn’t really learn anything new about programming, I just did the same thing I did last time: Write in a text editor, copy and paste into the REPL. I might have switched to vim by then, but probably not.

At some point during this period I made a second website. My friend Michael, who I knew from IRC, persuaded me to do it “properly” this time. I learned what HTML was, copied some CSS from someone’s free designs elsewhere on the internet, and wrote about 5 lines of PHP (which I probably copied from somewhere) to build a hilarious navigation scheme driven by GET variables, so I didn’t have to write the same header and footer everywhere and could use includes. I was supposed to use mod_rewrite or something to get proper URLs but I never really bothered. Given my experiences with mod_rewrite since, it’s possible I tried to make it work and couldn’t. I uploaded this to my account on the student computing society computers. At this point I’d learned enough about the command line from my forced immersion in it that I was able to figure out SSH and stuff, but I don’t think I did much more with it than moving some files around on the server.

Then I got to the end of my degree and suddenly didn’t know what to do with my life.

Everyone tells you that a maths degree is super desirable and is great for getting jobs. This is a lie. If you have other useful skills then people get very excited about the fact that you’re also a mathematician. If you don’t, no-one really cares.

Except banks. Finance was very keen to hire mathematicians. I, on the other hand, was very keen not to work in finance (because dad was a banker and it didn’t look like much fun. My political objections came later).

I shopped around Cambridge for a job for about 6 months (moving back in with my parents about halfway through that) but found that for some reason Cambridge had quite a surplus of overqualified people with no useful skills and I was one of them. Finding a job didn’t go so well.

At some point a friend mentioned that his company were hiring for developers and didn’t care if you actually knew what you were doing because they were pretty sure they could teach you. They were London based, and I wasn’t very keen on London, and I wasn’t really sure I wanted to be a developer, but I’d been job hunting for 6 months and it’d got pretty depressing so I figured it couldn’t hurt to talk to them. They were going to be at a careers fair in the computer science department, so I went along to that with a copy of my hilariously empty CV to talk to them.

Nothing came out of talking to that company, but while I was there I figured I might as well talk to a bunch of the other people there. I ended up talking to two other companies, both London based and hiring for developers but happy to teach people, so I interviewed with them.

Apparently I interviewed pretty well – the ML I’d learned and my maths degree were enough that although I didn’t really know what a binary search was I was able to figure it out with a little prompting, and I somehow managed to muddle my way through the other problems. One of them mostly tested problem solving, which I was good at, and the other had some toy programming problems, which I managed to do OK on.

One of them offered me a job off the back of this. The other one was a little more hesitant about hiring someone who basically couldn’t program, so asked me to write them a program to prove that I could. I went with the company that didn’t require me to demonstrate that I knew what I was doing, mostly because I didn’t.

So, well, at that point I was moving to London to become a software developer. Woo? I wasn’t really that interested in programming, and I rather hated London, but hey it was a job.

I wasn’t very good at it at first, unsurprisingly.

I started doing some basic front-end stuff. I was thoroughly overconfident in my abilities with HTML and CSS off the back of the small amount I’d learned already. I also got tasked with setting up a server, because I knew Linux so I must know what I was doing, right? I had no idea what I was doing and wasn’t very good at asking for help and after about 4 days of butchering it we got a server that… probably did the right thing? While this was going on I was learning to write Java (which the project I was working on was written in). It seemed pretty easy – the syntax was weirdly cumbersome, and it seemed a bit limited in comparison to ML, but it wasn’t hard to get things done in it. I doubt I really knew what I was doing, but this didn’t seem to matter terribly. (Fragment of conversation I remember from this time period: “Hey, can I use recursion in Java?” “Yes, but you probably shouldn’t”).

But I stumbled my way through into competence, and then necessity forced me into actually figuring out what was going on and starting to think about how to do things better. At some point my friend Dave joined the company I was working at and we bonded over talking about work and computer science while waiting for our 10 minute builds to run so we could observe the tiny change to the HTML we’d just made (sadly I’m not exaggerating that).

At some point the persistent nagging feeling that things could be done better than this mixed with talking about CS with Dave and I grudgingly started to realise that this programming thing was actually quite interesting.

Concurrently with this, some friends on IRC had persuaded me to learn Haskell. I forget what their reasoning was, but I mostly went along with it. I already knew ML, which is basically a simpler and less puritanical version of Haskell (the module system is better, but remember that my experience of ML was from copying and pasting stuff into the REPL. I didn’t even know what a module was), so learning Haskell was pretty straightforward.

At this point I had basically two bodies of knowledge that were failing to connect – I had these cool languages I did maths programming in and this Java thing which I used to get stuff done. I started hunting around in my spare time for a middle-ground, and after a false start with Nice ended up in Scala.

But work was at this point pretty frustrating. For a variety of reasons I wasn’t really enjoying it much, so I decided to look for a new job. I ended up at a company called Trampoline Systems, where I got to work with good people on interesting problems. The company eventually went a bit wrong and most of us left (it’s still around, albeit in a very different form, but only one person from when I was there is still around), but by then it was enough: I’d become quite interested in the practice of programming, and I’d discovered I could find interesting jobs doing it.

The rest is, as they say, history. Despite my long history of not getting around to learning to program, I’d become a software developer anyway. I discovered I was quite good at it and it was quite interesting, so I’ve stuck around since.

Would things have worked out better if I’d learned earlier? I don’t know. I feel like the practice of solo programming is very different from the practice of programming in a team. What helped me was that I’d already developed a lot of habits of thought and problem solving before I ever really brought them to bear on the practice of software development. I’m sure if I’d learned to program earlier it wouldn’t have hurt, but I think I’d probably have a very different perspective on things today, and it’s unclear if it would be a better one.

I think the thing to bear in mind is that software development isn’t really about programming. It’s about solving problems with computers. Programming is generally the tool you use to achieve that goal, and for some tasks you need to be better at it than others, but ultimately what you need is to be good at problem solving (not silly abstract logic problems, but actual problems that people have that computers can help solve). This is a skill you can develop doing almost anything, and if you’re already good at it then you can probably become a good software developer remarkably quickly even if you’re as disinterested in programming as I used to be.

This entry was posted in How do I computer?, life.

How to quickly become effective when joining a new company

The other day my colleague Richard asked me how I managed to get started at Lumi quite so quickly. It’s a good question. When I started here I was pretty much doing useful work from my second day (I did manage to get “code” out into production on my first day, but it was my entry on the team page). I think I handed them a feature on day 3. For added funsies I’d only been writing Python for a month at that point.

I’ve seen enough new starters on equally or less complicated systems that I’m aware that this is… atypical. How did I do it?

The smartarse answer to the question is “obviously because I’m way smarter than those people”. As well as being insufferably arrogant, this isn’t a very useful answer – it doesn’t help anyone else, and it doesn’t help me (knowing how I do something is useful for me too because it allows me to get better at it and apply it to other things).

So I’m putting some thought into the question. This is largely just based off introspection and attempting to reverse engineer my brain’s behaviour. I’ve no idea how portable this advice is or even if it’s remotely accurate, but hopefully some of it might be useful.

There are essentially three key principles I apply, all about information:

  1. Acquire as little information as possible
  2. Acquire information as efficiently as possible
  3. Use the information you acquire as effectively as possible

I’ll elaborate on each of these and how to do them.

Acquire as little information as possible

This one is the only one that might be counter-intuitive. The others… well, yes. You want to acquire information efficiently and use it effectively. That’s kinda what learning quickly means. But acquiring as little information as possible? What’s that about? Surely when trying to learn stuff you want to acquire as much information as possible, right?

Wrong.

This sort of thinking fundamentally misunderstands the purpose of learning, at least in this context.

Learning is only useful when it allows you to get things done. Filling your brain with information is all very well, but what if you’ve filled your brain with entirely the wrong information?

Worked example (a hypothetical one; I’ve not seen this exact thing happen, though I have seen similar): suppose you diligently sit down and trace through the database schema, learning how everything fits together. You spend about a week doing this. Then you get told that that’s the old schema and we’re two thirds of the way through migrating to an entirely new data storage system, after which that schema will be retired. Now the knowledge of the history isn’t entirely useless, but there are certainly more useful things you could have been doing with that week.

The goal of learning as little information as possible forces you to do three important things:

  1. It forces you to learn the right information. If you only acquire information when it’s absolutely necessary then that information was by definition useful: It enabled you to do the necessary thing.
  2. It forces you to do things you can actually achieve soon. If task A would require you to learn a lot and task B would require you to learn a little, do task B. It will take you less time to learn the prerequisites, so you’ll get something achieved quickly.
  3. It prevents you from getting distracted by the immensity of the task of trying to understand everything and forces a more useful learning discipline on you by giving you concrete places to start.

“But David!”, I hear you cry in horror so profound that it causes you to use exclamation marks as commas, “Are you suggesting that learning is bad? That I shouldn’t try to learn how things work?”

Rest your mind and set aside your fears, for I am not!

The thing about information is that once you have acquired it you can build on it. Every little bit of knowledge you put into your head provides you with a framework on which to hang more knowledge. The journey of a thousand miles begins with a single step. Other profound sounding metaphors that convince you of the wisdom of my advice!

We’re essentially iterating this procedure over and over again. “What can I do? Oh, there’s a thing. Let’s do that. Done. What can I do now? Oh, hey, there’s another thing which I couldn’t previously have done but now can because I’ve learned stuff from the last thing I did”. By acquiring our knowledge in little bite-sized chunks we’ve rapidly consumed an entire information meal.

As well as interleaving getting things done with learning things, I think the quality of education you get this way is better than what you would otherwise get because it contains practical experience. You don’t just learn how things work, you learn how to work with them, and you learn a lot of concrete details that might have been missing from the high level description of them.

Now go back and read this section again (if you haven’t already. Don’t get stuck in an infinite loop and forget to eat and die. Because I’d feel really bad about that and it’s not an efficient way to learn at all). It’s literally the most important bit of this post and I really want you to take it on board.

Acquire information as efficiently as possible

Here is my knowledge acquisition procedure. I bequeath it to you. Use it well:

  1. Can I figure out how this works on my own? (5 minutes)
  2. Can I figure out how this works with googling and reading blog posts? (15 minutes)
  3. Ask someone to explain it to me (however long it takes)

OK, I know I said the previous section was the most important thing in this essay, but this one is pretty important too. For the love of all, please do not sit there ineffectual and frustrated because you can’t figure out what’s going on. It’s nice that you don’t want to bother your colleagues, and the first 20 minutes or so of trying to figure things out on your own is important as a way of preventing you from doing that too much, but your colleagues need you to be useful. They also probably possess the exact information you need. It is their responsibility and to their benefit to help you out when you get stuck.

Use the information you acquire as effectively as possible

This is a fuzzy one. It’s basically “Be good at thinking”. If there were a foolproof set of advice on how to do this the world would look very different than it currently does. Still, here are some ideas you might find useful.

Here is roughly what my thinking procedure looks like. Or, more accurately, here is my inaccurate model of what my thinking procedure looks like that I currently use to make predictions but don’t really expect corresponds that exactly to reality.

Err. That paragraph might be a bit of a clue as to what I’m about to say.

If I may get all meta for a moment: What is the purpose of knowledge acquisition?

I mean, sure, you want to get stuff done, but how does knowledge help you do that?

To me the value of knowledge is essentially predictive. You want to be able to answer two questions:

  1. If I do X, what will happen?
  2. If I want Y to happen, what should I do?

These two questions are natural inverses of each other. In principle a valid algorithm for question 2 is “Think of all possible values of X and see if they cause Y to happen”. Everything else is just an optimisation to the search algorithm, right?

First I’ll go into how to answer question 1:

At its basic it’s two words. “Mental models”

A mental model is essentially an algorithm you can run in your brain that lets you put in actions as inputs and get outcomes as outputs.

They can be implicit or explicit (note: this is my terminology and is probably non-standard). An implicit mental model is one that you can run without going through all the steps. You don’t need to be able to describe the model explicitly, you just know the answer. When most people say something is intuitive, what they essentially mean is “I have an implicit mental model for how this works”. You’re pretty comfortable working with anything you have an implicit mental model for, and can probably get things done with it pretty readily.

Explicit mental models are more work. You have to think through the steps linearly and laboriously in order to get the answer. But they’ve got a lot of benefits too:

  1. They tend to be more accurate. I find implicit mental models are full of shortcuts and heuristics which makes them really efficient when they work but often leaves some major gaps.
  2. They’re easy to fix when they’re wrong. And mental models will be wrong, all the time. The chances of you always being able to perfectly predict things are basically zero. An explicit mental model you can just patch, but adjusting your intuitions is much harder.
  3. You can explain your reasoning to other people. This is especially useful when you’re stuck and want to talk a problem through with someone else.

So both types of mental models have costs and benefits. So which do we choose?

Answer: Both!

Once you think of mental models in terms of being able to predict how things behave, rather than as the truth about how things work, you realise that having multiple mental models of how something works is not only fine but often useful: you can cross-check them against each other to see if they make different predictions, and you can use whichever is cheapest when the decision doesn’t matter very much, or whichever is most accurate when it does.

Additionally, it’s not really as cut and dried as I’m making it seem. What tends to happen is that explicit mental models turn into implicit ones. Once you’re familiar enough with an explicit mental model the ideas become intuitive and internalized and you need to consult it less and less.

Here are some tips for building mental models:

  • Reuse old mental models as much as possible. When learning python I started out as “Like Ruby with funny syntax”. Then I learned a bit more and replaced this with “Like Javascript with funny syntax and a decent module system”. Then I found out about descriptors and patched it as “Like Javascript with funny syntax and a decent module system, but bear in mind that the value you put into an object might not be the value you get out of it because of these magic methods”.
  • In general, build mental models as cheaply as possible. A mental model that you can build quickly is probably much more useful than a laboriously constructed one, because you’ll be likely to resist changing the latter.
  • Patch your mental models when they make wrong predictions.
  • Test your mental models constantly. This is useful a) Because it leads to finding out when you need to patch them and b) Because it greatly hastens the process of turning them into implicit mental models, which makes you more able to readily apply them in future.

That’s about all I’ve got on how to build mental models. Now onto question 2: I can predict the consequences of my actions. How do I go about predicting the actions that will lead to the consequences I want?

Here is roughly what I think of as my algorithm:

Section 1:

  1. Generate an idea
  2. Is it obviously stupid? Throw it away and go back to 1.
  3. Is it still not obviously stupid after a little bit more thought? Set it aside
  4. How many not-obviously-stupid ideas have I set aside? 3 or so? If so, proceed to the next section.
  5. How long have I taken over this? Too long? If so and I have any ideas set aside, go to the next section. Else, step away from the problem. Either come at it from another angle of attack or go do something else and let it simmer away in the background.

Section 2:

I now have a bunch of idea candidates. Put some proper thought into it and attempt to determine which amongst them is best. This may require actually trying some things out and gathering new information.

Once I have determined which of these is the best, decide if I’m satisfied with it. If I’m not, start the whole process again. If I am, this is the solution I’m going to go with for now.

So basically the idea is that you want to use cheap mental models to filter ideas quickly; then, once you’ve got a few goodish ideas to compete against each other, you can properly explore how they work as solutions. I don’t know how accurate a description this is, but I think it’s broadly OK.

Unfortunately it’s got a big gap right at the beginning where I say “Generate an idea”. How do you do that? Just try things at random?

Well, you just… do. I don’t really know. This may be where I have to fall back on my “Maybe I’m just kinda smart?” answer, but if so then sadly I’m not actually smart enough to give a good answer here. Here are some things I think that I might be doing, but it’s mostly speculation:

  1. Try and narrow the search space as much as possible. If you think you might be able to solve the problem by making changes to a specific thing, only look at ideas which change that thing. If that’s proving not to work, try a different thing. You should be able to use a mix of past experience and proximity to guide you here – think of similar problems you’ve solved in the past and look near what the solutions to those were. If you can’t think of any similar problems or those solutions don’t pan out, look nearby to the problem. The advantage of looking at a smaller area is that there are fewer things to try so you’re more likely to hit on the right one quickly.
  2. Try and narrow the problem by breaking it up into smaller pieces. When you do this it becomes much easier to narrow the search space as in the previous section. In order to do this try to find natural “cleave points” where the problem breaks neatly along specific lines.
  3. When you decide a solution is a bad idea, see if there are any general reasons why it’s a bad idea and try to avoid areas that relate to those reasons.
  4. Conversely, if an idea almost works but not quite, try exploring around it to see if there are any variations on it that work better.
  5. If you’re totally stuck, try thinking of something completely out there. It probably won’t work, but exploring why it won’t work might tell you interesting things about the problem and suggest other areas that might work.
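Several of these heuristics — narrowing the search space, exploring around near-misses, and trying something completely out there when stuck — look a lot like local search. Here’s a hypothetical sketch, where neighbours and random_idea stand in for whatever “nearby” and “out there” mean for your particular problem:

```python
import random

def explore(start, score, neighbours, random_idea, steps=100, seed=0):
    """Local search sketch: mostly look near the best idea so far,
    occasionally jump somewhere completely out there when stuck."""
    rng = random.Random(seed)
    best = start
    stuck = 0
    for _ in range(steps):
        if stuck >= 10:
            # Totally stuck: try something completely out there.
            candidate = random_idea(rng)
            stuck = 0
        else:
            # Explore around the current best idea (narrowed search space).
            candidate = rng.choice(neighbours(best))
        if score(candidate) > score(best):
            best = candidate
            stuck = 0
        else:
            stuck += 1
    return best
```

The stuck counter is doing the “totally stuck, try something out there” heuristic; everything else is plain hill climbing around the current best idea.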

I don’t think that’s sufficient. I’m afraid this might just be a case of practice makes less imperfect. I think part of why I’m “smart” is that my brain just won’t shut up (seriously. It’s annoying), so it tends to gnaw at problems like a dog with a bone until they give in, which has given me a lot of practice at trying to figure things out. Really, this might be the best advice I can give you, but you’ve probably heard it so many times that it’s redundant: Practicing solving problems and figuring things out makes you better at it. Still, I think there should be things you can do above and beyond that – practice is no good if you’re practicing doing something badly. I’d love to have a firmer idea of what they were.

Fitting it all together

I like to think of this as a process of exploration. You start somewhere simple (I find “actually using the product” to be a good place to start). You look around you and, step by step, you explore around you while doing whatever jobs you can find in the areas you currently have access to. Your colleagues will help you find your way, but ultimately it’s up to you to draw your own map of the territory and figure out how all the different bits connect up. Think of this post as my personal and eclectic guide to software cartography.

Does it work? Beats me. Certainly I seem to be doing something right, and I really hope that this encapsulates some of it. I think it’s mostly good advice, but there’s definitely a bit of a danger that I’m cargo culting my own brain here.

If you try any of this, do let me know how it goes, and please do share your own ideas in comments, email, etc. because I’d really like to know how to get better at this too.

This entry was posted in life, Open sourcing my brain, rambling nonsense.

More on labels and identity

I’m heading to Nine Worlds later today, which means that instead of getting ready I’m procrastinating by thinking about anything other than the fact that I’m about to spend nearly three solid days surrounded by a very large number of near strangers and having to function socially in their presence.

This caused me to think about the you are not your labels post again and reminded me that I had promised a follow up to clarify my position on some things. I still don’t have a total follow up, but some of my thoughts on the subject crystallized while I was thinking about them in the shower (the shower is a wonderful place for composing thoughts), and I thought I’d write them down while they were in my head.

Identities are important

Apparently one of the vibes I gave off in that post was the standard “privileged person complains about identity politics because it’s so divisive”.

Allow me to state categorically: Fuck that. Identity is super important.

Often when something is described as “divisive” what is meant is “I hold this opinion. You hold that opinion. Therefore you are being divisive by holding a different opinion to me”.

Implicit in “Identity politics are so divisive” is “But aren’t we all the same really?”. It’s very easy to think we’re all the same if the dominant narrative of society is that everyone is like you (note: This is true even if you’re not actually in the majority. Witness the “male as default” thing despite the fact that men are a slight minority). Identity politics is only divisive if you hold the opinion that everyone should act like the default.

So, yeah. My point was in no way intended to diminish peoples’ identities. I apologize if it sounded that way. Identities are important and we should talk about them more, not less.

Labels are important

Labels are super useful.

As well as just the basic fact that language is basically our species’ hat – it’s most of how we actually get things done – labels serve a lot of useful functions.

When you’ve named something you’ve acknowledged that it’s a thing that happens reasonably commonly. When there’s a word for something you get to go “Hey! I’m not alone! There are other people like me!”. This can be really helpful – a lot of people think there is something wrong with them because they’re different from everyone else right up until the point where they discover that there are all sorts of other people like them (I don’t necessarily think it’s a good thing that we need other people like us to feel comfortable with our identity, but it’s a thing that happens whether I like it or not and I’m certainly not going to judge people for needing it).

It’s also useful in group formation. Having groups is useful – it gives you more force when fighting for equality because you can share a voice, it surrounds you with people who understand your problems, etc. It’s much easier to form a group around a concept when you have a label for that concept.

I personally don’t feel strongly about any of my labels, but that’s a personal choice which I wouldn’t especially encourage others to follow. It’s just how I work.

So, labels are great. I encourage you to use them freely and happily, should you desire to do so.

Labels are not identities

Hopefully I’ve now convinced you that the reason I think my point that you are not your labels is important is not because I think either labels or identities are unimportant.

In fact, the reason I think this matters is precisely because they both are so important.

The key thing is that they’re important in different ways. They’re highly connected, and both feed into each other, but they are distinct things which are important for distinct reasons. Sometimes the difference is subtle, sometimes it’s really not.

But what happens when you conflate them is that that difference is erased, and the way you treat each is distorted to match the way you treat the other. If you consider a label a part of your identity you may get very angry and judgmental about other peoples’ usage of it. If you consider your identity a part of your labels, you may get a form of impostor syndrome where you have a platonic ideal of what that label looks like and feel terrible about yourself for not matching that platonic ideal.

This is the key point I was trying to get at which I don’t seem to have adequately conveyed last time. It’s not that labels don’t matter or that identities don’t matter, it’s that the difference between the two does matter.

I may have more to say on the subject at some future date, but that’s all I’ve got for now. Hope it helped clarify my position.

This entry was posted in Feminism, life, Open sourcing my brain, rambling nonsense.

You are not your labels

I had a conversation with my friend Kat ages ago. It went something like this:

Me: People are weird about labels.

Kat: No, you’re weird about labels.

It’s a fair cop. When I react differently to something than 90% of people, it’s fair to say that I’m weird rather than they are.

This is a post about how I feel about labels, and how I think peoples’ interactions with them are unhealthy. I especially owe a discussion of this in the context of sexuality, but that’s mostly because that’s the context in which this came up and I owe a larger explanation of my opinion on the subject than I could fit in a sequence of 140 character soundbites. It may take a little while to get to that part of the post, so be patient.

Before I proceed, I need to add the sort of disclaimer I usually do when writing about feminist topics:

I’m sitting here with a massive amount of privilege. I’m white, middle-class, cis, male, able-bodied, mostly neurotypical and a sufficiently close approximation to straight that I’ve probably just outed myself to a whole bunch of people I know by not just including “straight” here (I don’t think my parents read this blog but if they do, oops).

I think I’ve adjusted for that. I’m reasonably confident of what I’m going to say here, mostly because it’s a general principle rather than one that pertains to any specific axis of my privilege.

But while this perspective doesn’t make me wrong, what it does do is make it a whole hell of a lot easier to practice what I preach. I’m about to go on a long explanation about the effects of labels and how you shouldn’t get so attached to yours. It’s pretty easy to say labels aren’t important if most of the ones applied to you are ones you’re unlikely to ever be challenged on, and I pretty much fit the societal narrative of “this is what a normal person looks and acts like” (until I open my mouth and start ranting about some abstract philosophical point or telling people they should be picking things at random, but even that nicely pigeonholes into “geek”, which isn’t exactly a rarity these days).

So if you read this and go “Yeah, I get where you’re coming from, but my labels are really important to me because reasons, so they’re absolutely a fundamental part of my identity”, that’s cool. I totally get why they might be. I mean, I still think all the things I’m about to say hold true, but it’s pretty hard to go through life without some negative impacts and these are far from the worst. Besides which, I don’t know your situation and even if I did I don’t have any moral authority to tell you what to do. This is merely how I think the world works, and how I try to behave in response to it.

Second disclaimer I implicitly consider attached to all my blog posts but feel I should reiterate here: I’m totally not an expert on this. If I’m wrong, call me on it. Please.

OK. Disclaimer over.

Let me tell you how my thoughts on this subject started.

As a kid, I was diagnosed with Dyspraxia. I still have trouble understanding exactly what this is supposed to mean, and my experience of it as a kid doesn’t match all that well with the wiki article, but for me what this meant was:

  • I was really clumsy
  • There was about a 60 point difference in my IQ depending on whether I took the test orally or written (oral was higher).

Dyspraxia is apparently not something you get better from, but I seem to have taken a pretty good shot at it. I’m still pretty clumsy (though less so), but when I retested as a teen I’d basically closed the IQ gap. If you care, I think this is mostly because I have a really active internal monologue which I use as a coping strategy (pretty much all my writing I’m basically talking through in my head. I imagine that’s normal to a greater or lesser degree, so I’ve no real idea if this is just something you learn to do as an adult that I wasn’t very good at as a kid or what, but there you go).

The details of my dyspraxia aside, why is this relevant?

Because it gave kid-me a very nice inside view on how labels work.

As far as I was concerned, “dyspraxic” was not a thing I was. I mean I acknowledged the actual empirical details of it – I was definitely clumsy, and sure I was way better at some mental things than others, but wasn’t that normal? People are good at different stuff. You learn to be better at the bits you care about, you learn to do without the bits you don’t. That’s how it works, right?

To my parents and school though, this was a seriously big deal. David was no longer this weird little kid who was obviously super bright (not to mention ever so charming and modest) but wasn’t good at stuff, he was dyspraxic. It made sense now! Dyspraxia is totally a thing, and we can take these steps to help the dyspraxic kid.

Except… it’s not really a thing. What it is is a collection of loosely interacting phenomena and spectra which all seem to be more or less related. You’re not just binary dyspraxic or not, you express different variations of it, you express it to varying degrees, you express different bits of it to varying degrees. There are as many forms of dyspraxia as there are dyspraxics. Sure, we have a label, and we have a lot in common related to that label, but really it’s just a large corner of the weird and varied landscape of what people are like.

But despite the fact that it doesn’t refer to any one easily isolatable thing and despite the fact that I didn’t really feel any attachment to the label, it still proved very useful to the people around me.

Why?

Well because that’s one of the main things language is for.

When we use words, we’re not expressing some absolute nature of the universe. What we’re doing is conveying enough information to be useful.

Consider two colours. They’re both green. Are they the same colour? No. One is this green, the other is this green. When we cut up colours into words, we’re taking what is quite literally a spectrum and chopping it up into discrete chunks.

Why do we do this?

Well, there are two main reasons, and you can see them both in my dyspraxic example.

The first is communication. You don’t want to have to give your whole life story in order to have a basic interaction. Instead, you present a simplification of the truth and then drill down into the details if and when necessary. For example, I will often tell people I’m vegetarian when it’s context appropriate, despite the reality being way more complicated. Language is by its nature imprecise, and that’s what makes it work.

The second is prediction. It’s easy to learn simple rules – if I ask you if two colours go together and one of them is green and the other is purple, you’re probably going to say no regardless of which green and which purple I’ve chosen. It’s not an ironclad rule, but it’s pretty likely. Similarly, if I tell you I’m dyspraxic there are certain things that you can do to adjust my education to help me out (apparently. I didn’t find them very helpful as a kid, but I may just have been a bit of a brat about it).

So labels are seriously useful.

But here’s the key thing: Being useful doesn’t make them true. They are a way of looking at the world, not a feature of the world.

And sometimes that way of looking at the world breaks down and you have to fix it up.

Suppose you’ve currently got a very simplistic view of gender. There are men, and there are women, and those are all the genders there are. You’re merrily carrying on your life safe in your worldview. Then someone comes along and they say “Excuse me, but what about me? I’m kinda a bit of both”. s’cool. You knew those words were just approximations to reality. As a good, responsible, human being you update your worldview and accept them. Another person comes along and tells you that they’re neither. No problem.

The problem comes when you start to take these labels too seriously. By their nature, approximations are for using when they work and discarding when they don’t.

Supposing I were to consider being a man a really integral part of my identity – I don’t just mean what I look like, or my body identity, but the whole baggage and social constructs around it and everything. I’m now very invested in this as a real thing – it’s part of who I am.

Now suppose a trans man comes along and tells me that he’s a man. Sure, he happens not to have a penis, but that doesn’t stop him being a man.

Where previously I could have just gone “Oh, cool. Sorry, my previous approximations to the world don’t work so well here. Let me update them”, now he’s a threat to my identity. I don’t identify as someone with a penis, I identify as A MAN, and I have bundled my penis in with a whole host of other ideas like liking beer and action movies. By claiming that you can be a man without having a penis, he has now eroded at something I perceive as an integral part of myself, and that makes me much less likely to be accepting of him. I’ve held too tightly to my view of the world, and he’s the one who got caught in the crossfire.

Obviously the above is mostly naive idealism. I don’t really think that if everyone perfectly followed the advice in this post we’d all be wonderful and inclusive. Sure would be nice if it were true though. Also I don’t think that labels are the sole source of transphobia (there are plenty of others, and many of them are a lot darker). This is more… how transphobia could arise amongst otherwise well intentioned people.

But I think a lot of biphobia actually does arise this way. Not all of it by any means, but I’d be astonished if it weren’t a large contributor.

We’ve two sides, gay and straight. Each has quite a lot invested in that label, and because they’ve formed lines along those labels they’ve got the whole baggage coming in along with it. While you can express aspects of the label more or less strongly (see “straight-acting”. Sigh), you’ve at the very least likely bundled “Is attracted to (gender)” in with “Is not attracted to (other gender)” in with your identity when you pulled in the label.

Then you have the bisexual (or pansexual if you prefer) people in the middle going “Hey, what about me? I like men and women. That’s cool, right?”

And unfortunately it’s really not cool. We’ve taken this whole complicated configuration of the world and boiled it down to “I’m straight” or “I’m gay”, and firmly associated our identities with those amorphous blobs of ideas, so when you come into the middle of it and go “Hey, I’m like you except for this thing you’ve very strongly identified as not being”, you’re now chipping away at our identity.

When you look at it this way it’s… understandable how a lot of this behaviour arises. Not desirable, not excusable, forgivable given change perhaps, but certainly understandable. Imagine how you feel when your identity is threatened, when people deny your experiences. It’s really very unpleasant – it’s at best hurtful, and when done en masse it can be downright soul destroying.

When you do that to someone just by existing, it’s not surprising their reactions to you are a bit hostile.

The solution here is of course not that you should stop existing. Nor is it to deny your nature.

The solution is that people should stop being so weird about labels.

Keep using them by all means. They’re wonderfully useful things. We couldn’t function as a society without them.

Just… maybe think twice about letting them into your identity. Your labels are how you describe yourself, not who you are. Sometimes you’ll discover that those descriptions aren’t working out so well, or that they need to be far more inclusive than you thought they were. Try not to fight it. It’s how labels are supposed to work.

This entry was posted in Feminism, life, Open sourcing my brain, rambling nonsense.