
Being an example to others

Note: I realised I missed my old conversational style of writing, so I decided to resume it on occasion, including for this post. I will not be using it for heavier posts but I thought it would be nice to be able to code switch.


You know that thing you do where you hold yourself to standards which you would never dream of holding other people to?

(If you don’t know that thing, this post may be less useful for you, but it is a trait that is very common among people I know, so I’m confident that this post has an audience)

Anyway, that works much less well than you think it does, and you should probably consider walking it back a bit.

The basic problem with this is that people model their behaviour on those around them. If you are seen holding yourself to a standard, people around you will observe this and follow suit, even if you tell them not to, so by holding yourself to that standard you are implicitly holding other people to it, even if you don’t want to. This is especially true if you are prominent in a community, but it’s true for everyone.

So I suggest the following standard for good behaviour: Behaviour is good if it is not only good in and of itself, but also contributes to a culture of good behaviour1.

Behaviour that is good in and of itself but which creates a bad culture should be looked on with extreme caution.

What do you think of when you hear the phrase “I hold myself to standards that I wouldn’t hold anyone else to”? Does it sound like the speaker is being kind to themself, or does it sound like they are probably beating themself up over something that really they should just chill out a bit over?

In my experience it is very much the latter scenario, and if you find yourself doing that I would like to encourage you to try to stop holding yourself to that standard.

A particularly pernicious example of this is people not prioritising their own needs. Prioritising others’ needs over your own feels good and virtuous – you are sacrificing yourself for others, which many people think of as practically the definition of virtue.

The problem is that in doing so you are contributing to an environment in which nobody is prioritising their own needs. When you work yourself to exhaustion, you are not just working yourself to exhaustion, you are teaching other people to do the same.

Conversely, behaviour that is neutral and/or mildly selfish on its own merits may in fact be very good if it creates a culture in which everyone feels like they have permission to do the same.

To continue the example of needs: by asserting and respecting your own needs you are giving everyone around you permission to do so. What would you want a friend who is looking exhausted and run down to do? You’d like them to take a break, right?

The problem is that if they take a break without feeling that it is OK to take a break, they will mostly just feel guilty about that. That might still be better than not taking a break, but it’s not a pleasant experience.

What this means is that if you want your friends to take a break, you need to create a culture in which taking a break is seen as OK. In order to do this, you need to take a break yourself!

I find this notion of permission very powerful as a route out of guilt over “selfish” behaviours: you want the best for others, so you want to give them permission to seek it out for themselves, but this requires a culture where that is acceptable, and that requires you to exemplify the behaviour you want to see in others, so to grant that permission to others you must in turn grant yourself permission to seek the best for yourself.

For many of us, empathy for others is easier than empathy for ourselves, but by looking at the problem through this lens of cultures of behaviour, extending empathy to others requires us to extend it to ourselves. You can think of this as a kind of reversal of the golden rule: Do for yourself as you wish others would do for themselves.

Some examples where I regularly use this in practice:

  • I ask “stupid” questions – on the one hand I don’t want to be the person who is wasting everyone else’s time, on the other hand I do want everyone who is confused to be able to ask questions to resolve that confusion. By asking questions myself, everyone else also feels more able to do so.
  • When I am at a social event and everyone is having a good time but also I am very tired and want to go to bed, I say “Thank you for a lovely time, but I am very tired and want to go to bed. Good night, everyone”. At this point half a dozen other people go “Actually, me too” and also go to bed, because they’ve been waiting for permission to prioritise their own tiredness.
  • When something is making me uncomfortable, I say that I am made uncomfortable by it. I could try to tough it out, but I wouldn’t want others to tough it out, so by stating that I am uncomfortable everyone else who is also uncomfortable is more able to say the same, both now and in future.
  • When there is something I would like to happen, I tell people that, so that other people also feel able to ask for the things they like. (In truth, this is the one I find the hardest, but it’s important).

This is also a good place for positive use of privilege: Some of these (especially the asking stupid questions one) are much lower cost for me to do because I’m a moderately high status white guy.

I can’t promise this will magically fix all the guilt that you experience over being kind to yourself, but I’ve found it to be an excellent start.

Ideally of course you should be kind to yourself because you are a person and people deserve nice things, but in the meantime you should also be kind to yourself because the ones around you who you care about are also people, and you need to show them that people deserve nice things.


Gendering

Epistemic Status: Somewhat speculative, mostly descriptive.


What is gender?

That is an excellent question, but it seems to be a very hard one to answer well. Instead I’m going to ignore it. This post should work for most “reasonable” notions of gender.

Instead this is a post about how categorising people into genders affects how we conceptualise them, and how this leads to the creation of gender norms that we then enforce.

I’ll mostly be focusing on binary genders (male and female) in this post, but I’m not making any assumption that those are the only ones, only that they feed heavily into how we reason about gender.

Gendering

Given a group, we tend to form inferences based on group membership. This is a perfectly reasonable thing to do – if someone is from France, we tend to assume they speak French. When someone votes for a particular party, we tend to assume they support many of that party’s policies (or at least reject other parties’ policies more).

Unfortunately what starts as a set of perfectly reasonable inferences often then plays out very badly in practice – reasonable inferences get exaggerated, and feed into how we construct the social norms we enforce, often harming the people we stereotyped.

We do this in particular with genders. If a trait is particularly prominent in people we gender a particular way, we form stereotypes around it, and the trait itself becomes gendered.

For example, consider strength. It is simply true that men are typically taller and stronger than women. That’s not to say that any given man is taller or stronger than any given woman (many men are short and/or weak), but looking at group averages the link between gender and height and strength is fairly clear.

We then reverse this stereotype. If it’s true that men are typically stronger, then it’s true that if someone is stronger then they are more likely to be a man. Thus strength becomes gendered – the trait becomes used as a marker of masculinity.

In and of itself this is a perfectly reasonable inference procedure. It’s literally true that if someone is stronger they are more likely to be a man. The problem is that we now erase the underlying data and simply treat strength as intrinsically manly, labelling strength as more masculine even once you have surpassed the typical strength of a man.
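
To make the reversal concrete, here is a minimal sketch of that inference in Python, with entirely made-up numbers – the distributions and the 50/50 prior are assumptions for illustration, not data:

```python
# Hypothetical strength distributions, purely to illustrate the direction of
# the inference; none of these numbers are real measurements.
from statistics import NormalDist

men = NormalDist(mu=60, sigma=10)
women = NormalDist(mu=45, sigma=10)

def p_man_given_strength_above(threshold, prior_man=0.5):
    # Bayes' rule: turn P(strength > t | man) into P(man | strength > t).
    p_given_man = 1 - men.cdf(threshold)
    p_given_woman = 1 - women.cdf(threshold)
    return (p_given_man * prior_man) / (
        p_given_man * prior_man + p_given_woman * (1 - prior_man)
    )

for t in (50, 60, 70):
    print(t, round(p_man_given_strength_above(t), 2))
# roughly 0.73, 0.88, 0.96 – more likely a man as strength rises, never certain
```

Even with these invented numbers the inference is real, but it never becomes a certainty – which is exactly the underlying data that gets erased once strength is treated as intrinsically manly.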

These social expectations then lead to enforcement. Men are shamed for being weak and women are shamed for having visible muscles because they look too manly. What started with a reasonable inference about differences between groups has turned into a social norm where everyone is forced to construct their gender to exaggerate the differences.

This enforcement in turn means that the group differences are larger than those we started with – if most people are expending effort to seem more masculine or feminine, the observed difference between them on that gendered trait will be larger than it would be in the absence of enforcement.

Thus we engage in a sort of “gender inflation”, where we take our initial notions of gender and expand them out into a kind of social halo around our original gender categorisation. This inflated gender manifests both in our social expectations and in the actual data we observe.

Small genderings become large

Because of this gender inflation, it is extremely normal to have gendering for traits which is more or less invented out of thin air, because a small gendering occurs which we then inflate into a large one.

These small genderings can come up in all sorts of ways, but the easiest way is just chance. Culture is formed mostly out of memetic evolution (that is, people copy behaviours from others, and retain behaviours that in some sense work well), and as a result is highly contingent – often the reason why people behave in a particular way is the result of some random variation years back. There’s no intrinsic difference that leads to, say, the distinction between English and French, we just made different choices generations back which have been built on over time.

This contingency of culture can often lead to genderings because of some degree of homosociality – the tendency to prefer same-sex friendships (which sometimes may be strongly enforced by culture). The result is that there are opportunities for different contingent developments to occur between men and women, and that difference then becomes gendered, and gender inflation exaggerates those differences.

Genderings can also just be made up of course. There’s a long history of men theorising major gendered differences where none exist, and often that theorising is all that’s needed to create a runaway gender inflation where that difference becomes real solely because it is enforced. Because access to power is gendered, it is often easy to reshape gendering in ways that serve power.

What to do about it?

If gendering was purely descriptive and there were widespread acceptance that the possession of masculine or feminine traits didn’t necessarily imply much about other masculine or feminine traits, that would be one thing, but unfortunately it goes further than that in at least two ways:

  • People treat gender as predictive. If you have some gendered traits, you are expected to also have other gendered traits. This isn’t intrinsically incorrect, but leads to significant access problems where your gendered traits may open or close certain doors. I’ve e.g. written about this previously in the context of interviewing.
  • People enforce gendering. If you are perceived as a particular gender, you will be punished for not conforming to expectations of that gender. This actually doesn’t work well for anyone, because so many traits get gendered that even if we tick the right boxes on most of them it’s very unlikely we tick the right boxes on all of them. This is similar to some of the issues I talked about in On Not Quite Fitting.

As a result of these two factors, gendering tends to feed into a lot of systems of control, where we reward people for gendering themselves “correctly” (by adhering to a consistent set of gendered traits) and punish them for mixed gendering.

Figuring out how to solve all of these issues is a rather big task, and I don’t propose to do that in this blog post.

I’ve previously described my utopian position as somewhat gender abolitionist. I no longer think that’s a good idea, because fundamentally, regardless of whether we regard people as having genders, we will still regard many traits as gendered because of underlying biological differences, and I think most of the dynamics described above will continue to hold.

I think the current increasingly diverse range of non-binary genders we recognise is very helpful, both for letting people find the “points in gender space” that work for them, and allowing us to have a richer understanding of gender and gendered traits, but I don’t yet know what that richer understanding looks like.

My more modest short run suggestions are:

  • Gender inflation seems like a big deal, and I don’t think the extent of it is widely appreciated or understood. Be aware of its effects and try to damp them down rather than enforce them.
  • This feels like it shouldn’t need saying if you’ve read this far, but stop enforcing gendered traits. If someone exhibits a mix of masculine and feminine traits, that is a perfectly reasonable thing for them to do, regardless of whether that’s because they have a non-binary gender or are just breaking out of stereotypes within their binary gender.
  • In “Rewriting the Rules”, Meg-John Barker suggests that once you get to know someone as an individual you have much higher quality sources of information about their traits than relying on their gender as predictive. I strongly endorse this.

Jiminy Cricket Must Die

Before I get on to the main point of this post, let me ask you a question: when reading a piece someone wrote, does it matter if there use of language conforms to the rules of grammar your used to, or is it acceptable if their writing an they’re own dialect?

If you’re like me that sentence made you uncomfortable1. There’s a kind of twitch, a feeling that something isn’t right, a desire to fix it. Right? It feels wrong.

If you’re a Python programmer, I’d encourage you to go look at should-DSL and stare at the examples for a while until you understand how they work to get a similar sense of wrongness.
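
For those who would rather not click through, should-DSL assertions look roughly like this (recalled from the project’s documented examples, so the exact matcher names may differ slightly):

```python
# Roughly what should-DSL code looks like – the overloaded pipe operator is
# the point. If this doesn't produce a small twitch, the project's docs have
# plenty more examples to stare at.
from should_dsl import should

1 |should| equal_to(1)
'jiminy cricket' |should| include('cricket')
```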

In his book “Voices: The Educational Formation of Conscience” Thomas Green describes these as voices of conscience. He defines “conscience” as “reflexive judgment about things that matter”, and argues that these voices of conscience are not intrinsic, but learned as part of our moral development through our membership in communities of practice – English speakers, Python programmers, etc.

That is, nobody is born knowing how to write Python or speak English, but in the course of learning how to do so we also learn how to behave as English speakers and Python programmers. By our membership of these communities we learn their norms, and by those norms we acquire voices of conscience that tell us to follow them. Because we exist in many different contexts, we acquire many different voices of conscience. Often these may disagree with each other.

Importantly, we can acquire these voices for the norms of a community even if we don’t adhere to those norms.

Green distinguishes between three different ways of experiencing a norm. You can comply with it, you can be obedient to it, and you can be observant of it. Compliance is when your behaviour matches the norm (which may e.g. be just because it is convenient to do so), obedience is when you actively seek to follow the prescriptions of the norm, and observance is when you have internalised the norm and experience it as a voice of conscience.

To this list I would also add enforcement – whether you try to correct other people when they fail to comply with the norm.

It’s easy to construct examples that satisfy any one of these but not the others, but for example the sentence at the beginning is an example of non-compliance where I am still observant of the norm: I know the sentence is wrong, but I did the wrong thing anyway. Similarly, I am observant of the norm when I notice that others’ usage is wrong, even if I make no attempt to enforce it (which generally I don’t unless I’ve been asked to).

It is specifically observance (and to some extent enforcement) that I want to talk about, and why I think the voices metaphor breaks down.

Let me turn to a different source on ethics, Jonathan Haidt. In his book The Righteous Mind he presents Moral Foundations Theory, which proposes a set of “foundations” of ethics. I think Moral Foundations Theory is approximately as useful a set of distinctions as Hogwarts Houses2, but a lot of the underlying research is interesting.

The following is a story that he presents to people as part of an experiment in where morality comes from:

Julie and Mark, who are sister and brother, are traveling together in France. They are both on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie is already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy it, but they decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. So what do you think about this? Was it wrong for them to have sex?

Jonathan Haidt, The Righteous Mind, 2013, p45

If you read interview transcripts of people’s reaction to this story (which was deliberately constructed to provoke a disgust reaction), the common factors that emerge are that it feels wrong, and to the degree people can justify it they first of all struggle and then eventually do so on the basis of it being a norm violation rather than being able to point to any “objective” reason why it was wrong (partly because the story was specifically constructed to achieve that – the parties are consenting adults, there is no risk of pregnancy, no harm is done, the story is kept a secret so does not normalise something that might be harmful in general even if it’s not in the specific case, etc.). People make their judgements about morality based on intuitive, emotional, responses to the scenario and then try to reason their way to that conclusion.

It is useful here to have the distinction between a belief and an alief. A belief is something that you think to be true, while an alief is something that you feel to be true (they’re called a-liefs because they come before b-liefs). Haidt’s research suggests that aliefs and not beliefs are the foundation of people’s moral “reasoning”.

This then is the source of my disagreement with Green’s metaphor of a voice of conscience: Conscience doesn’t start with a voice, it starts with a feeling. There is then a voice on top of that, allowing us to reason about and navigate the norm, but the feeling is what gives the voice its power. Without a felt sense of conscience, the voice is just knowledge that this is a norm that should be obeyed if there are not to be consequences. Once the consequences go away, so does obedience to the norm, but if you have learned the feeling of conscience then it will linger for a long time even if you leave the community where you learned it.

How do we acquire these norms (Green calls the process normation)? Bluntly, operant conditioning.

When we obey the norm, we are often rewarded. When we disobey it, we are often punished. Sometimes this is enforced by other people, sometimes this is enforced by reality, sometimes this is enforced by our own latent fears about standing out from the crowd (itself a feeling of conscience that we have acquired – standing out makes you a target).

The conditioning trains our habits, and our feelings, to comply with the norm because we learn at an intuitive level to behave in a way that results in good outcomes – behaviours that work well are repeated, behaviours that work badly are not, and we learn the intuitive sense of rightness that comes with “good” behaviour from it.

So what does conscience feel like? Conscience feels like following the path in the world that has been carved for you via your training. When you stick to the path, it feels right and good, and as you stray from it the world starts to feel unsettling, and even if you no longer fear being punished for it you have learned to punish yourself for it.

This may sound like a very cynical framing of it, so let me just reassure you that I am not about to start talking about “slave morality” and how we should throw off the oppressive shackles of society’s unreasonable demands that we should be nice to people.

But there is an important point in this cynicism: The process of conscience formation, and the feelings that result, are morally neutral.

The ways that we learn the rules of grammar are the same as the ways in which we learn that harming people is wrong, and the same ways that people learn that, say, homosexuality is wrong. We might learn these through our memberships of different communities, and we certainly learn them with different strengths, but we acquire them through broadly the same mechanisms and acquire broadly similar senses of rightness and wrongness through them.

Over time, we are part of and pass through many communities, and accrue senses of conscience from them. Because of the shared felt sense of conscience, we cannot necessarily distinguish between them, and we end up with an underlying muddy sense of rightness and wrongness with no clear boundaries or sense of where particular parts come from. Some things feel like the sort of thing you’re supposed to do and some things don’t.

Much of this underlying sense of conscience is bad and needs to be destroyed.

Because the formation of conscience is a morally neutral process, the consciences that we form may be bad.

How often does this happen? Well, consider this:

  1. We learn our consciences from the groups in which we are embedded.
  2. We are all a part of society.
  3. Society is really fucked up.

As we go through life, we pass through different spaces, and learn their norms, and then when we leave we drag the consciences that we learned there along with us. Many of these spaces are broken, and we learn bad norms from them that we have to break out of if we want to grow. e.g. “men shouldn’t express their feelings” or “I’m not allowed to set boundaries”.

As we grow more morally sophisticated (which is not necessarily the same as more moral) we come to understand that there is a distinction between “feels wrong” and “is wrong”, and that just because we react to something with visceral disgust doesn’t mean we should necessarily consider it immoral.

As a result we separate ourselves from the feeling of conscience and privilege the voice of conscience. If we can’t explain why something is bad, we say things like “Well I wouldn’t do it but obviously that’s fine if you want to”. Sometimes through gritted teeth, sometimes while genuinely managing to convey the impression that you should do you.

At this point we pat ourselves collectively on the back for having become enlightened and moved past those ugly primitive urges and archaic social constructs. We’ve moved from listening to the feeling of conscience to the voice of conscience, and because voices are the tool of rational analysis we think we have moved to a higher level of moral understanding.

We haven’t. This was the wrong thing to do. It is at best a transitional step, but more commonly it is a dead end.

The problem with this strategy is that it conflates enforcement with observance, and aliefs with beliefs. We think that because we have stopped enforcing the norm we have stopped observing it3, and we think that because we no longer believe something is immoral we no longer alieve it.

It is important to bring our moral beliefs and aliefs into alignment, but the way to do that is not to suppress our feelings on the subject. Suppressing feelings doesn’t make them go away, it buries them and increases the associated trauma. Disassociating from ourselves isn’t the path to becoming a more moral person, it just causes us to build up trauma in the areas that we’re ignoring.

If we want to continue our moral development past a certain point, we need to learn the skill of navigating our conscience, and that means getting to a point where we can put aside the voice of conscience and look deeper as to where the feeling comes from.

Instead of flat out declaring that our moral intuitions are wrong and we shouldn’t feel that way, if something feels wrong we need to be able to approach that feeling and ask questions of it. The most important questions being where, why, and from whom did I learn that this was wrong?

This doesn’t make the feelings go away. Making the feelings go away is not the goal, at least initially. The goal is to get a handle on them, and to get to the point where you can make progress. Rather than being in constant conflict between voice and feeling, you can start to navigate between the two, and hopefully eventually come to a healthier position where your moral aliefs and beliefs line up.

This is extremely hard. I wish I had nice, easy, shortcuts to offer on how to do that, but I don’t. I’m still figuring all of this out myself.

I think the basic starting point for solving this is getting better at engaging with our feelings at all. I’m still processing how best to do that, and I owe you one blog post about this, but for now I’ll hand you over to this Twitter thread I wrote yesterday.

Once we’ve done that, the next step is to do the work. Ask ourselves questions, learn why we feel a particular way. If we still think the feeling is valid afterwards, that’s great, we’ve learned more about our moral intuitions. If, on reflection, we decide that actually this moral intuition comes from a bad place, we can start to examine it and, from a place of safety, start to unpick some of the moral damage that was done to us.


The missing social technology sector

There’s a thing that I’ve been puzzled about for a while. There’s an entire sector of industry that, as far as I can tell, does not exist and should. For want of a better term I’m going to call the thing that sector would do social technology1.

It’s entirely possible that the answer is that this sector does exist but I’m not in the normal target market. If you’d like to come along and shout “You idiot tech bro. X, you’ve invented X.” then honestly this time I’d be delighted because I legitimately have tried to solve for X and failed.

What is social technology?

Social technology is “technology” built out of groups of people following rules, maybe with assistance from simple props, rather than software, machinery, etc2.

What does that look like? Well, let me give you a free sample. Here’s how to have a great high level conversation with someone (a friend, coworker, etc).

  1. Find yourself a five minute sand timer (you can use a phone timer if you like, but I find the soft limit of having it be visual is helpful rather than a loud beep).
  2. Place the timer in front of one of you. That person picks a topic they’d like to talk about. Maybe write down a sentence about what it is if you like.
  3. Now set the timer going and talk about that topic, trying to stay more or less on focus.
  4. When the timer runs out, the person who it is not in front of can either turn it over and leave it where it is to continue discussing the topic, or claim it. They now get to set the topic as above. Often what they will say is “I’d like to focus on X that we were talking about” or “What you said about Y reminded me about Z that I’d like to discuss” but that’s not required.

This is from a discussion system I’m vaguely tinkering with called TickTalk. I’ve only run this two player version once, but it seems to work very well, and results in great conversations of a particular sort. Certainly the multiplayer version has been a great success.
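
Since the whole point here is that a system of rules is a technology, it can even be written down as one. The sketch below is purely illustrative – the real implementation is two people and a sand timer – and the function names, along with the explicit “stop” option, are my own additions rather than part of the rules above:

```python
TURN_SECONDS = 5 * 60  # the five-minute sand timer

def tick_talk(player_a, player_b, choose_topic, discuss, decide):
    """Run the two-player loop; decide(player) returns 'continue', 'claim' or 'stop'."""
    holder, other = player_a, player_b     # the timer sits in front of `holder`
    topic = choose_topic(holder)           # step 2: the holder picks a topic
    while True:
        discuss(topic, TURN_SECONDS)       # step 3: talk until the sand runs out
        choice = decide(other)             # step 4: the non-holder chooses
        if choice == 'claim':
            holder, other = other, holder  # they take the timer...
            topic = choose_topic(holder)   # ...and set a new topic
        elif choice == 'continue':
            continue                       # flip the timer, keep the topic
        else:
            return                         # wind the conversation up
```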

What makes something like TickTalk a technology?

It’s a technology because it has to be invented and developed like a technology, and it has results like a technology: Someone had to come up with the idea, the idea and practice had to be refined through use, and it provides us with capabilities greater than what we had prior to its invention.

Sure, TickTalk is just a system of rules, it doesn’t feel like a thing per se, but software is also just a system of rules, and we consider that technology to the point that “tech” is almost treated as synonymous with software these days (although it shouldn’t be).

What’s interesting about social technology?

Social technology has several key interesting features that I think distinguish it from most other technologies.

Firstly, it is built on extremely powerful components. Humans are incredibly flexible compared to most things we build technology out of. They interpret, they adapt, they fill in the blank spots. If you study the literature around complex systems, essentially every complex system that works does so because of humans running around behind the scenes papering over the bits where it doesn’t work and filling in the details. If you imagine programming with a “do what I mean” construct, social technology development isn’t far off. Social technology lets us jump over all of the things we don’t know how to do with computers but do know how to do with people. We could have been building artificial superintelligences thousands of years ago, but have still mostly failed to do so.

Secondly, social technology as I have defined it is incredibly cheap. Although some of it (e.g. therapy) admits and requires a lot of deep expertise, many interesting social technologies can be written down on a page, understood with one run through, and implemented based on everyday household items.

Thirdly, social technology is incredibly adaptable. Because humans are flexible and social technology is cheap, it’s essentially free to take pieces of social technology apart and put them together again in new configurations.

Finally, I think social technology is interesting because we can use it to solve real and pervasive problems with our lives. The fact that it’s cheap and adaptable means that we can start to deploy it everywhere almost immediately by looking at areas of our lives that could use improving and finding a way to slot it in. TickTalk has been immeasurably helpful in improving my life over the last year because of some of the conversations it has afforded, and I think there are many more opportunities like that available to people and currently not being taken.

Why do we need a social technology sector?

Social technology exists today. It’s certainly not a thing I’ve invented. What I find puzzling is that it does not seem to exist as any sort of unified field, and does not seem to have a substantial amount of industry associated with developing it.

There are a number of areas which are essentially social technology subfields. For example:

  1. Liberating Structures are a type of social technology for management and business.
  2. Social Choice Theory is a weird thing in that it is an elaborate theoretical study of how social technology for decision making might work but largely skips out on actually doing the work to go from science to technology (e.g. there’s no user experience work). Voting Systems are very much a social technology.
  3. Pen and Paper Roleplaying Games are an entire thriving social technology games sector that is largely disconnected from any broader practice. Maybe this will change with the rise of the professional dungeon master, who knows?
  4. Many methodologies (agile, lean, etc.) are arguably a type of social technology3.
  5. Bureaucracies are enterprise social technology deployments.
  6. A lot of social sciences can be kinda regarded as studying the social technology we implicitly build.
  7. Therapy is a social technology (or, really, a large family of social technologies).

So there’s certainly no absence of social technology in the wild, but it mostly seems to be very ad hoc and without any sort of unifying work or principle. You’re not going to find much if any unifying work between social choice theory, therapy, and liberating structures.

This matters because it means we’re not taking the active development of social technology seriously. This shows in a few ways:

  1. There is very little cross-pollination between these sectors because nobody realises they’re doing the same things. RPG developers know a lot about how to manage unruly players sitting around a table, but this knowledge rarely makes it into meeting design.
  2. There is a huge amount of low hanging fruit available to be picked.
  3. Where we develop social technology we do not do it with the understanding that it is a technology (with the partial exception of the games sector).

In particular because we do not deliberately design social technology, most social technology is very poorly designed.

Have you noticed how many startups pivot? One major reason for that is that it’s really hard to build the right thing. When building new technology you will run into at least one and usually several of the following:

  • You misunderstood what the users wanted
  • You misunderstood what the users needed
  • You misunderstood what the users were willing to pay for
  • You failed to predict how people would use your technology
  • The solution you imagined doesn’t actually work
  • You ran into a hard problem that you couldn’t have anticipated

And many more. If you just build something and hand it to users, the results will rarely go very well. To build something that works you need to prototype, playtest, and iterate. Anything else is doomed to failure or, worse, mediocrity.

Almost none of our widely deployed social technology is designed in this way, and it shows. The best we can hope for is that because it’s easy to adapt and copied widely it’s subject to a certain amount of evolution over time, so the worst of the rough edges are knocked off and some of the best ideas are preserved, but often what happens instead is that we take something that doesn’t work very well and lock it in as a cultural tradition, with all its flaws intact.

Take my example of TickTalk. TickTalk comes from me taking Lean Coffee and playtesting it a bit, deciding it wasn’t very good4, fixing the bits I didn’t like and then playtesting it some more until we’d refined it down to a system that worked well.

Another thing that the lack of deliberate design means is that we often do not make good theory-informed choices of social structures. Take for example my suggestions in Democracy for Lunch that you should use random ballot or approval voting for most group meal decisions. This is an example of providing a piece of social technology (a voting system) for a commonly encountered problem. This happens fairly rarely. To the degree that most people use social technology to decide on lunch options, it tends to be a mix of dictatorial (suboptimal: it does not involve the whole group), plurality (the worst voting system) and consensus forming (comparatively high effort for a comparatively low importance decision).
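
Both of those systems are simple enough to run with a show of hands, but a short sketch pins the rules down precisely (the names and the menu below are made up for illustration):

```python
import random
from collections import Counter

def random_ballot(first_choices):
    # Everyone names one option; one person's choice is picked uniformly at
    # random. Proportional in expectation, and there is no incentive to vote
    # tactically.
    return random.choice(list(first_choices.values()))

def approval(approvals):
    # Everyone lists every option they'd be happy with; most approvals wins.
    tally = Counter(option for options in approvals.values() for option in options)
    return tally.most_common(1)[0][0]

votes = {"Ana": "sushi", "Ben": "pizza", "Cat": "pizza", "Dee": "falafel"}
print(random_ballot(votes))        # any of the named options, chosen fairly

likes = {
    "Ana": {"sushi", "falafel"},
    "Ben": {"pizza", "falafel"},
    "Cat": {"pizza"},
    "Dee": {"falafel", "sushi"},
}
print(approval(likes))             # falafel – acceptable to the most people
```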

Because we don’t take the problems of social technology seriously, we end up in a situation where we often lack basic social technologies, or the ones we have and use are not very good.

Having an actual social technology sector would help fix this by making development of social technologies deliberate. The key benefits of this are:

  1. It turns the creation of social technology into a problem that we can work at getting better at, rather than something that happens by accident.
  2. It would provide resources that are lacking for intensive play testing of new social technologies, providing us with more refined systems.

Why don’t we have a social technology sector?

Generally the key reason any given thing doesn’t exist is just that nobody has made it yet, but I think there are a couple of major factors that mean that it’s a bit tricky to develop such a sector.

The first is that it falls into an annoying middle ground: it is hard to make money out of, because once developed social technologies are cheap to copy and hard to stop people copying, and it is hard to do academic research on, because discovering universal truths about human nature is not an easy problem. As a result, it’s hard for there to be a sector per se, because industry and academia will both struggle to create it.

One way to make money out of it is basically management consultancy and writing books. The problem with doing this is that for the most part it doesn’t seem to matter if the social technology you’re developing actually works, because people will turn it into an ideology and make money off it either way. As a result you’re better off investing money into marketing than technology development even if you work with social technology5.

Academic research on the other hand struggles because running experiments on people is hard. A good example of this is the research on brainstorming vs brainwriting (the difference is basically how much you use speaking vs writing). You can come to some tentative conclusions, but really it should have just been a bunch of play testing sessions. These wouldn’t come to Truth in the same way, but that’s ok – we’re not really interested in truth, we’re interested in constructing technology that more or less works.

The second is that I think people are resistant to the very idea. Social technology feels like something that we shouldn’t need, and a bit non-serious. Getting together to obey a system of rules in aid of some goal feels a bit game-like to people – aren’t we past that?

The opposite of that is that people take the social technologies they do adopt very seriously indeed: they pick them up and turn them into religions, investing thoroughly in every aspect of the technology. Listen to someone telling you about Kanban, or Lean, or Democracy (which always means the specific version of Democracy in their home country of course). They often struggle to conceive of it being legitimate to do things any other way, which impairs the development of these technologies.

How do we get to a social technology sector?

If you gave me a couple million pounds to bootstrap it, I’d put together a small cross-disciplinary team (say, a management consultant, a role playing game designer, a couple of academics) and start trying to build a business going into companies and developing social technologies with them to try to streamline their workflows. This is not particularly my current plan, but hey, if you have a couple million pounds you want to send my way I wouldn’t say no.

The more accessible route and the one that I hope we adopt is to be the social technology sector you want to see in the world.

Social technology is cheap, adaptable, and easy to copy. This makes it extremely amenable to grass roots experimentation and open source development.

Step one of this is to let go of the idea of it being weird. We can start to be more deliberate about the social structures in our lives, and from there we can start to experiment with them, and when we do, we can write about them and share them with the world. Other people can then take our technologies and experiment and build their own.

If you’re wondering where to get started, my recommendation would be to find a couple of interested friends and start a structured discussion with them. Either TickTalk or Conversation Cafe are good starting formats.

You could even go meta while doing that – start thinking about other areas in your life where you could take a more deliberate, explicit approach and ask what technologies might enable that.


The Ethics of False Negatives in Interviewing

When interviewing people, there is one significant ethical component to the decision making process that is rarely made explicit, and that is often done badly as a result. I’ve written about it previously (here, here, and here), but you don’t need to read those – this is a standalone piece that sums up my current thinking.

The problem is this: The natural shape of the problem of interviewing encourages you to hire people who are similar to yourself. As a result, the aggregate effect of seemingly innocuous and effective decision making processes is often to amplify structural inequalities in your industry.

It will take a while to unpack how this plays out, so bear with me.

I’ll start with a simplification and only consider the case where you’re hiring one person for one role. Many real interview processes don’t play out this way – you might hire multiple people, there might be some flexibility about roles, etc. The dynamic is clearer in the one person to one role case, so I’ll stick to that, but most of the same considerations apply in general even if the details change.

When you are running such an interview process, your goal is to sift through a lot of candidates and hire one who would be good for the role (The question of what “good in that role” means is very complicated and could easily take up an entire post on its own. I intend to ignore it entirely, as others are much better placed to talk about it than I am). Effectively, you see a series of people, and make a yes/no decision about hiring each of them, stopping when you’ve hired enough people.

When making this yes or no decision to hire someone, there are two ways it can go wrong: You can say yes when you should have said no (a bad hire, or false positive) or you can say no when you should have said yes (a bad rejection, or false negative).

The most important thing to understand about interviewing is that you are extremely concerned with the false positive rate, and only somewhat interested in the false negative rate. That is, you want to ensure that the people you hire would be good in that role, and you are less concerned with the question of whether the people you reject would have been bad in that role.

Why? Well, because the cost of a bad hire is almost always much higher than the cost of keeping on trying unless your standards are incredibly stringent. You pay an opportunity cost, but as long as there are a lot of candidates, the opportunity cost of rejecting any one good candidate is low. All you are really paying is the cost of interviewing, so as long as the base rate of good candidates is tolerably high and your false negative rate isn’t too high, it’s mostly OK.
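
To put rough, made-up numbers on that: if a fraction p of applicants would be good hires and you wrongly reject a fraction f of those, then (ignoring false positives) you expect to interview about 1 / (p × (1 − f)) people per hire – so even a large false negative rate mostly just shows up as a bigger interviewing bill.

```python
# Rough arithmetic for the claim above, with made-up numbers: a 20% base rate
# of good candidates and a range of false negative rates.
def expected_interviews_per_hire(base_rate, false_negative_rate):
    # Each candidate results in a hire with probability base_rate * (1 - fnr),
    # so on average you interview the reciprocal of that per hire.
    return 1 / (base_rate * (1 - false_negative_rate))

for fnr in (0.0, 0.25, 0.5):
    print(fnr, round(expected_interviews_per_hire(0.2, fnr), 1))
# 0.0 -> 5.0 interviews per hire, 0.25 -> 6.7, 0.5 -> 10.0
```

Note that the cost to the candidates you wrongly rejected never appears anywhere in this calculation.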

Why does it work out this way? It’s because the cost of a false positive is high and visible, while the cost of a false negative is low and invisible.

Because you have no further contact with most rejected candidates, a false negative and a true negative are functionally indistinguishable – if you could tell they were a false negative they wouldn’t be a false negative! As a result, if you have a high false negative rate it won’t look like a high false negative rate, it will look like you’re getting a lot of bad candidates and the cost of interviewing is correspondingly high.

In contrast, if you hire a candidate, you now have a lot more information about them. You will find this out over the coming months and years, and will eventually become reasonably certain as to whether your hire was good or not. This means that false positives will almost always eventually become visible to you, but they will do so at great cost! They’ll have spent a significant time as dead weight (or active toxicity) in your organisation, and this opportunity cost is large: You were hiring because you needed someone to help you out, and you’ve now spent months or years not getting that benefit.

As a result, every false positive is both extremely costly and eventually discovered. This means that you have both a strong incentive to keep it low, and good feedback that allows you to do so.

As a result you can roughly think of “rational” interview design as about minimizing the false positive rate while keeping the cost of hiring to some reasonable level.

Before I go on I want to emphasise that this is very reasonable behaviour. Asking a company to ignore or substantially raise its false positive rate is not going to go down well, and is more likely to result in a kind of theater of signalling where they find more complicated or worse ways to get the same benefit.

However, it’s worth thinking a bit more about the structure of this process, and to try to shift it a bit, by adding new constraints and incentives.

Why? Let’s turn to a fact that we have been ignoring so far: Candidates are people, with their own needs and motivations.

A candidate is similarly paying a cost for interviewing, but their interests in the error rates are reversed. The cost of false negatives is almost entirely paid by the candidates, because a false negative means that they are denied something they wanted (and deserved), and there is a high emotional impact to rejection. With interviewing, they also have a significantly higher opportunity cost, because there are fewer companies they can reasonably apply to than you have candidates applying (this is less true in other examples of this dynamic).

As a result, false negatives are not nearly as free as they looked. Instead they are what economists call an externality – something that looks free because someone else, in this case the candidate, is paying for it.

How much should this matter to you as an interviewer? Well, some, certainly. If you want to behave ethically as an interviewer and as a company you do need to at least consider the harm to the candidate, but anecdotally it seems to be a thing that people are vaguely aware of and consider to be an acceptable cost – after all, most interviewers have been interviewees, so they have some idea of what it’s like to be on the receiving end. So for most companies there is already at least a certain amount of respect for the candidate’s time (large, self-important companies, with long multiday interview processes notwithstanding). It’s a thing that could be considered more, certainly, but it’s not a huge moral crisis.

Unfortunately “respect for the candidate’s time” does not fully capture the cost of false negatives, because not all false negatives are created equal.

We now need to unpack another thing that we’ve been ignoring so far: Interviewing is not a magic black box that spits out an answer of good or bad, it’s a reasoning process based on interacting with the candidate.

In an ideal world you would have a perfect simulation of what it would be like to work with the candidate, and hire them if that simulation was positive, but in the actual interviewing process where you have a small amount of time you basically just ask them some questions, get them to perform a task, and make your best judgement based on the evidence.

Again, this is fine, there’s literally nothing else you can do, so you should do that, but you shouldn’t do it uncritically, and it is worth thinking in more detail about the specifics of the reasoning process you have.

The core problem is that other interviewers are likely to reason in similar ways to you. This means that individual candidates may experience a much higher false negative rate than average.

Take, for example, someone who finds interviews very stressful and thus underperforms in them compared to their actual job ability. They will experience a significantly higher false negative rate than average, and experience a correspondingly higher cost to interviewing1.

When the variation is low, it’s tempting to not worry about this that much – so what if a candidate has to interview at twice as many places to get a job? That’s not your problem, and it doesn’t seem to be that big a deal.

Unfortunately there’s no reason to expect the variation to be low, and if some people find they are rejected vastly more often than average, those people are at a significant disadvantage in life.

When you participate in a system which significantly disadvantages people like this, you need to think long and hard about whether you are doing them an injustice. I think in this case we clearly are.

In the worst case you could imagine people who will never get hired even when they would be very good in the job. These people will experience a 100% false negative rate no matter how good your false negative rate is on average.

How might such groups arise? Well, racism for starters. In a society with pervasive and overt racism (e.g. apartheid South Africa, the USA during segregation) you might be entirely excluded from jobs simply because of your race. In a society where people pay lip service to not being racist, the numbers will look less extreme, but as long as there is significant prejudice against a group that group will find it harder to get hired.

Here the corresponding ethical advice seems easy, right? Don’t be racist. Job done. Unfortunately, it’s not nearly so simple as that.

The problem is that you can get broadly similar effects without any active prejudice on your part. Suppose you had a test that gave you 100% accuracy on 80% of the population – that is, for those people the test says you should hire the person if and only if you should actually hire the person – while the remaining 20% of the population always fail the test. If you ignore the population effect, hiring based on this test looks very good. It has no false positives, a fairly low false negative rate, and permanently marginalizes a fifth of the population.
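
Putting made-up numbers on that thought experiment shows how well it hides: the aggregate statistics look perfectly respectable while one fifth of candidates is shut out entirely.

```python
# The hypothetical test above, with invented numbers: 30% of candidates would
# genuinely be good hires, the test is perfect for 80% of the population, and
# the remaining 20% always fail regardless of ability.
p_good = 0.30
p_covered = 0.80
p_excluded = 1 - p_covered

false_positive_rate = 0.0                        # nobody bad ever passes
missed_good = p_excluded * p_good                # good candidates who always fail
overall_false_negative_rate = missed_good / p_good

print(false_positive_rate)            # 0.0 – the test looks excellent here
print(overall_false_negative_rate)    # 0.2 – "only" 20% of good candidates missed
# ...but that 20% is always the same people: the excluded fifth of the
# population experiences a 100% false negative rate, every single time.
```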

How plausible is it that such a test exists?

Well, that’s an extreme case, but when hiring for software developers I think a reasonable case can be made that looking for open source contributions is a decent approximation to it. It’s certainly not the case that all open source developers are good hires, but looking at someone’s open source code is a pretty good signal of whether they’re good at software development. However, this means that people who don’t contribute to open source get left out. If you use open source as a necessary signal, you’ll exclude a whole bunch of people, but even if you use it as a sufficient signal, it gives people who don’t contribute to open source a significant handicap, and open source contributions very disproportionately come from well off white men, so there’s a strong structural bias there.

This should be the point where I tell you the right thing to do here, but I honestly don’t know. I’d certainly say you shouldn’t require open source contributions, but I don’t actually think that it’s reasonable to say you should ignore them. Primarily because even if I told you that you’d ignore the advice, but also because that in itself would exclude a bunch of people who e.g. have no formal education and want to be able to use their open source contributions as signs of competence.

Even if you manage to rid your interview process of tests like this that are structurally prejudiced against a group, it will be hard to entirely remove biases. The problem is that fundamentally selecting for minimizing false positives gives an intrinsic advantage to people who you understand how to interview. These will disproportionately be people who are similar to you.

Suppose you are interviewing someone, and you want to get a sense of whether you’d like working with them. In order to do this you need to have a conversation. If they are someone who you share a lot of culture with, this is easy – you have things to talk about, you share a language (both in the sense that your first languages may be the same, but also just in the sense of shared cultural references, jokes, etc). It’s easy to talk to them.

In contrast, someone from a very different culture will take more work – you need to establish common ground and figure out a mode of conversation that works well for you. If you were working with this person you would get a long period of time to do that, but in an interview you have an extremely short time, and as a result the conversation will often flow less naturally than it might. As a result you are less able to tell whether you will actually get along well with this person when working with them, and this again gives the people who are similar to you an advantage.

This pattern repeats itself over and over again: If you are in familiar territory, you know how to predict accurately in that territory, so when you come to try to reduce false positives you will automatically select for the familiar – the unfamiliar is uncertain, and so your false positive rate goes up.

How do you get familiar with what working with someone from a particular group is like? Well, you work with them. Which you’re less likely to do if there’s some significant structural prejudice against them in the interview process. In this way, structural prejudices tend to reinforce themselves over time – we hire people who are like us, and this increasingly refines our models of which people are good to work with into ones that are based on that familiar set of people.

None of this is to deny that there are myriad other structural prejudices in interviewing; this one is just important to highlight because of how insidious it is: It doesn’t operate through any negative belief about the group being disadvantaged, it acts solely through uncertainty and a reasonable set of behaviours following incentives, and so even people who are genuinely committed to doing better can fall victim to it without noticing.

Naturally at this point I should tell you about the solution. Unfortunately, that’s mostly a hard pass from me, sorry. The general shape of the solution is “get better at interviewing”, and I’m not actually good enough at conducting an interview process to really offer advice on how to do so.

So, to the degree I have advice, it’s this: Most interview processes I’ve been a part of (on either side of the table) have been quite slapdash, and that’s only going to exacerbate this problem. Given the impact of the natural pressures of interviewing, this has to change. Interviewing is a hard problem, with huge social impacts, and it deserves to be treated as such.

As a result, I would like people to think much harder about designing their interview processes, and do some reading and learn how to do it properly.

Most of this has to be fixed at the company level, but if you have any good resources on e.g. reading, courses, etc. that people can take, please feel free to leave comments on the blog or let me know through some other medium.
