Advice to new speakers

Public speaking is a great skill, and one that I strongly recommend people develop. The ability to get up in front of a crowd and speak to them is surprisingly useful in its own right, and besides teaches you a lot about communication and confidence.

Public speaking is also terrifying for new speakers, and as a result they often try to play it safe. This is entirely reasonable – when you are nervous, making your environment safer is the right thing to do. Unfortunately, new speakers are often wrong about what makes a talk safer.

Here is what you as a new speaker should do to actually make your talk safe:

  1. Under no circumstances read your talk from a written version of it. This includes reading your slides.
  2. Pick as short a time slot as you can find. Lightning talks are ideal. Twenty minute slots are fine. Under no circumstances give your first talk in an hour long slot.
  3. Practice talking about that topic repeatedly, refining your talk and slides until you are 100% confident that you can fit it in the time slot.
  4. Wear a watch on stage so you can keep track of time.
  5. Pick a niche topic that you find very interesting and that it’s unlikely many people know much about.
  6. Don’t take questions.

In contrast, many new speakers pick an easy topic, read from their notes, and overrun their time slots. These are about the only things you can do as a new speaker to lose audience sympathy. They can tell if your speaking skills aren’t polished, but they won’t care much – everyone either has been a new speaker or hasn’t yet given a talk and is very impressed that you are. If you keep them interested and don’t inconvenience them, the talk is a success.

In my post on satisficing I listed the following as the success criteria for giving a talk:

  1. It has to make sense.
  2. It has to fit its time slot.
  3. It has to convey something interesting.

I should add a fourth one for new speakers: It shouldn’t stress you out too much to give it. The above advice is designed to make it as easy as possible for you to satisfy all of these requirements.

I’ll now elaborate a bit more on how it does this.

No Reading Your Talk

The reason you should not read from a prepared version is that it will make your talk extremely hard to follow. It’s not impossible to give a good reading, but it is a legitimately very hard task, and you almost certainly don’t have the skill set to do it.

Doing this well requires two skills:

  1. Having a natural voice while reading out loud.
  2. Writing speech that sounds natural.

Most people are not very good at either of these.

Reading out a prepared speech will usually cause you to speak in a flat monotone, with minimal variation in pitch and speed, no pauses, and extremely restricted body language. This is almost impossible for a listener to understand, because all of those things are major cues that we use to understand spoken words.

Avoiding this is harder still if your prewritten talk is not actually well written for being spoken, which it probably isn’t. Writing is more measured, and tends to be more formal and stilted when read out loud. Most writing would be improved by making it suitable to be read out loud, but you don’t want to make your public speaking dependent on improving your writing skills.

In contrast, if you design your talk to be spoken from the start, you will avoid both of these pitfalls automatically: You already know how to speak, and you can lean hard on that and refine it further.

This doesn’t mean you speak without planning – rehearsing your talk is very important – but it does mean that your talk will be partly spontaneous and will not be the same each time you give it. That’s fine, that’s how talks work.

You can of course prepare cue cards if you’re worried about losing track of your talk, and you can read out anything you need to quote. I don’t personally do this, but if it’s something you find helpful then it’s a very different proposition to reading out your whole talk.

You can also write out a version of your talk if you would find it helpful for getting your thoughts together and planning it out. I do this occasionally. The important thing about doing this though is that you write it out and then don’t use it – the process of writing it is what was important. You are not giving the talk by reading it, or even attempting to memorise the wording you used in it.

Practicing Time Management

Time management for talks is a hard skill and I would like to give you an easy way out of it as a new speaker, but I can’t.

Unfortunately, it’s also a vital skill set: You are obliged to fit your time slot, and you will make things worse for everyone if you do not. As long as you fit in your time slot, everything else is forgivable, but overrunning at best inconveniences everyone and at worst significantly disrupts the schedule, and you will make the organisers mad at you.

As you get more experienced as a speaker you will (if you pay attention) get better at judging the timing of talks and it will become more automatic, but in the beginning there is no substitute for practicing your talk.

The specific advice I gave for time management was:

  1. Pick a short time slot.
  2. Practice your talk until you are 100% confident that it fits the time slot.
  3. Wear a watch on stage so you can keep track of time.

All of this is designed to make your life as easy as possible while getting good at this vital skill.

Short time slots are much easier to get right, because it’s easy to pad or cut a minute or so, but if your timing is off you will tend to drift in a systematic way over the course of your talk. If you’re 5% off on a 20 minute talk then you have a minute to correct for. If you’re 5% off on an hour long talk, you have three minutes to correct for. This may not sound like much, but rushing through the conclusion of your talk in the last thirty seconds when you thought you had three minutes left is incredibly stressful, and the conclusion of the talk is the bit people will remember most.

Wearing a watch on stage also helps you keep track of timing so you can adjust on the fly – if you notice halfway through your time slot that you’re not quite yet halfway through your prepared material, you can adjust by speeding up a bit, cutting digressions, etc.

Pick A Niche Subject

It is possible to do an interesting talk about something fairly commonplace and widespread, and it’s possible to do interesting and useful tutorial talks, but it’s quite hard, and my recommendation is to wait on it until you’re more comfortable speaking. Even today I tend towards niche subjects, because it guarantees an easy win.

If people come out of your talk having learned something interesting and new, you won at speaking. If your subject is something interesting and niche, this is practically guaranteed because by definition a niche subject is something that not many people know about.

There are a bunch of ways to find good niches:

  • Do a talk about something you’ve personally experienced – e.g. interesting debugging war stories.
  • Dig into the details of something commonplace to a much greater degree than most people know about – e.g. an aspect of language internals, or how some piece of software works.
  • Pick a weird interest of yours, or something that you’ve learned in the course of a specialised job.
  • Talk about something from another field you’re in and how it relates to the conference subject (or don’t bother to relate it if the conference has a relaxed policy on talk subjects!)
  • Talk about a skill you wish were more widespread – e.g. communication and writing skills.

It’s a little hard to give specific advice about this because everyone is different in what niches they know and like, and what they’d feel comfortable talking about, but my recommendation would be to write down a long list of things you could maybe give talks about (aim for quantity not quality: Don’t worry too much about whether it’s a good idea until you’re actually ready to submit it to a CfP!) and then shop some around friends and coworkers to see what they think: The ideal reaction is a faintly puzzled “Huh. Yeah, I guess that would be a good subject for a talk.”

Don’t Take Questions

At some point in your talk (probably the end), say something along the lines of “I will not be taking questions at this time, but if you want to talk to me about this I’ll be around for the rest of the conference”. If you don’t feel confident saying it, that’s fine, ask your session chair to say it for you.

It’s a bit of a power move, so it makes sense to get help, but it’s one that is absolutely the right thing to do for most talks, and by doing it for your talk you encourage others to do the same for theirs.

The reason for doing this is twofold: Firstly, answering questions on the spot is hard and stressful, so why do it if you don’t need to? Secondly, most people are bad at asking speakers good questions, so the Q&A session is generally low value and is safe to ditch.

Note that you will probably not be able to pull this one off at an academic conference. It’s genuinely fine at most industry ones.


Public speaking is hard, and it’s OK to be stressed about it – it’s practically expected. Do whatever you need to do to make your life easier. Try to stay calm on stage if you can, but don’t worry about feeling nervous if you can’t.

Ultimately, the audience is on your side, and nothing too bad will happen even if things go wrong.

Your goal as a first time speaker is to give a pretty good talk, and to become a second time speaker, and a third, and a fourth, and so on. Take any affordances you find helpful to get to that. Those can be the ones in this post, or they can be anything else that works for you.

This entry was posted in Uncategorized on by .

Vocabulary Building: Satisficing

Epistemic Status: Not the best explanation of this that there could be, but a good enough explanation.

This post is a bit of an experiment, in that it exists mostly to teach you a word I find useful (in accordance with the one weird word rule), and explain why it’s a useful word to have.

The word (as you might guess from the title) is satisficing. Satisficing is the strategy of trying to find a solution to a problem that is good enough, without worrying about whether the solution is the best.

The word is a portmanteau (a blend) of the words satisfy and suffice. It sounds a bit funny, so is probably not the best word that could have been chosen for the concept, but it’s the one that managed to get traction so it’s probably a good enough word for it.

Satisficing is defined in contrast to optimising, where you are seeking to find the best solution. You can think of them both in terms of the following brute force solutions: An optimiser tries every solution and at the end picks the one that is best; a satisficer tries solutions until it finds one that is good enough and then immediately stops and uses that.
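
To make the contrast concrete, here is a minimal Python sketch of the two brute-force strategies described above (the function names and the threshold argument are illustrative, not from any library):

```python
def optimise(candidates, score):
    """Examine every candidate and return the best-scoring one."""
    best, best_score = None, float("-inf")
    for candidate in candidates:
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best


def satisfice(candidates, score, good_enough):
    """Return the first candidate whose score meets the threshold."""
    for candidate in candidates:
        if score(candidate) >= good_enough:
            return candidate  # stop immediately; later candidates are never examined
    return None
```

The optimiser always inspects every candidate; the satisficer stops as soon as the threshold is met, which is why it is usually so much cheaper.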

It may seem that optimising is obviously better than satisficing, because optimising gives you the best answer and satisficing merely gives you an OK answer, but in fact there are many circumstances under which one should prefer to satisfice.

The two main advantages of satisficing over optimising are:

  1. It’s much less work.
  2. You can usually satisfice for multiple things, but you can rarely optimise for multiple things.

The first part is fairly straightforward: Any optimising process for finding the best solution can be turned into a satisficing one by just stopping early as soon as it finds a good enough solution, so satisficing can never be more work than optimising – it skips much of the process of finding the best solution and verifying that it is the best.

The 80/20 rule (80% of the benefit comes from 20% of the work) is a rule of thumb based on this: If you set your threshold for good enough at 80%, then by satisficing instead of optimising you can save 80% of the work.

Another example of satisficing solutions is coin flipping to make trivial decisions. If the difference between two choices is small, you might as well just pick arbitrarily if the work of deciding is going to be more than benefit of picking right.

The low cost of satisficing is important, but the fact that it combines well is perhaps more interesting.

The big problem with optimising is that it results in solutions that are fragile – almost any change you make to them will mean they are no longer the best solution. This means that attempting to optimise for one thing will usually prevent you from optimising for another thing, unless the two are very tightly related.

To see this in practice, say you want to minimise the cost of some widget. In order to squeeze every last penny out of the production process you end up making a lot of decisions in support of this, and the result is you are now very constrained. You have almost no wiggle room. Suppose you now want to also maximise quality – almost every change you can make to your hyper-optimised solution will make it more expensive because you spent so much effort optimising it, so by trying to improve quality at all (let alone optimise for quality) you now end up exceeding your optimised cost.

In contrast if you set yourself a budget and a quality threshold, these two might be in tension but they’re not necessarily mutually exclusive. You can’t necessarily satisfy every combination of them (there’s a reason for the scope/cost/time project management triangle), but by giving yourself more slack you have a wider range of solutions, so it is at least possible to find reasonable combinations that you can satisfy all of.

You can also use some combination of goals to try to optimise for multiple targets: e.g. deciding you’re willing to pay 10p per unit quality, so now you’re optimising for quality – cost / 10. This is a perfectly reasonable thing to do if your optimisation problem is one that is sufficiently well defined that you can hand it to a computer (and you are prepared to tinker with your weightings a bit until you’ve found a solution you like – itself a form of satisficing!). The result will not simultaneously optimise for all the scores, but it will generally be pretty good at all of them because it expresses how willing you are to trade off different scores against each other.
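
As a small illustration of such a weighted combination (the widget designs and all numbers here are made up), in Python:

```python
def combined_score(quality, cost_pence, pence_per_unit_quality=10):
    # Willing to pay 10p per unit of quality, so one unit of quality
    # offsets 10p of cost: we maximise quality - cost / 10.
    return quality - cost_pence / pence_per_unit_quality


# Hypothetical widget designs trading quality against cost:
designs = [
    {"name": "cheap", "quality": 3, "cost": 20},
    {"name": "balanced", "quality": 6, "cost": 45},
    {"name": "premium", "quality": 9, "cost": 90},
]
best = max(designs, key=lambda d: combined_score(d["quality"], d["cost"]))
# best is the "balanced" design: neither its quality nor its cost is
# individually optimal, but the weighted combination is.
```

Tinkering with the weighting (the 10p figure) until the winning design looks reasonable is the satisficing-over-weightings loop mentioned in the parenthetical above.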

Another example of having multiple competing goals is compromise and cooperation. If you have two people trying to achieve their own outcomes, it’s rare that you will be able to achieve agreement about the best outcome, but commonly there is some shared outcome that is good enough for everyone. Getting the group to agree to that might not always be easy, but at least the satisficing solution exists at all!

These two benefits of satisficing often play well together, because the fact that satisficing can be cheap means you can play around with a number of different combinations by repeatedly tweaking your requirements. This can be a useful exploratory process for finding out what you actually want out of the situation.

An example where I often deploy this in practice is public speaking. The basic constraints to satisfy for a talk are:

  1. It has to make sense.
  2. It has to fit its time slot.
  3. It has to convey something interesting.

As long as a talk satisfies those requirements, it’s good enough and I’d be happy to give it, but I tend not to stop there. For example each time I give a talk I try to improve my slide game a bit over the previous talk (slides are one of my weak spots – a visual designer I’m not). Once I’ve practiced it, I spot things that are weaknesses and try to improve on them. After a couple of iterations, I’ve found a talk where I think it’s good enough. So satisficing doesn’t have to mean you stop looking for better solutions and improving over time, because what counts as “good enough” is under your control.

In general, I find satisficing is a much less stressful strategy for many things like talks, blog posts, papers, etc. Attempting to optimise creates a constant feeling that nothing I do will ever be good enough because it could always be better, but explicitly aiming for satisficing means that I can be happy with good enough and work to improve my baseline of good enough over time.


Being an example to others

Note: I realised I missed my old conversational style of writing, so I decided to resume it on occasion, including for this post. I will not be using it for heavier posts but I thought it would be nice to be able to code switch.

You know that thing you do where you hold yourself to standards which you would never dream of holding other people to?

(If you don’t know that thing, this post may be less useful for you, but it is a trait that is very common among people I know, so I’m confident that this post has an audience.)

Anyway, that works much less well than you think it does, and you should probably consider walking it back a bit.

The basic problem with this is that people model their behaviour on those around them. If you are seen holding yourself to a standard, people around you will observe this and follow suit, even if you tell them not to, so by holding yourself to that standard you are implicitly holding other people to it, even if you don’t want to. This is especially true if you are prominent in a community, but it’s true for everyone.

So I suggest the following standard for good behaviour: Behaviour is good not only if it is good in and of itself, but also if it contributes to a culture of good behaviour.

Behaviour that is good in and of itself but which creates a bad culture should be looked on with extreme caution.

What do you think of when you hear the phrase “I hold myself to standards that I wouldn’t hold anyone else to”? Does it sound like the speaker is being kind to themself, or does it sound like they are probably beating themself up over something that really they should just chill out a bit over?

In my experience it is very much the latter scenario, and if you find yourself doing that I would like to encourage you to try to stop holding yourself to that standard.

A particularly pernicious example of this is people not prioritising their own needs. Prioritising others’ needs over your own feels good and virtuous – you are sacrificing yourself for others, which many people think of as practically the definition of virtue.

The problem is that in doing so you are contributing to an environment in which nobody is prioritising their own needs. When you work yourself to exhaustion, you are not just working yourself to exhaustion you are teaching other people to do the same.

Conversely, behaviour that is neutral and/or mildly selfish on its own merits may in fact be very good if it creates a culture in which everyone feels like they have permission to do the same.

To continue the example of needs: by asserting and respecting your own needs you are giving everyone around you permission to do so. What would you want a friend who is looking exhausted and run down to do? You’d like them to take a break, right?

The problem is that if they take a break without feeling that it is OK to take a break, they will mostly just feel guilty about that. That might still be better than not taking a break, but it’s not a pleasant experience.

What this means is that if you want your friends to take a break, you need to create a culture in which taking a break is seen as OK. In order to do this, you need to take a break yourself!

I find this notion of permission very powerful as a route out of guilt over “selfish” behaviours: you want the best for others, so you want to give them permission to seek it out for themselves, but this requires a culture where that is acceptable, and that requires you to exemplify the behaviour you want to see in others, so by granting it to others you in turn must grant yourself permission to seek the best for yourself.

For many of us, empathy for others is easier than empathy for ourselves, but by looking at the problem through this lens of cultures of behaviour, extending empathy to others requires us to extend it to ourselves. You can think of this as a kind of reversal of the golden rule: Do for yourself as you wish others would do for themselves.

Some examples where I regularly use this in practice:

  • I ask “stupid” questions – on the one hand I don’t want to be the person who is wasting everyone else’s time, on the other hand I do want everyone who is confused to be able to ask questions to resolve that confusion. By asking questions myself, everyone else also feels more able to do so.
  • When I am at a social event and everyone is having a good time but also I am very tired and want to go to bed, I say “Thank you for a lovely time, but I am very tired and want to go to bed. Good night, everyone”. At this point half a dozen other people go “Actually, me too” and also go to bed, because they’ve been waiting for permission to prioritise their own tiredness.
  • When something is making me uncomfortable, I say that I am made uncomfortable by it. I could try to tough it out, but I wouldn’t want others to tough it out, so by stating that I am uncomfortable everyone else who is also uncomfortable is more able to say the same, both now and in future.
  • When there is something I would like to happen, I tell people that, so that other people also feel able to ask for the things they like. (In truth, this is the one I find the hardest, but it’s important).

This is also a good place for positive use of privilege: Some of these (especially the asking stupid questions one) are much lower cost for me to do because I’m a moderately high status white guy.

I can’t promise this will magically fix all the guilt that you experience over being kind to yourself, but I’ve found it to be an excellent start.

Ideally of course you should be kind to yourself because you are a person and people deserve nice things, but in the mean time you should also be kind to yourself because the ones around you who you care about are also people, and you need to show them that people deserve nice things.



Epistemic Status: Somewhat speculative, mostly descriptive.

What is gender?

That is an excellent question, but it seems to be a very hard one to answer well. Instead I’m going to ignore it. This post should work for most “reasonable” notions of gender.

Rather, this is a post about how categorising people into genders affects how we conceptualise them, and how this leads to the creation of gender norms that we then enforce.

I’ll mostly be focusing on binary genders (male and female) in this post, but I’m not making any assumption that those are the only ones, only that they feed heavily into how we reason about gender.


Given a group, we tend to form inferences based on group membership. This is a perfectly reasonable thing to do – if someone is from France, we tend to assume they speak French. When someone votes for a particular party, we tend to assume they support many of that party’s policies (or at least reject other parties’ policies more).

Unfortunately what starts as a set of perfectly reasonable inferences often then plays out very badly in practice – reasonable inferences get exaggerated, and feed in to how we construct the social norms we enforce, often harming the people we stereotyped.

We do this in particular with genders. If a trait is particularly prominent in people we gender a particular way, we form stereotypes around it, and the trait itself becomes gendered.

For example, consider height and strength. It is simply true that men are typically taller and stronger than women. That’s not to say that any given man is taller or stronger than any given woman (many men are short and/or weak), but looking at group averages the link between gender and height and strength is fairly clear.

We then reverse this stereotype. If it’s true that men are typically stronger, then it’s true that if someone is stronger then they are more likely to be a man. Thus strength becomes gendered – the trait becomes used as a marker of masculinity.

In and of itself this is a perfectly reasonable inference procedure. It’s literally true that if someone is stronger they are more likely to be a man. The problem is that we now erase the underlying data and simply treat strength as intrinsically manly, labelling strength as more masculine even once you have surpassed the typical strength of a man.

These social expectations then lead to enforcement. Men are shamed for being weak and women are shamed for having visible muscles because they look too manly. What started with a reasonable inference about differences between groups has turned into a social norm where everyone is forced to construct their gender to exaggerate the differences.

This enforcement in turn means that the group differences are larger than those we started with – if most people are expending effort to seem more masculine or feminine, the observed difference between them on that gendered trait will be larger than it would be in the absence of enforcement.

Thus we engage in a sort of “gender inflation”, where we take our initial notions of gender and expand them out into a kind of social halo around our original gender categorisation. This inflated gender manifests both in our social expectations and in the actual data we observe.

Small genderings become large

Because of this gender inflation, it is extremely normal to have gendering for traits which is more or less invented out of thin air: a small gendering occurs, which we then inflate into a large one.

These small genderings can come up in all sorts of ways, but the easiest way is just chance. Culture is formed mostly out of memetic evolution (that is, people copy behaviours from others, and retain behaviours that in some sense work well), and as a result is highly contingent – often the reason why people behave in a particular way is the result of some random variation years back. There’s no intrinsic difference that leads to, say, the distinction between English and French, we just made different choices generations back which have been built on over time.

This contingency of culture can often lead to genderings because of some degree of homosociality – the tendency to prefer same-sex friendships (which sometimes may be strongly enforced by culture). The result is that there are opportunities for different contingent developments to occur between men and women, and that difference then becomes gendered, and gender inflation exaggerates those differences.

Genderings can also just be made up of course. There’s a long history of men theorising major gendered differences where none exist, and often that theorising is all that’s needed to create a runaway gender inflation where that difference becomes real solely because it is enforced. Because access to power is gendered, it is often easy to reshape gendering in ways that serve power.

What to do about it?

If gendering were purely descriptive and there were widespread acceptance that the possession of masculine or feminine traits didn’t necessarily imply much about other masculine or feminine traits, that would be one thing, but unfortunately it goes further than that in at least two ways:

  • People treat gender as predictive. If you have some gendered traits, you are expected to also have other gendered traits. This isn’t intrinsically incorrect, but leads to significant access problems where your gendered traits may open or close certain doors. I’ve e.g. written about this previously in the context of interviewing.
  • People enforce gendering. If you are perceived as a particular gender, you will be punished for not conforming to expectations of that gender. This actually doesn’t work well for anyone, because so many traits get gendered that even if we tick the right boxes on most of them it’s very unlikely we tick the right boxes on all of them. This is similar to some of the issues I talked about in On Not Quite Fitting.

As a result of these two factors, gendering tends to feed in to a lot of systems of control, where we reward people for gendering themselves “correctly” (by adhering to a consistent set of gendered traits) and punish them for mixed gendering.

Figuring out how to solve all of these issues is a rather big task, and I don’t propose to do that in this blog post.

I’ve previously described my utopian position as somewhat gender abolitionist. I no longer think that’s a good idea, because fundamentally regardless of whether we regard people as having genders, we will still regard many traits as gendered because of underlying biological differences, and I think most of the dynamics described above will continue to hold.

I think the current increasingly diverse range of non-binary genders we recognise is very helpful, both for letting people find the “points in gender space” that work for them, and allowing us to have a richer understanding of gender and gendered traits, but I don’t yet know what that richer understanding looks like.

My more modest short run suggestions are:

  • Gender inflation seems like a big deal, and I don’t think the extent of it is widely appreciated or understood. Be aware of its effects and try to damp them down rather than enforce it.
  • This feels like it shouldn’t need saying if you’ve read this far, but stop enforcing gendered traits. If someone exhibits a mix of masculine and feminine traits, that is a perfectly reasonable thing for them to do, regardless of whether that’s because they have a non-binary gender or are just breaking out of stereotypes within their binary gender.
  • In “Rewriting the Rules”, Meg-John Barker suggests that once you get to know someone as an individual you have much higher quality sources of information about their traits than relying on their gender as predictive. I strongly endorse this.

Jiminy Cricket Must Die

Before I get on to the main point of this post, let me ask you a question: when reading a piece someone wrote, does it matter if there use of language conforms to the rules of grammar your used to, or is it acceptable if their writing an they’re own dialect?

If you’re like me, that sentence made you uncomfortable. There’s a kind of twitch, a feeling that something isn’t right, a desire to fix it. Right? It feels wrong.

If you’re a Python programmer, I’d encourage you to go look at should-DSL and stare at the examples for a while until you understand how they work to get a similar sense of wrongness.

In his book “Voices: The Educational Formation of Conscience” Thomas Green describes these as voices of conscience. He defines “conscience” as “reflexive judgment about things that matter”, and argues that these voices of conscience are not intrinsic, but learned as part of our moral development through our membership in communities of practice – English speakers, Python programmers, etc.

That is, nobody is born knowing how to write Python or speak English, but in the course of learning how to do so we also learn how to behave as English speakers and Python programmers. By our membership of these communities we learn their norms, and by those norms we acquire voices of conscience that tell us to follow them. Because we exist in many different contexts, we acquire many different voices of conscience. Often these may disagree with each other.

Importantly, we can acquire these voices for the norms of a community even if we don’t adhere to those norms.

Green distinguishes between three different ways of experiencing a norm. You can comply with it, you can be obedient to it, and you can be observant of it. Compliance is when your behaviour matches the norm (which may e.g. be just because it is convenient to do so), obedience is when you actively seek to follow the prescriptions of the norm, and observance is when you have internalised the norm and experience it as a voice of conscience.

To this list I would also add enforcement – whether you try to correct other people when they fail to comply with the norm.

It’s easy to construct examples that satisfy any one of these but not the others. For example, the sentence at the beginning is an example of non-compliance where I am still observant of the norm: I know the sentence is wrong, but I did the wrong thing anyway. Similarly, I am observant of the norm when I notice that others’ usage is wrong, even if I make no attempt to enforce it (which generally I don’t unless I’ve been asked to).

It is specifically observance (and to some extent enforcement) that I want to talk about, and why I think the voices metaphor breaks down.

Let me turn to a different source on ethics, Jonathan Haidt. In his book The Righteous Mind he presents Moral Foundations Theory, which proposes a set of “foundations” of ethics. I think Moral Foundations Theory is approximately as useful a set of distinctions as Hogwarts Houses2, but a lot of the underlying research is interesting.

The following is a story that he presents to people as part of an experiment into where morality comes from:

Julie and Mark, who are sister and brother, are traveling together in France. They are both on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie is already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy it, but they decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. So what do you think about this? Was it wrong for them to have sex?

Jonathan Haidt, The Righteous Mind, 2013, p45

If you read interview transcripts of people’s reactions to this story (which was deliberately constructed to provoke a disgust reaction), two common factors emerge: it feels wrong, and to the degree people can justify that feeling, they first struggle and then eventually do so on the basis of its being a norm violation, rather than by pointing to any “objective” reason why it was wrong. This is partly because the story was specifically constructed to rule such reasons out – the parties are consenting adults, there is no risk of pregnancy, no harm is done, and the night is kept a secret, so it does not normalise something that might be harmful in general even if it’s not in the specific case. People make their judgements about morality based on intuitive, emotional responses to the scenario and then try to reason their way to that conclusion.

It is useful here to have the distinction between a belief and an alief. A belief is something that you think to be true, while an alief is something that you feel to be true (they’re called a-liefs because they come before b-liefs). Haidt’s research suggests that aliefs and not beliefs are the foundation of people’s moral “reasoning”.

This then is the source of my disagreement with Green’s metaphor of a voice of conscience: Conscience doesn’t start with a voice, it starts with a feeling. There is then a voice on top of that, allowing us to reason about and navigate the norm, but the feeling is what gives the voice its power. Without a felt sense of conscience, the voice is just knowledge that this is a norm to be obeyed in order to avoid the consequences of breaking it. Once the consequences go away, so does obedience to the norm, but if you have learned the feeling of conscience then it will linger for a long time even if you leave the community where you learned it.

How do we acquire these norms (Green calls the process normation)? Bluntly, operant conditioning.

When we obey the norm, we are often rewarded. When we disobey it, we are often punished. Sometimes this is enforced by other people, sometimes this is enforced by reality, sometimes this is enforced by our own latent fears about standing out from the crowd (itself a feeling of conscience that we have acquired – standing out makes you a target).

The conditioning trains our habits, and our feelings, to comply with the norm: we learn at an intuitive level to behave in ways that lead to good outcomes – behaviours that work well are repeated, behaviours that work badly are not – and from this we acquire the intuitive sense of rightness that comes with “good” behaviour.

So what does conscience feel like? Conscience feels like following the path in the world that has been carved for you via your training. When you stick to the path, it feels right and good, and as you stray from it the world starts to feel unsettling, and even if you no longer fear being punished for it you have learned to punish yourself for it.

This may sound like a very cynical framing of it, so let me just reassure you that I am not about to start talking about “slave morality” and how we should throw off the oppressive shackles of society’s unreasonable demands that we should be nice to people.

But there is an important point in this cynicism: The process of conscience formation, and the feelings that result, are morally neutral.

The ways in which we learn the rules of grammar are the same ways in which we learn that harming people is wrong, and the same ways in which people learn that, say, homosexuality is wrong. We might learn these through our memberships of different communities, and we certainly learn them with different strengths, but we acquire them through broadly the same mechanisms and acquire broadly similar senses of rightness and wrongness through them.

Over time, we are part of and pass through many communities, and accrue senses of conscience from them. Because of the shared felt sense of conscience, we cannot necessarily distinguish between them, and we end up with an underlying muddy sense of rightness and wrongness with no clear boundaries or sense of where particular parts come from. Some things feel like the sort of thing you’re supposed to do and some things don’t.

Much of this underlying sense of conscience is bad and needs to be destroyed.

Because the formation of conscience is a morally neutral process, the consciences that we form may be bad.

How often does this happen? Well, consider this:

  1. We learn our consciences from the groups in which we are embedded.
  2. We are all a part of society.
  3. Society is really fucked up.

As we go through life, we pass through different spaces, and learn their norms, and then when we leave we drag the consciences that we learned there along with us. Many of these spaces are broken, and we learn bad norms from them that we have to break out of if we want to grow – e.g. “men shouldn’t express their feelings” or “I’m not allowed to set boundaries”.

As we grow more morally sophisticated (which is not necessarily the same as more moral) we come to understand that there is a distinction between “feels wrong” and “is wrong”, and that just because we react to something with visceral disgust doesn’t mean we should necessarily consider it immoral.

As a result we separate ourselves from the feeling of conscience and privilege the voice of conscience. If we can’t explain why something is bad, we say things like “Well I wouldn’t do it but obviously that’s fine if you want to”. Sometimes through gritted teeth, sometimes while genuinely managing to convey the impression that you should do you.

At this point we pat ourselves on our collective backs for having become enlightened and moved past those ugly primitive urges and archaic social constructs. We’ve moved from listening to the feeling of conscience to the voice of conscience, and because voices are the tool of rational analysis we think we have moved to a higher level of moral understanding.

We haven’t. This was the wrong thing to do. It is at best a transitional step, but more commonly it is a dead end.

The problem with this strategy is that it conflates enforcement with observance, and aliefs with beliefs. We think that because we have stopped enforcing the norm we have stopped observing it3, and we think that because we no longer believe something is immoral we no longer alieve it.

It is important to bring our moral beliefs and aliefs into alignment, but the way to do that is not to suppress our feelings on the subject. Suppressing feelings doesn’t make them go away, it buries them and increases the associated trauma. Dissociating from ourselves isn’t the path to becoming a more moral person, it just causes us to build up trauma in the areas that we’re ignoring.

If we want to continue our moral development past a certain point, we need to learn the skill of navigating our conscience, and that means getting to a point where we can put aside the voice of conscience and look deeper into where the feeling comes from.

Instead of flat out declaring that our moral intuitions are wrong and that we shouldn’t feel that way, if something feels wrong we need to be able to approach that feeling and ask questions of it. The most important questions are: where, why, and from whom did I learn that this was wrong?

This doesn’t make the feelings go away. Making the feelings go away is not the goal, at least initially. The goal is to get a handle on them, and to get to the point where you can make progress. Rather than being in constant conflict between voice and feeling, you can start to navigate between the two, and hopefully eventually come to a healthier position where your moral aliefs and beliefs line up.

This is extremely hard. I wish I had nice, easy, shortcuts to offer on how to do that, but I don’t. I’m still figuring all of this out myself.

I think the basic starting point for solving this is getting better at engaging with our feelings at all. I’m still processing how best to do that, and I owe you one blog post about this, but for now I’ll hand you over to this Twitter thread I wrote yesterday.

Once we’ve done that, the next step is to do the work. Ask ourselves questions, learn why we feel a particular way. If we still think the feeling is valid afterwards, that’s great, we’ve learned more about our moral intuitions. If, on reflection, we decide that actually this moral intuition comes from a bad place, we can start to examine it and, from a place of safety, start to unpick some of the moral damage that was done to us.
