This is a point that I’ve been thinking about on and off for a while without ever reaching, or seeing elsewhere, a satisfying conclusion. It’s in the general category of “sometimes my politics and my epistemology are hard to make play well together”.
Suppose you meet a woman at a conference. Based solely on the fact that she’s a woman, you are 90% sure she’s a recruiter (Note: All numbers in this post are made up to make the point clearly and should not be considered accurate). This isn’t you being prejudiced – you’ve met a lot of women at previous similar events, and even at this one, and 90% of them were recruiters. Your judgement of the a priori chance that she’s a recruiter is an entirely accurate one.
(Reminder that I have no idea what the actual numbers are and that in most circumstances 90% is going to be a ridiculous overestimate)
The problem is not this judgement, but how you act on it. Maybe you really don’t want to interact with a recruiter right now so you avoid her, judging that the 10% chance that she’s a dev isn’t worth the 90% chance that you’ll have yet another awkward recruitment conversation. Maybe you do want to interact with a recruiter and you go up to her and talk excitedly about how you’re looking for a job and she’s like “Uh, great? I’m here to talk about consistency in distributed databases. I don’t really have any hiring power”. Maybe you just act surprised when you find out she’s actually a developer.
A useful feminist concept here is that of the microaggression: an interaction that is minor in each individual instance, but which in the aggregate serves to reinforce roles and express prejudice. All of the above are examples.
The fact that they’re minor in individual instances but major in the aggregate is part of why microaggressions are so insidious. Because each individual interaction is not in and of itself a big deal, if you only see a few of them you probably don’t perceive any real problem. e.g. in the examples above you might have deprived the pair of you of the chance of an interesting interaction, or slightly annoyed someone, etc. All of these are strictly worse than not doing them, but they’re also not the end of the world.
The problem is that everyone else is making more or less the same judgement as you. In practice people’s judgements will be inaccurate (usually tending towards overconfidence), but even in an epistemically optimal world where everyone has perfect reasoning, most people will come to something like that 90% number.
And this is pretty rough for the 10%. They’re now on the receiving end of a constant stream of microaggressions caused by these accurate judgements: The vast majority of people are treating them as if they’re something they’re not, or assuming them to be less competent at their speciality than they actually are.
(Aside: This being a problem does not require you to think that being a recruiter is in any way a bad thing. Recruiters sadly have a bad rep, but the problem here exists regardless of that: Being constantly assumed to be something other than what you are is grating)
Which will tend to mean they stop coming to conferences, and so that 90% number is going to get more extreme.
(Additional parenthetical disclaimer: Obviously this is not the only source of problem for women developers at conferences. It is likely swamped by other more serious problems. I have however definitely heard plenty of women developers complaining about things that sound awfully like being on the receiving end of this, so I don’t think this is just empty theorising)
This is the core problem of making accurate judgements about people: whatever judgement you make will tend to reinforce itself. It is based on broad statistical trends, so it will tend to add friction to interactions with the people who buck those trends, which will tend to discourage them – and thus the counterexamples to the trends you base your judgement on will tend to disappear faster than the people who fit the pattern.
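To make the feedback loop concrete, here’s a deliberately crude toy simulation (all numbers invented, like the 90% above): each round, attendees estimate the chance that a woman is a recruiter from the previous round’s attendee mix, and the women developers – who get misjudged with exactly that probability – drop out at some rate proportional to it.

```python
# Toy model of the self-reinforcing judgement loop. Every number here
# (starting counts, dropout rate) is made up purely for illustration.

def simulate(dev_count=100, recruiter_count=900, dropout_rate=0.3, rounds=5):
    """Return the (accurately estimated) recruiter fraction seen each round."""
    history = []
    for _ in range(rounds):
        # Everyone's judgement is perfectly calibrated to the current mix.
        p_recruiter = recruiter_count / (dev_count + recruiter_count)
        history.append(p_recruiter)
        # Devs are misjudged with probability p_recruiter; a fraction of
        # those misjudged stop attending. Recruiters, judged correctly, stay.
        dev_count *= (1 - dropout_rate * p_recruiter)
    return history

for p in simulate():
    print(f"{p:.3f}")
```

The recruiter fraction starts at the accurate 0.9 and climbs every round, even though nobody’s statistics are ever wrong – which is exactly the problem.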
And I’m not really sure what to do about this.
Oh, in the conference setting it’s easy enough. The benefits of accurate judgement are low enough that just going “Don’t do that then” is basically enough of a solution. Don’t form preconceptions about what people do based on their gender (or their race, or any of countless other categories) and try to treat everyone the same and you’ll probably just do fine in this case.
But a lot of the time when you’re making judgements about people it’s actually much more important and you do need to make accurate judgements. Consider for example hiring people.
Obviously you should not make judgements like “You are a woman therefore you are less likely to be good at this job therefore I won’t bother to interview you”. Even if this were true (it’s not) it would still be a terrible thing to do.
But a lot of people make judgements like “You do not have a Github profile with lots of open source code on it, therefore you are less likely to be good at this job, and therefore I won’t bother to interview you”. And guess what: open source contributions are significantly gendered, due to a variety of cultural problems (women tend to have less free time due to greater expectations around housework, child care, etc., and open source is not exactly an inclusive environment). This is somewhat related to what I’ve written about false proxies previously, but is more insidious: it’s almost impossible to come up with metrics that are completely oblivious to certain boundaries. Even the “hire them and work with them for several years and see how you find it” metric isn’t: what if your company is secretly a bit racist and you just haven’t noticed because you’re white? The black colleague you hired is having a much harder time of it than the white one, and so you will tend to judge them more harshly even if you yourself are completely ignoring their race.
About the best thing I know of is to screen off certain questions at the individual level when making these decisions (make as many decisions as you can without even knowing the person’s race, gender, etc., and where you do know it, do your best to ignore it), then later go back and calibrate: this question that we screened off… are we actually screening it off? Do we get significantly different results in our process for men and women? Or for different ethnicities?
This is worth doing when you can, but a lot of the time it’s impossible to do. If you’re a small company you probably don’t have the numbers to get good stats. If you’re an individual trying to form opinions about people you can’t do this sort of statistical analysis – you’re not gathering the data, you probably can’t gather the data, and a lot of the time you’re not even aware you’re asking the question.
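To illustrate the numbers problem, here’s a sketch of the kind of calibration check described above, as a plain two-proportion z-test (normal approximation, standard library only; all the pass counts are invented). The same gap in interview pass rates that is unambiguous with a few hundred candidates per group is statistically indistinguishable from noise with fifteen.

```python
import math

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """Two-sided z-test for a difference between two pass rates
    (pooled normal approximation)."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value: 2 * (1 - Phi(|z|)) == erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2))

# Roughly the same pass-rate gap, at a big company and a small one
# (invented numbers): 60% vs 45% of 500 each, then 9/15 vs 7/15.
print(two_proportion_z(300, 500, 225, 500))  # large n: clearly significant
print(two_proportion_z(9, 15, 7, 15))        # small n: lost in the noise
```

With 500 candidates per group the p-value is far below 0.01; with 15 per group it’s around 0.5, so the small company genuinely cannot tell whether its process is skewed.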
Which leads me back to “I don’t know what to do”, which is a pretty depressing point to end this piece on. I value both accurate judgements (not just for their own sake: they’re also necessary for making good decisions and helping people) and not reinforcing structural prejudice, and it’s completely unclear to me how to balance the two. My current solutions are basically just a bunch of patchwork and special cases, and I’ve no real idea whether I’m missing important areas.
If you’ve got any good ideas, I’d appreciate hearing them.