Category Archives: rambling nonsense

The moral argument for rationality

Before you read this post, I want you to do something for me.

Find yourself a hammer. If there isn’t one conveniently to hand, it doesn’t matter too much. Anything readily wieldable and fairly heavy will do. I have a butternut squash near me. You could use one of those. If you really don’t have anything to hand and can’t be bothered to go find one, you can just use your fist I suppose.

Now, place your left (right if you’re a lefty) hand on the table in front of you, take the hammer-like object in your other hand and bring it down really hard on the hand you’ve placed on the table.

Done? OK. Read on.

You of course didn’t do this. If you did, I’m really sorry. You should probably take that as a lesson not to trust advice without thinking about it for yourself, or maybe just a lesson that I’m a bit of a dick, but I owe you a cookie. Or a hug or something. My bad.

The rest of you, though, why didn’t you do it?

Well, because it would have hurt.

You don’t need some complicated theory of inferential reasoning to tell you that hitting your hand with a hammer hurts. You’ve got a straightforward feedback process in which your body goes “OW” when heavy things impact you at high velocity. This isn’t news.

What makes predictions about the world difficult is the quality and strength of the evidence you can gather.

We conduct complicated double-blind trials in medicine because it’s really hard to gather accurate and informative evidence about drug effectiveness – effects are statistical, subtle, and prone to confounders like the placebo effect.

We do not conduct complicated double-blind trials on whether it’s better to jump out of an airplane with or without a parachute, because it’s really not hard to gather accurate and informative evidence about this. People who fall from high places without a parachute tend to die. People who fall from high places with one tend not to.

Issues which affect you personally have direct evidence gathering built into them. You generally know how you feel, and you experience the results of things directly. You also have a lot of data points, because you’re on the job being you 24/7 with no holiday time.

Issues which affect other people however are much murkier. Unless you possess secret telepathic powers, you don’t have a direct hotline into their brain and you don’t know how they’re feeling. They might tell you, but by and large people are pretty well conditioned to not do that because it makes them vulnerable and because right after you’ve hurt someone is not the time when they’re feeling most inclined to trust you. You see less of any given one of them than you do of yourself, and they’re all different and confusing.

So determining activities that harm or help yourself is relatively easy, and determining activities that harm or help other people is relatively hard. In order to do the former, some simple common sense reasoning and learning from experience is more than sufficient. In order to do the latter, you need to do a reasonable amount of careful study and control for a lot of confounders.

I’m going to let you in on a spoiler: You hurt other people. This is most likely not because you’re a bad person, it’s just a thing that happens. Sometimes you do it purely by accident. Sometimes you do it because we live in an unjust society and we all implicitly support it in one way or another. Sometimes you even do it with the best of intentions.

Most of this you don’t notice because you don’t have that direct feedback – you can’t feel what it’s like to be that other person, so you don’t get a direct experience of the consequences of your actions on them. Some of it you deliberately don’t notice because you don’t want to.

Hurting yourself though? You pretty much know when you’re doing that. It’s not that we don’t do it, and it’s not that we never lie to ourselves about the fact that we are doing it, but most of the time when we do things that hurt ourselves we do so not out of ignorance but because we’ve made a conscious decision to do something painful. This isn’t always a good idea, but it is at least one made in relatively full possession of the facts.

By and large, we don’t like pain, and we’re reasonably good at avoiding it. As a result, I think it’s fair to say that the majority of us hurt other people at least as much as we hurt ourselves.

I think it’s also fair to say we don’t generally want to be hurting other people (setting aside people who enjoy being hurt and explicitly consent to it as a special case). If that’s not the case for you then… well. I’m not really sure what to suggest.

How do we stop hurting people? Or, at least, reduce how much we hurt them?

The first step is to understand when you’re doing it. The second step is to be able to predict whether a set of actions will do it.

That is to say, these are the skills of gathering evidence about the world and making predictions on the basis of that evidence.

These skills are often lumped under the heading of “rationality”, or “empiricism”.

They are useful for bettering your life, but as previously mentioned you’re already awash in a sea of evidence about what causes harm or good in your life. It’s not that these skills aren’t useful here, but you’re certainly a lot closer to the point of diminishing returns than you are in cases of scarcer evidence and more complicated situations – i.e. other people.

This gives what I regard as the moral argument in favour of rationality:

It is easy to go through life being unable to accurately predict the consequences of your actions because you’ve got a rough and ready set of heuristics that mostly keep you out of harm’s way. To some degree you’re even actively encouraged to do so – ignorance really can be bliss, and understanding the world around you and being able to predict the effect of your actions will not necessarily make your life better. It may even make your life worse (more on that in another post). The prime reason to learn this skill is not to make you understand the consequences of your actions for yourself, but to understand the consequences of your actions for other people. It is a necessary skill if you want to understand how you affect the world and how to make it a better place for other people to live in.

Importantly, it also gives what I regard as the moral caveat to rationality:

You are causing at least as much harm to other people as you are to yourself.

You are developing skills which are more useful at understanding and predicting that harm for other people than they are for yourself.

Therefore, one of the consequences of improved rationality should be that you learn more about how your actions affect and hurt other people, and how to make their lives better, than about how to improve your own life.

If you find this is not the case, then what you are doing is not rationality, it is merely masturbation. It may feel good, and as long as it doesn’t become an unhealthy obsession there’s certainly nothing wrong with it, but it’s not exactly very productive is it?

This entry was posted in life, rambling nonsense.

An interesting experiment that no one seems to have performed

A thing I am interested in is group problem solving and group intelligence. When pondering this, the following interesting experiment occurred to me. I can’t find any evidence of it ever having been done unfortunately (yes, I could run it, but I’m not really in the right career for that…):

The basic idea is simple. You take some sort of untimed intelligence test – e.g. Raven’s Progressive Matrices might be a good choice (I don’t know enough about psychometrics to say for sure if it is or not).

You now take a largish sample of people. Each of these people takes the test.

Now you do something different. You pair the people off, and they take the test collaboratively. i.e. two people are put together in a room with the test and are asked to solve it together.

The question is this: How does the score of pairs relate to the score of individuals? Does this depend on any priming you give them about how to collaborate (e.g. one person getting the final say on disagreements versus flipping a coin)? Is the pair’s score larger than the maximum of their two individual scores? Smaller?

You’d have to do some careful experiment design to watch for training effects – e.g. do half of the individual tests after the paired tests rather than before and see if it makes a difference – but I think most of these problems can be overcome.
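If nothing else, a toy simulation gives you a null model to compare real data against. Here’s a hypothetical Python sketch (the uniform ability distribution and the “a pair solves an item if either member would” rule are strawman assumptions of mine, not real psychometrics):

    import random
    import statistics

    def simulate(n_pairs=1000, n_items=36):
        """Toy null model: each person solves each item independently with
        some personal ability; a pair solves an item if either member would."""
        individual_scores, pair_scores = [], []
        for _ in range(n_pairs):
            abilities = [random.uniform(0.3, 0.9) for _ in range(2)]
            solved = [[random.random() < a for _ in range(n_items)]
                      for a in abilities]
            individual_scores += [sum(s) for s in solved]
            # "No synergy" baseline: the pair gets an item iff either individual does.
            pair_scores.append(sum(a or b for a, b in zip(*solved)))
        return statistics.mean(individual_scores), statistics.mean(pair_scores)

    print(simulate())  # roughly (21.6, 30.2) out of 36 under this model

If real pairs reliably beat that “either member solves it” baseline, collaboration is adding something beyond two independent attempts at each item, which is exactly the interesting case.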

This entry was posted in Ideas, rambling nonsense.

Some ill thought out musings about identity

Up front warning: Please read this post carefully. I am exploring viewpoints, not espousing them.

Second up front warning: These are all very amateurish thoughts on the subject. Other people have no doubt thought more deeply and more sensibly about it than I have.

Let me tell you about something that happened to me as a kid.

I was playing by myself in the garden. Running around, jumping, etc. Energetic “I’m too young to know about getting tired” kid stuff.

One of my jumps was… surprisingly high. Not like “Leaps tall buildings in a single bound” high, but still unreasonably so: Maybe a few times my height.

That was pretty cool, so clearly I needed to try it again.

After some practice, I found I could basically “push” off the ground, and it would give me a huge boost to my jump. It was an action that felt a little bit like extending your hands downwards and shoving, but not quite. I couldn’t get it to work reliably, and it had an annoying tendency to fail to work when I was nervous about it (e.g. when trying to show it to other people), but I could get it to work about one time in 5 normally.

Then something really cool happened. I was doing a particularly high jump, and at the top of it it felt like something in my push just caught, and rather than coming back down to earth I just kept going up. I found I could use the push to control my height and direction pretty easily, and pretty soon I was merrily flying around. This was exactly as awesome as you’d expect.

Obviously, none of this actually happened.

Or rather, it did happen, but I was asleep at the time. I had this dream a lot, it felt incredibly real, and it had extremely consistent rules and mechanisms for how it worked. It even came with its own built-in explanation for why I couldn’t seem to do it when awake (that I couldn’t make it work reliably in the first place).

It’s not like I actually believed I could fly when I was awake. I knew the difference between dreams and reality. But you know that thing where you have an incredibly vivid dream about a mundane thing and you wake up and you’re not 100% sure if you’re remembering a dream or a reality until you’ve been awake for a bit longer and sorted the details out? It was a lot like that. There were a lot of mornings where I had to think hard to remember whether or not I could fly.

And, at some fundamental level, I still kinda believe that I can.

I mean, I know, physics. Also biology. Science in general is basically conspiring to ruin my fun here. I know with 100% certainty that I do not possess the ability to fly, but there’s still that nagging feeling that it’s there.

This feels fundamentally different from just fantasizing about being able to fly. I’d love to be able to teleport, or to read minds, or any one of a million super powers that comics have told us are totally a thing people can do, but there’s no sense that I should be able to (actually I have much the same feeling that I should be able to move things about with my mind, for much the same reasons. I similarly have no actual belief that I can do this). It’s not that I want to be able to fly, it’s that it feels like I should be able to fly.

Why do I bring this up?

Well, this all started with a discussion the other night with my friend, Kat Matfield (who is very good at forcing me to think about things).

I like to form mental models of how people I disagree with could think by seeing if I can imagine a way to adjust my beliefs to agree with them. This is not really intended to produce accurate mental models – I don’t need to make predictions off them, and I don’t expect predictions made off them to be correct. Their goal is to take a position that I cannot imagine a reasonable person holding and turn it into one I can imagine a reasonable person holding. It forces me to take them seriously, and thus means that if I need to engage with their beliefs there’s a better chance that I’ll actually try to understand where they’re coming from rather than just dismiss them out of hand.

The subject of Otherkin came up (I’ll get to how later), so naturally I felt the need to come up with a way in which their beliefs were plausible.

What are their beliefs? Well. It’s a collective term for people who believe they’re not actually human. Examples include people who believe they are elves and people who believe they are animals. Other related beliefs are multiples (who believe they are multiple people), fictives (who believe they are specific fictional characters) and factives (who believe they are specific other real people).

I can’t really figure out what would cause me to believe one of these things. However the flying thing feels analogous: It is (sort of) a belief about myself that does not correspond to physical reality.

This feels like a good starting point. If I can justify claiming that flying is part of my identity, I feel like otherkin and their ilk become plausible even without sharing their specific beliefs. Are flyingkin a thing? I don’t know. I don’t really care. The goal is a working analogy, not true verisimilitude.

So let’s see where we can go with this.

I have this innate feeling that I can fly. Is it thus reasonable to say that being able to fly is part of my identity?

Well, no, because I don’t actually believe I can fly.

I can more or less imagine coming up with plausible excuses for that. Like maybe technology has stolen all the magic from the world or something, and that’s what’s stopping me from flying. But I don’t really believe that either, and I’ve trained myself well enough, and know what belief in belief looks like, that I don’t have any real way of imagining believing it that current-me wouldn’t just label “And then I became stupid”, which rather defeats the object of this experiment.

So instead I’m going to take a different tack and question the nature of identity.

Now let’s talk about gender.

Suppose, for the moment, that gender reassignment surgery were physically impossible (or, less drastically, that someone has a medical condition that prevents them from having it). Suppose they nevertheless consider themselves to be a different gender than the sex they were assigned at birth. Do we consider this valid?

Well, yes.

Trans* is not dependent on gender reassignment. We (well, many of us) have accepted that gender is a social construct distinct from sex, and that your gender is a matter of personal choice. Many people who identify as a gender which is distinct from the sex they were assigned at birth have not and will never have reassignment surgery, and that’s fine.

I’m treading on eggshells a little bit here, so I just want to remind you to read carefully again. The above is what I believe. Nothing that follows should be taken to mean otherwise (even the later bits where I start to explore how one might disagree with this point of view).

Now… here’s the thing. The fact that we have decided that gender and sex are distinct things is itself a social construct. Neither “gender” nor “sex” are real things. They are fairly fuzzy and complicated labels for things, and historically they are labels which have had the same meaning.

When we decided to split “gender” from “sex” we took a label that included a mix of roughly correlated social, mental and physical traits and split it into two labels: one covering the mental and social traits (who you are and how you present), the other covering the physical ones (the implementation details of your body).

And not everyone is on board with this split. Many people just don’t understand that it’s a thing, and many actively disagree with it (more on the latter later).

The result is that if you talk to one of these people and say “I am male”, they hear a much larger set of implications (including “I have a penis”) than you might have intended to convey.

Is it possible that we’re in the same boat as this person? That if I were to say “I am a flying person” and you were to hear this as “I can literally fly. Wheee!” it would be the same as if I were to say “I am male” and you were to hear this as “I have a penis”? (It happens that you would have reached a correct conclusion in the latter case, but that doesn’t make the inference valid).

If the ability to fly in some way feels part of me, and we have already accepted that identity is a concept that is only loosely connected to the physical concepts from which it appears to originate, is it unreasonable of me to say “I identify as a flying person”? Or is it unreasonable of you to reject that?

To be honest, I don’t know. In practice, I can’t shake the feeling that anyone who tells me that being able to fly is part of their identity is just making shit up. But I’m aware that “making shit up” is also what many trans people are accused of, so I feel like I should at the very least treat that reaction as a warning sign to be questioned.

In the interests of disclosure: I have met someone in the past who believed she was a fairy. I don’t know if she identified as otherkin or had come to the conclusion independently. I (think I) stopped short of being a complete asshole about it, but I was certainly rather less charitable than I might have been about it.

I should also say that I rather expect all this beautiful theorising to be ruined by the arrival of actual otherkin telling me that no, they literally believe that elves are a real thing and they have magical powers. I draw the line at empirically false viewpoints.

I actually came at this line of reasoning from the other side: Trying to get inside the heads of people who do not accept the validity of trans people.

To a certain extent, all of the above can be regarded as a reductio ad absurdum against accepting trans identities. It’s very easy to see how (possibly just by finding myriad actual examples of people doing this. I haven’t actually looked, but I’m sure someone has done it for real) someone could make the slippery slope argument “If you accept that gender identity is a different thing from physical sex, it’s only a few short steps to allowing people to believe they’re elves!”

I don’t buy this argument. As I’m fond of saying, the problem with slippery slope arguments is that once you start accepting them you open yourself up to accepting all sorts of other fallacies too.

But it’s easy to perceive this as a sort of sliding scale, where at the one end you’ve got people who don’t acknowledge that identity is fluid enough to support your gender being a distinct thing from your physical sex and at the other end you have people who believe thinking you’re an elf is totally OK.

And if the elf example doesn’t do it for you, it’s likely that some more extreme example does. Consider someone who is a factive (they believe they are another real person) who tells you that they really identify as being you. Yes, you personally. They just feel such a connection, it’s as if you’re one person. I don’t know about you, but my answer would pretty much be “No, you’re not. Fuck right off and don’t come back”. Or imagine a cis straight white guy telling you that sure they’ve got all this privilege and all, but they really identify as being a trans black lesbian. It’s hard not to react to that by thinking the person in question is a bit of an asshole.

The point I’m making is that there’s a line to draw. This line is probably fuzzy and movable, and there are going to be some massive grey areas, but you’re probably drawing it somewhere. Once you’ve accepted that, it’s not so hard to imagine how you might draw the line in some different place.

A thing to note is that “a different place” does not necessarily mean that there is a single linear scale. One person might think that multiples are fine – anything where you’re identifying as a real thing is OK – but that fictives and people who think they’re elves are just way too out there. Another person might go “Well, I don’t get it, but if it makes you happy that’s cool” to fictives and otherkin but go “No, sorry. You are not transracial. You’re just being an asshole”. The space of identities is murky and complicated, and a lot more than a single scale from more extreme to less extreme.

For me the boundary is basically defined by three things, in order of decreasing importance:

  1. Are you causing harm to yourself or others?
  2. Do you believe things which are empirically false?
  3. Can I take your claim seriously? (I’m not proud of this one and don’t really think it should be a factor, but in practice it ends up being one anyway)

The empirically false thing requires some further refinement.

“I believe I should be able to fly” is not an empirically false statement, regardless of whether it is impossible for me to acquire the ability to do so. “I believe I can fly” is one, and is likely to be dangerous. Similarly “I believe I have a penis” may be an empirically false statement (though is probably none of your business if we’re having an argument about it! Also, it’s been pointed out to me that due to deformities and intersex conditions, it literally may be a matter of debate as to whether or not what a given person has counts as a penis), but “I believe I should have a penis” is not one.

This then leads to the interesting consequence that you may hear a statement as an empirical prediction when it is not one. If you hear “I am a man” as “I have a penis”, you may be hearing a statement you believe to be empirically false but which is in fact not because you are using a different definition of terms from the person making the claim. Care is required.

That aside, there is a single overarching thing which trumps all of these:

Is this a situation in which it’s OK for me to express an opinion on this?

This doesn’t affect what my opinion is, but it may affect how I express it. I am not the identity police. I am an opinionated know-it-all, so I probably err on the side of expressing an opinion where I shouldn’t, but if it’s someone I don’t know very well then making a judgment about whether they’re causing harm to themselves or others is rather tricky. For all I know their beliefs about their identity are a coping mechanism that is preventing them from doing far more destructive things and simply blundering in, flailing around and going “I know you believe you’re an elf, but have you considered science?” is going to cause far more harm than good. I may also be a poor judge of harm. Some people think trans people cause themselves harm by not accepting their “real” (i.e. assigned) gender. I know they’re wrong, but how do I know I don’t have similar misconceptions?

At the same time, seeing obvious harm and going “Nope, none of my business”, is not cool either. Sometimes interventions are needed, and sometimes people close to the situation are too close to see it, so this is another grey area.

I like to end my articles on solid, punchy conclusions, but I don’t really have one here. Identity is complicated, and these were some of my thoughts on the subject. Please don’t shout at me, but please do correct me if I’ve got something horribly wrong or am deeply misguided.

This entry was posted in Feminism, life, rambling nonsense.

A freemium model for scheduling

As a software developer I like proposing algorithmic solutions to social problems.

As someone with an unhealthy addiction to startups, I like business models (actually I’m not sure that follows…).

As a Brit, I obviously fucking love queuing.

As a leftie, I think socialized healthcare is a fundamental human right and I love the NHS to bits.

So when I tell you that this is a blog post about a queuing-algorithm-based business model inspired by a question of socialized healthcare, you can understand how it might be difficult to convey my excitement about this idea without, frankly, being a bit inappropriate.

It also works well with randomized algorithms, a pet love of mine, so there’s that too.

[Image: “Best cup of tea ever” by alittlething, on Flickr – for some added Britishness…]


This entry was posted in rambling nonsense, Work, World domination.

A heuristic for problem solving

This is a heuristic for how to choose what steps to take during a debugging or problem solving process. I think it is too laborious to calculate in practice, but it might be interesting either for machine learning or for some sort of interactive problem solving system which talks you through the steps (indeed my friend Dave and I have talked about building such a thing. We’ve tentatively settled on the name Roboduck).

I also suspect I’m reinventing somebody’s wheel here, but I’m not sure whose. If nothing else it pretty strongly resembles some of the maths you would use for building decision trees.

It started after I read John Regehr’s How to Debug. In it he illustrates the debugging process with what are basically probability distributions over likely causes of bugs.

One particularly interesting paragraph is about how you choose what experiments to perform in order to eliminate hypotheses:

Hopefully the pattern is now clear: based on our intuition about where the bug lies, we design an experiment that will reject roughly 50% of the possible bugs. Of course I’m oversimplifying a bit. First, experiments don’t always have yes/no results. Second, it may be preferable to run an easy experiment that rejects 15% of the possible bugs rather than a difficult one that rejects exactly 50% of them. There’s no wrong way to do this as long as we rapidly home in on the bug.

It occurred to me that, although there may be no wrong way to do it, clearly some ways of doing it are better than others, and we can actually make this precise.

Basically, when you have your probability distribution about what you think the state of the world is, it is possible to assign a number that almost literally means “How confused am I?” (actually it means “How much additional information do I expect to need to figure out what’s going on if I proceed optimally?”). The Shannon entropy.
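Concretely, if your current beliefs assign probability \(p_i\) to the \(i\)-th hypothesis, that number is

\[
H = -\sum_i p_i \log_2(p_i)
\]

It is 0 when a single hypothesis has probability 1 (no confusion at all), and maximal when all hypotheses are equally likely.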

Your goal is to get the entropy of your current set of hypotheses down to 0 (I know what is going on).

This means that a fairly sensible way to proceed is to maximize the rate at which entropy decreases (this isn’t necessarily optimal, because it’s a purely greedy local search which you might be able to beat by finding a better starting point, but it should be pretty good).

So, how do you choose which experiment to perform? You look at the expected decrease in entropy and divide it by the time it will take you to perform the experiment.
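Here’s a minimal Python sketch of that selection rule, under the simplest possible model where each experiment deterministically maps each hypothesis to an observable outcome (the function names and the dictionary representation are my own inventions for illustration, not from any real tool):

    import math

    def entropy(dist):
        """Shannon entropy (in bits) of a probability distribution over hypotheses."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def expected_gain(prior, outcome_of):
        """Expected decrease in entropy from running an experiment.

        `outcome_of` maps each hypothesis to the outcome the experiment
        would produce if that hypothesis were true."""
        # Group hypotheses by the outcome the experiment would give.
        groups = {}
        for hyp, p in prior.items():
            groups.setdefault(outcome_of[hyp], {})[hyp] = p
        # Expected posterior entropy, weighted by how likely each outcome is.
        h_after = 0.0
        for group in groups.values():
            p_outcome = sum(group.values())
            posterior = {hyp: p / p_outcome for hyp, p in group.items()}
            h_after += p_outcome * entropy(posterior)
        return entropy(prior) - h_after

    def best_experiment(prior, experiments):
        """Pick the experiment with the highest expected gain per unit time."""
        return max(experiments,
                   key=lambda e: expected_gain(prior, e["outcome"]) / e["time"])

    # Made-up example: three suspect components, two candidate experiments.
    prior = {"parser": 0.5, "codegen": 0.3, "linker": 0.2}
    experiments = [
        {"name": "run parser unit tests", "time": 1.0,
         "outcome": {"parser": "fail", "codegen": "pass", "linker": "pass"}},
        {"name": "bisect the whole build", "time": 10.0,
         "outcome": {"parser": "fail", "codegen": "fail", "linker": "pass"}},
    ]
    print(best_experiment(prior, experiments)["name"])  # "run parser unit tests"

Note the greedy structure: nothing here looks more than one experiment ahead, which is exactly the local-search caveat above.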

That’s pretty much it, but let’s do some worked examples.

Suppose we have \(N\) equally likely possibilities, and an experiment will let us determine if it’s one of \(k\) of them. That is, if the experiment gives one answer we will have \(k\) equally likely possibilities, and if it gives the other we will have \(N - k\) equally likely possibilities. Let’s write \(p = \frac{k}{N}\) for convenience.

Our current entropy is \(\log(N)\) (all logs will be base 2).

We expect to get the first answer with probability \(p\), and the second answer with probability \(1 - p\).

For the first answer we have entropy \(\log(k)\). For the second we have entropy \(\log(N - k)\).

So the expected decrease in entropy is

\[
\begin{align*}
E(h) &= \log(N) - p \log(k) - (1 - p) \log(N - k) \\
&= p(\log(N) - \log(k)) + (1 - p)(\log(N) - \log(N - k)) \\
&= -p \log(p) - (1 - p) \log(1 - p)
\end{align*}
\]

Which, it turns out, is exactly the entropy of the experimental result (that is, our experimental result is a random variable which takes one value with probability \(p\) and the other with probability \(1 - p\), and this is the entropy of such a variable).

This is a nice result which is unfortunately false in general: the expected information gain of an experiment is the mutual information between the hypothesis and the outcome, which can be much smaller than the entropy of the outcome alone. To see this, consider running an experiment where we flip a coin and observe the result: this experiment’s outcome has entropy \(\log(2)\), which is at least as high as that of the preceding experiment, but it gives us precisely 0 information about our hypotheses.

Let’s continue with our example and use it to analyze the example in the quote:

Suppose we have an experiment that would let us eliminate 50% of cases. This gives us an expected information gain of \(1\) bit. Now suppose we have an experiment that would let us eliminate 15% or 85% of cases. This has an information gain of \(-\log(0.15) \cdot 0.15 - \log(0.85) \cdot 0.85 \approx 0.61\) bits. So the former experiment is about \(1.64\) times as good as the latter experiment, and should be preferred unless you can perform the latter experiment at least \(1.64\) times as quickly.
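As a quick sanity check, these numbers are easy to reproduce (a throwaway snippet of mine, using the same Bernoulli entropy formula as above):

    import math

    def gain(p):
        """Expected information gain, in bits, of a test that eliminates
        a fraction p of the equally likely possibilities."""
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    print(gain(0.5))               # 1.0
    print(gain(0.15))              # ~0.61
    print(gain(0.5) / gain(0.15))  # ~1.64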

This entry was posted in Numbers are hard, rambling nonsense.