You may have noticed I have a category for some of my posts labelled “Open sourcing my brain”.
The basic idea is simple: I appear to be fairly good at thinking. I would like other people to be better at thinking. Here, let me share some of the ways I think so that you can think like me and thus be better human beings.
Setting aside the insufferable arrogance through which I view the world (which you’re probably acclimatized to by now if you read this blog), there’s one major problem with the whole enterprise.
That problem is of course that every single post in it is a complete lie.
Oh, it’s not a deliberate lie. I certainly believe most of what I write in it to be true, at least while I’m writing it. It’s not the sort of lie I tell to you in order to get you to behave more like I want. It’s the sort of lie I tell to me in order to… well, I’m not sure why I do it. The whole problem here is that actually brains are a bit mysterious.
Fundamentally, I am not able to read my own source code. There’s the quote by Ian Stewart that everyone loves which goes “If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.” I’m not convinced this is true, but it’s a good quote. In this case it doesn’t matter anyway: I’m not reaching the level where I’m properly trying to understand it. I’ve got a smattering of pop neuroscience and psychology, I’ve collected no real data other than my own subjective anecdotal experiences, and really the entire category is just an exercise in introspective navel gazing. Further, given the known plethora of cognitive biases and over-simplifications the brain performs even when you know to compensate for them, a lot of it is probably just completely false.
But, you know, it seems to work.
Fundamentally, I do seem to be pretty good at thinking. I know everyone thinks they’re above average, and it’s very hard for me to self-assess exactly how intelligent I am (or even if this question has any meaning), but as a rule of thumb if I’m not one of the smartest handful of people in the room then we’re in a pretty intimidating room. Based on external validation, I’m at least good at convincing other people that this is true, regardless of whether it actually is.
And I look at the things I do, and some of them genuinely do seem to help contribute to that intelligence.
And I can then put together convincing arguments for why this is a helpful thing that you should do.
But the fact that I can argue convincingly for something is no real evidence that it’s a good idea. It’s at best weakly correlated with it – I certainly find it harder to argue convincingly for most things I think are obviously bad ideas, but ultimately the primary purpose of verbal reasoning is not to understand the world but to convince other humans to do what you want. The fact that it turns out to be useful as a tool for understanding the world appears to be a happy side effect (note: Refer back to “I don’t actually understand neuroscience and psychology and you shouldn’t believe what I say on the subject” for details). I’m good at verbal reasoning, therefore I’m good at convincing people to do what I want. Therefore you should automatically distrust the notion that just because I’ve argued convincingly that something is a good idea it’s actually a good idea (note: This is another one of those times where I tell you how the trick is done and it works anyway). Unfortunately, I’m not always aware that I’m doing this – it’s very easy for me to accidentally put together a convincing argument for something I only half believe and come away convinced, even in scenarios where I suspect if I’d started arguing for the other case I could have convinced myself of that one too.
And for all I know when I look at what I do that I think helps, I could just be committing a massive act of post hoc ergo propter hoc. I did well. Before doing well I did this thing. Therefore I did well because I did this thing. Certainly in my head it looks like things like instrumentalist reasoning, mental models, lazy knowledge acquisition, etc. are valuable parts of my reasoning toolkit, but for all I know they’re just symptoms. It could be that I just have a “be smart at stuff” switch turned on in my brain and pretty much anything I do will work.
I sure hope this isn’t the case, both because I’d like to be able to share useful information about how to think and for the purely selfish reason that I’d like to get better at it. Also I’m terrified of the idea that if I don’t understand how my brain works then at some point it’s just going to stop working properly (there’s a reason Flowers for Algernon is basically the most horrifying book I’ve ever read). But “I really want reality to behave this way, so I assume that it will” isn’t a valid reasoning strategy for achieving true results (setting aside the fact that I have previously argued that you shouldn’t care about true results and that reasoning in restricted universes that look more like the universe as you want it to be is a totally valid instrumental reasoning process). The fact that I want it to be true isn’t evidence that it’s true.
Although, you know, sometimes it is. When what you’re reasoning about is your own brain, it turns out that beliefs can affect performance. For example, one subject I’ve read enough actual papers to feel moderately confident about (only moderately, because I lack the background in the relevant disciplines to know the gotchas. I understand the methodology and the stats. I don’t understand my own blind spots) is that a belief in the expandability of intelligence appears to increase your learning performance. This has been verified in a number of different studies and appears to be a legitimate empirically observable effect.
I’m unable to find any scientific evidence for or against some of my other beliefs about intelligence and how it works, so they remain thoroughly in the category of anecdata, but this particular fact at least is pretty encouraging: I don’t know a better way to ensure that I believe in the expandability of intelligence than to be constantly trying to do it (I suppose if I were constantly trying to do it and failing then that would be demoralising and cause me to stop believing it, but I appear to be pretty good at convincing myself that it’s succeeding. I may even be right).
Most of the time I don’t really worry about this. The question “What if everything I know is wrong and what if I don’t understand anything?” doesn’t seem to be a very useful one to dwell on. But it’s worth keeping in the back of my mind as a self-check to stop me getting too convinced of my own understanding of these things, and it’s worth keeping in the back of your mind in the unlikely event that you feel inclined to take the word of some guy on the internet as gospel (even if I do have my own cult).
So ultimately even if a lot of my attempts to open source my brain are convenient fictions and anecdata, I appear to find them useful to write, and I hope you find them useful to read. Although I have little science to back this belief up, I suspect that exposure to different modes of thinking is useful for improving your own. Maybe I’m just kidding myself and this whole exercise is wasted effort on both our parts, but hopefully you at least enjoy the show.