# Thinking through the implications

There’s a concept that goes by various names and forms. I learned of it as Maslow’s Hammer, but apparently it’s more commonly known as the Law of the Instrument.

These two forms go as follows:

“Give a small boy a hammer, and he will find that everything he encounters needs pounding.” (the Law of the Instrument)

“if all you have is a hammer, everything looks like a nail” (Maslow’s hammer).

I have mixed feelings about the concept. On the one hand, it’s undeniably true. On the other hand, I’ve never liked the degree to which it feels pejorative. If you have a hammer, solving your problems by hitting things is a great way to learn about the different applications of hammers, which are many and varied.

Imagine being an early human and encountering the concept of a hammer for the first time. Maybe you’ve just used it to crack a nut.

Do you:

1. Say “Oh, cool. A better means of cracking nuts. I like nuts” and go on to fill your belly with lots of delicious nut meat.
2. Think “Oh my god this changes everything”, think up a dozen other applications to tasks that would previously have been hard to impossible, and take the first step in a long journey that begins with basic tool making and continues straight on through some distant future descendant sitting in front of a computer typing these words.

I suspect quite a lot of people did the first before somebody did the second.

Hammers are the precursor tool that leads to all other things. They give you the ability to exert greater pressure on things than you safely can on your own. You can crack nuts and shells, but you can also grind things, you can pound staves into the ground, and most importantly you can make other tools by shaving off fragments of flint to produce blades. Hammers are amazing and I won’t hear a word against them.

But there’s another precursor tool that is more important yet, which lives inside your head. This is the ability to generalise, and with it to consider the idea of a hammer as a general purpose tool with multiple applications. Plenty of animals have hammers. Very few of them use their hammers for more than a few specialised purposes, let alone to produce blades and other tools.

The most obvious example of failure to generalise I’ve ever lived with is Dexter.

Photo by Kelsey J.R.P. Byers

Meet Dexter.

Dexter is simultaneously very clever and very stupid: He is clever because he figured out how to open the door to my flatmates’ bedroom. He jumps up, braces himself on the door handle, pulls it down with his body weight, and then lets the momentum push the door open. If the door isn’t locked it works very well (if the door is locked, he just repeats it until people are annoyed enough by the noise to unlock it).

But he is very stupid because there are four other doors in the flat that he is sometimes cruelly and unjustifiably trapped on the wrong side of, and he has absolutely no idea what to do about this. One of the other doors is literally identical in configuration to the one he could open, but it’s a different door, so it must have some fundamentally different magic spell required to open it.

As far as I know he has never even tried his door opening technique on any of the others. If he wants through to the other side, he just sits in front of the door and whines until someone lets him in (this is a much less effective technique).

The problem with Dexter (well, one problem. He has so many) is that although he has figured something out, he lacks the ability to think through the implications and apply it more broadly.

In contrast, the problem with a lot of people when solving problems is not that we can’t generalise, it’s just that we don’t.

We learn how to use a hammer to crack a nut, and then we go “Oh, cool. Now I can crack all these other nuts in front of me much more efficiently” and keep on cracking nuts. We open the door, and now we have access to the nice warm bed and are beyond caring about all those other doors we could open.

This happens all the time to everyone – I notice it in daily life, in software development, in mathematics. Any area where you regularly solve problems (which is, to a first approximation, every area) has tools you could be using that people routinely miss how to generalise.

One reason this crops up so often is that it’s something of a limitation of learning while doing. If you’re learning because of the problem in front of you, it’s tempting to stay very focused on the problem in front of you and not think through the broader implications of what you’ve learned.

The solution is instead to be the child with the hammer. You don’t have to stop what you’re doing to do it, but once you’ve learned a new technique, spend some time going around hitting things with it.

Be careful what you hit – some things are fragile – but as long as you’re in a place where it’s safe to experiment then you should. It will give you solutions you might not have thought of, and significantly refine your understanding of what you’ve learned by seeing how it fits into a broader context.

I tend to do this mostly automatically (or, at least, I notice myself doing this a lot. There might be whole categories of things I learn and then forget to do this with), which means I don’t have very good habits to suggest for how to do it, but here are some things I think might help you remember to do this:

• When learning by doing, write down interesting things you learned so you remember them later explicitly.
• When you learn new things, try to come up with at least one application of them outside the context you’re learning them in.
• When you encounter a new problem, see if anything you learned recently applies to it.
• When you re-encounter a problem that you previously believed was hard or impossible, spend at least 5 minutes thinking about why it is hard or impossible and if anything you’ve learned since then changes that.

I don’t know how well these work when you do them explicitly, but they should at least be better than not trying them.

Regardless though, try to keep the key idea in mind: When you learn about new things, ask how else you can use them. Think through the implications.

This entry was posted in Open sourcing my brain on by .

# How to make good things

I like to make things that are good. Sometimes I even succeed.

I realised recently that there’s a pattern that a lot of my successful attempts match (and many of my unsuccessful attempts too; there is no royal road to quality), and it feels different from how other people often seem to go about this, so I thought it might be worth writing down and codifying.

As with all such patterns, this post is probably not literally true, but I think it might be quite helpful anyway.

So suppose you want to make something good. What do you do?

“Make” can be a fairly general term here. I use something like this technique for making recommendations to people as much as I do for creating new things. The creative examples are probably more interesting because they’re harder work and easier to get wrong, but I’ll use both for examples.

The system that I have more or less boils down to two main principles:

• Start with examples of things that are good that you don’t want to make
• Define rules for your project and stick to them until you can’t feasibly do so

These principles are mostly designed around the desire to take an unstructured creative process and lightly structure it in a way that doesn’t add much cost but tends to improve the outcome.

When trying to make something, I think most people rely on a fairly unstructured process for figuring it out, which results in a plan of attack that looks something like the following:

1. Decide what “good” looks like for this problem.
2. Make something good.

It’s not an unreasonable plan of attack, and in many cases it will succeed.

Examples of how applying this strategy looks:

1. When recommending a book, just suggest a book you really liked.
2. When writing software, ask people what their problem is and write software that solves that problem.
3. When writing a story, come up with an idea for it and write that story really well.

All of these more or less succeed or fail on the strength of how well your skills fit the problem and how well you apply them to it: If your taste in books is a good proxy for the person you’re recommending a book to, you will recommend books well. If you’re good at writing software and understanding people’s problems you will write software that solves their problems. If you’re a good writer, you will produce a good book.

There’s a lot more to say about it than that, but it’s more or less beside the point of this essay. For now I will take it as read that if you’re good at what you do and you execute it well you will mostly produce something good.

Mostly.

The major problem with this plan of attack for me is novelty.

Often novelty makes the difference between something that is good and something that is merely OK.

Example: I actually really enjoy Elementary as a TV show, but I wouldn’t exactly describe it as good.

Because let’s be honest. We’ve seen this show before.

There’s nothing wrong with making the same thing over and over again. It’s a useful skill building exercise, particularly when you’re starting out, and often as a consumer of stuff I don’t necessarily want new and challenging; I just want something fun that I know I’m going to like.

But even within that some novelty is important:

For learning: Making the same sort of large thing over and over again quickly plateaus as a learning technique. Practising a skill repeatedly only works well if the problem is extremely small, so you get fast feedback. At a larger scale, if you’re not doing something substantially different you’re learning progressively less with each repeat.

For consumption: Even within the brain candy category, I think we overestimate how much we want the same thing over and over again. If we really wanted that we’d just watch reruns or reread the book. You know how an author’s second book is often a lot worse than their first book? It’s probably mostly not. It’s just that the spice that novelty added has worn off and you’re seeing a more “objective” measure of how good their work actually is.

So those are the reasons why novelty is Actually Important. It’s a stronger priority than that for me – I just don’t like retreading well explored ground – but even if you don’t go as far as me on that front novelty is still important.

(side note: This will probably prompt someone to send me a link to prior art for the subject of this post as a hilarious joke. Please do. I don’t read enough on this and I’d like to read more, but it’s hard to separate the wheat from the chaff so I often don’t bother)

So, the key to producing something good is novelty? Great. Let’s make something novel.

Unfortunately just setting out to produce something novel also doesn’t work. If you just seek novelty then you will probably fail at the goal of being good. Novelty is easy: Pick a bunch of random things (use dice or cards or computers because humans are bad at random), throw them together, see what happens.

What will usually happen is that it won’t work. I can pick a random book and the person I’ve recommended it to probably won’t have read it, but they probably also won’t like it. I can implement this Facebook for Dogs in a custom Forth interpreter implemented in C using continuation passing style, but it will probably segfault, and even if it doesn’t it’s still Facebook for Dogs. I can write a story about ten random characters saving the world from an attack force of flying cheese graters, but at best you’re going to get an absurdist comedy.

Even at milder levels, novelty can often be a trap: You get recommendations that are more weird than good, half-baked prototypes, or stories where the core idea doesn’t really work. (I have done all these things).

So you need to find a way to produce things that are both novel and good. Neither suffices on their own.

You can’t optimise for two things simultaneously, so you need to figure out a way to combine them. One way would be to allow the two to trade off against each other – some amount of novelty can be sacrificed for some amount of goodness – but I mostly find that produces results you’re not very satisfied with on either front.

Instead the way I like to think about this is that novelty should never be the goal, but it can reasonably be a constraint. You are not trying to produce something maximally novel, you are trying to produce something maximally good which is also novel enough.

This matches what we’re trying to do much better: Goodness is the priority, but novelty is a necessity.

And the way to work towards this is not to seek novelty, but to have a process that produces novelty automatically while you work on making the thing good. Fortunately, I have one of those to share with you.

The basic starting point is to take this idea of novelty as a constraint and modify the starting procedure as follows:

1. Decide what “good” looks like here.
2. Make something good that is also novel.

Note: This is a bad plan.

The problem with this plan is that the constraint “it should be novel” is too fuzzy. Fuzzy constraints are the enemy – they either get compromised on as you work or you constantly second guess yourself about them. If you try to follow the above plan you will usually end up with something where you’re not really sure it’s good and you’re not really sure it’s novel (regardless of how good or novel it actually is if you’re anything like me on this front). You end up with something mostly like the attempt to optimise for the combination of both but with more self-doubt.

The following refined plan is better:

1. Decide what “good” looks like here.
2. Decide how what you make will be novel.
3. Make something good that is also novel in the prescribed way.

This is better. It solves the fuzzy constraints problem by making them non-fuzzy.

But it’s still not great. The problem is that it forces you to perform most of your novelty up front when you least understand the problem. That’s not really how creativity works – often the most interesting ideas will only occur to you after you’ve been bashing your head against the problem for a while.

But you can get around that quite easily: The idea is to not come up with a specific novel feature out of thin air, but instead to create constraints that when satisfied will automatically produce novelty.

That is, instead of deciding how what you make is going to be novel, you invert it. You decide how it won’t be like the prior art.

You do this by producing a set of principles with which the work must comply, generated by something that looks roughly like the following algorithm:

1. Pick some prior art you like.
2. Find something about it you don’t want to emulate. If there’s something about it you actively don’t like then that’s great, use that. If not just find some other way it would be interesting to be different from it.
3. Create a simple rule that splits the space of possibilities on an interesting axis and avoids that thing.

Iterate this, each time picking prior art that satisfies the rules you have so far, until you’re bored of doing this or you can’t think of anything you like that doesn’t satisfy those rules.

Here are some examples of me doing this:

1. Recommending fantasy that is not like Tolkien and doesn’t contain elves.
2. It is really annoying when QuickCheck and similar fail in erratic ways, so Hypothesis takes as a core design constraint that when you rerun a failing test it should immediately fail with the same error as before.
3. In Programmer at Large, in order to avoid it being like just about every other nerd power fantasy book (which, to be clear, is a genre that I totally read and enjoy as a nerd with power fantasies), the protagonist is specifically designed to not be especially brilliant or competent and just be run of the mill good at their job.
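The Hypothesis example is worth seeing concretely. A minimal property test might look like this (the test itself is my illustration, not from the post). When Hypothesis finds a failing input it saves it to a local example database and replays it first on the next run, which is what makes reruns fail immediately with the same error:

```python
# A minimal Hypothesis property test (illustrative). Failing examples
# are saved to a local database and replayed first on subsequent runs,
# so a failing test fails again immediately and identically.
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sorting_preserves_length(xs):
    assert len(sorted(xs)) == len(xs)

test_sorting_preserves_length()  # runs the property across many generated lists
```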

An important thing to note is that you are not picking prior art that is bad. You are picking prior art that is good. It’s easy to pick on examples you hate, but it won’t produce nearly as interesting results. Just trying to avoid being bad is a recipe for mediocrity. The key here is that we’re trying to be good but different.

This solution almost works, but it has two failure modes that it needs adapting to avoid:

1. You might fail to produce something that is both good and satisfies the rules (either because you couldn’t satisfy all the rules at all, or because you could but the result wasn’t very good).
2. You produced something good that satisfied the constraints but it turns out to not be all that novel and in fact is quite similar to something that already exists in ways that annoy you.

The second is only really a failure mode in the sense that you’ve produced something no better than the original method: You’ve still produced something good, even if it isn’t novel enough. You might be happy with that. One worry is that you could have produced something significantly less good than you otherwise would have without the constraints, but I don’t usually find that’s a problem – often solving in a constrained problem domain is just a good way to make you think about the problem harder, and produces results that are better even when they’re not novel. The relation between creativity and constraints is so well trodden at this point that it’s practically a cliché.

This also tends to mean that in the first failure mode the case to worry about is not really “I produced something that satisfied these constraints but it wasn’t very good”. It’s not that that doesn’t happen, but it tends to happen much less often than you might expect. The real thing to worry about is that you’ve made life too hard for yourself to come up with any solution to all the rules – either because none exists or because it’s too hard to find.

With both of these failure modes you can solve the problem by restarting the process and changing the rules – if you couldn’t satisfy them, figure out some relaxed or different rules. If satisfying them produced something insufficiently novel, you now have a couple of new examples to try to exclude!

Depending on how hard the process is, this might be a perfectly reasonable thing to do. If you’re recommending a book or working on a problem you can solve in an afternoon, this is probably fine. With larger things you can also use whatever is left over from the first iteration as raw material to stop you from having to start completely from scratch.

But even when restarting isn’t that expensive, it’s probably still best not to do it too much.

The “not actually novel” trap is hard to avoid when you don’t have a good idea of what exists. The best fix for this is familiarity with the prior art, but you can also outsource this – if you can’t think of examples satisfying your rules, ask other people for some! Then become familiar with those examples.

For avoiding the trap of creating too restrictive rules, I find that it’s useful to maintain an existence proof as you build your rules: An example that satisfies all the rules you’ve got so far. This can be an existing thing, or a sketch concept of how you could satisfy the rules. When you add a rule you either pick one that the existence proof satisfies or one where you can find a new example satisfying it and all the existing ones.

The reason why this works and isn’t the same as just solving the problem as you go is this: Your existence proof doesn’t have to be good. In many ways it’s often better if it’s bad because that gives you a starting point for the next part of the creative process: Why is this bad and can I fix that?

Instead you can just solve it through brute force: Solve the problem directly in front of you and do the simplest thing that can possibly work without thinking about the bigger picture. All you’re trying to do is show that a solution is possible, not find a good one right now.

I think of a lot of the early versions of Hypothesis as being like this. The modern implementation of Hypothesis only became possible because it passed through a number of intermediate solutions that were kinda awful but demonstrated that a solution was possible.

So, to put this all together:

1. Maintain a list of rules with accompanying reference examples, which starts empty, and an existence proof, which is any example of the thing you’re trying to create that satisfies all the rules (regardless of whether it is good or not).
2. Come up with an example of something you like that satisfies all the existing rules. This might be your existence proof or it might be something else. If you can’t think of one, ask people. If they can’t think of one either, proceed to step 4.
3. Come up with a simple rule that excludes the example you came up with, along with an existence proof that it can be satisfied along with all your existing rules (this may be your existing existence proof). Add the rule to your list along with this as its reference example, update your existence proof if necessary, then go back to step 2.
4. Try to create something good that satisfies all the rules.
5. If you created something good, check if you still think it’s novel. If it’s not, go back to step 2 with it as the example. If it is, stop. You’re done. Congratulations, you made a good thing!
6. If you failed to create something good, try to figure out what the rule or rules at fault were and see if you can modify them in such a way that they still exclude their reference examples but avoid the blockage you encountered. Then go back to step 2.
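The judgements in these steps (is it good? is it novel?) are human ones, but the bookkeeping is simple enough to sketch. This is a toy model of mine, not part of the original method; the book titles in the usage example follow the fantasy-recommendation rule from earlier:

```python
# Toy bookkeeping for the rule-building loop: the creative judgements
# (is this good? is this novel?) stay with the human; the code only
# tracks rules, their reference examples, and the existence proof.

class Project:
    def __init__(self):
        self.rules = []              # list of (rule, reference_example) pairs
        self.existence_proof = None  # any example satisfying every rule so far

    def add_rule(self, rule, reference_example, existence_proof):
        """Record a rule that excludes reference_example, together with a
        witness showing all rules (old and new) can still be met at once."""
        self.rules.append((rule, reference_example))
        self.existence_proof = existence_proof

# Hypothetical use, following the "fantasy without elves" example:
p = Project()
p.add_rule("no elves", reference_example="The Lord of the Rings",
           existence_proof="A Wizard of Earthsea")
```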

I have very rarely (never, I think, though now that I’ve written it down I might try it) actually sat down and followed the above steps. I’ve probably never even done something that perfectly executed those steps implicitly – I’m much worse at searching the existing literature than that would imply, even though the results are usually better when I do – but I think they capture a core process pretty well nevertheless.

And that process is extremely beneficial. It forces you onto a path which seeks out novelty, and by doing so through creating constraints it does a very good job of inspiring the levels of creativity that are required to create that novelty.

It’s also another technique that is very good for learning while doing. The focus on rules and constraints forces you to learn a lot about how the problem space fits together. Although it elides a lot of details in the steps (how do you produce something good within the constraints? How do you come up with a rule?), by giving you more focused questions to answer and explicitly exploring the shape of the problem I suspect you’ll learn a lot better than just seeking to produce something good will.

Ultimately regardless of how closely you follow the specific steps, I think the ideas here are important, and thinking in terms of them will help many and possibly most creative processes.

(And if you found this useful, and would like me to keep making good things on this blog, you’d be very welcome to donate to my Patreon for supporting my blogging to say so regardless of whether that’s novel)


# Underestimating the inductive step

Quoting Mark Jason Dominus:

Ranjit Bhatnagar once told me the following story: A young boy, upon hearing the legend of Milo of Croton, determined to do the same. There was a calf in the barn, born that very morning, and the boy resolved to lift up the calf each day. As the calf grew, so would his strength, day by day, until the calf was grown and he was able to lift a bull.

“No,” said Ranjit. “A newborn calf already weighs like a hundred pounds.”

Usually you expect the induction step to fail, but sometimes it’s the base case that gets you.

A newborn calf actually weighs more like 60 pounds, which is only 27kg. For an adult that’s not that hard to lift over your head, but for a young boy it probably is.

But a moderately strong adult is probably going to fail at the inductive step instead.

I can probably lift a newborn calf. If I lifted it every day, I would have a very annoyed calf. But also at some point I would stop being able to.

I offer two pieces of evidence in support of this: The first is that an adult cow weighs substantially more than any world weightlifting record, let alone the records for lifting something above your head. If this worked, someone would have beaten those records. Therefore nobody will be able to lift the cow when it is an adult, and therefore by the well-ordering principle there was a first day in that cow’s life on which they could not lift it, even if they followed this program.

The second is: Remember when your parents said “Oof. You’re getting too heavy for me to lift”, despite probably having lifted you on a lot of days as a child?

The problem is that incremental improvement only keeps working if the amount of improvement you can get from the current level is larger than the increment to the next level. With something like the calf story it only works if the amount of improvement you can get from a single day at the current level is greater than the amount the calf will grow in that day. Calves grow fast.
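This condition can be sketched numerically. All the numbers below are made up purely for illustration (linear calf growth, strength gains that decay towards a hard ceiling); the well-ordering argument just guarantees the loop terminates:

```python
# Illustrative numbers only: the calf gains weight linearly, while the
# lifter's strength closes a fixed fraction of the gap to a hard ceiling
# each day. Incremental improvement keeps working only while the daily
# strength gain exceeds the calf's daily gain; eventually it doesn't,
# and this finds the first day the lift fails.

def first_failing_day(calf_birth_kg=27.0, calf_gain_per_day_kg=0.8,
                      strength_kg=60.0, strength_ceiling_kg=120.0):
    day = 0
    while True:
        calf_kg = calf_birth_kg + calf_gain_per_day_kg * day
        if calf_kg > strength_kg:
            return day
        # Diminishing returns: close 1% of the remaining gap per day.
        strength_kg += 0.01 * (strength_ceiling_kg - strength_kg)
        day += 1

print(first_failing_day())  # some first day a few months in; depends on the made-up parameters
```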

This is on my mind a lot recently because I’m working on general fitness and strength again. I don’t plan to lift any calves (I do have access to sheep, but I don’t think they’d like me lifting them very much and I don’t fancy being mobbed by a flock of them), but the same principle of incremental improvement applies. Indeed the book I’m currently semi-following, Convict Conditioning (which has a bunch of good advice, though I find it very hard to take seriously at times), is very explicitly built on this.

But I’m pretty sure that I’m never going to get to the promised end result of being able to do a one handed hand stand push up. As I make it through the program my progress is going to slow. I’ll almost certainly hit and overcome multiple roadblocks en route where different tactics allow me to make progress once the previous ones have plateaued, but ultimately I’m going to hit some basic physical limits of the design of my body.

This isn’t just about physical limits either. The same happens with purely mental skills. I’m not very good at visual design. I would go as far as to say I am awful at visual design. I could definitely improve my design skills to be better than they currently are, and I could probably even arrange things so I improved a bit every day… but the skill level at which I mostly plateau is not actually going to be very high, owing to certain eccentricities and failure modes of how my brain works (mild forms of dyspraxia and aphantasia if you’re interested).

A lot is made of the Growth Mindset. I am… let’s say sceptical of some of the claims, but let’s take it at face value: You will improve more if you believe in an expandable notion of intelligence than a fixed one. It’s at least plausible.

But even if it’s true it’s still important to bear in mind that the amount of growth is bounded. And that means that over time growth is always going to get harder.

This might be a depressing thought for you. I like to think of it as just realistic, but the boundary between the two can be somewhat hard to find.

But depressing or realistic I think it’s important, and it can and should guide the decisions we make.

It’s easy to let optimistic thoughts like “We only need to grow 1% per day to succeed big!” (usually accompanied by the formula $$1.01^{365} \approx 37.78$$) get in the way, but you can’t actually keep growing at 1% per day indefinitely. A plan to add one push up each day might result in you hitting 100 pushups, but most people find that at some point adding that extra push up becomes remarkably hard (I’ve yet to make it past 40, and that requires some cheating on my part).
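A quick sketch of the difference: compare the naive compound curve with one where the same daily rate is throttled by how close you are to a plateau. The plateau value here is arbitrary, chosen only to make the point:

```python
# Naive "grow 1% a day" versus growth that saturates. Both use the same
# daily rate; the capped version scales each day's gain by the headroom
# left below an (arbitrary, illustrative) ceiling.

def compound(days, rate=0.01, start=1.0):
    return start * (1 + rate) ** days

def capped(days, rate=0.01, start=1.0, ceiling=40.0):
    level = start
    for _ in range(days):
        level += rate * level * (1 - level / ceiling)  # logistic-style growth
    return level

print(compound(365))  # the optimistic ~37.78x
print(capped(365))    # well short of the ceiling, and the gap only widens
```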

Whenever you’re looking at an opportunity to improve, the inductive step lies to you because you’re starting quite near to the point where it’s easiest (it’s usually not easiest right at the beginning, but you make rapid gains to the point where it is). Incremental growth looks easy, but the height of the plateau is more telling.

So when you want to improve at something, a better question to ask is not “How much can I improve per day?” but “What is a level I would be happy to reach, and what would be a realistic plan to get me there?”. Incremental improvement is certainly going to be part of the answer to the second question, but if it’s the only answer you have then that goal should be pretty close to achievable already.

That’s all I have to say on the subject for now, but I will leave you with another story, due to Raymond Smullyan. You can think of this as the coinductive version of the problem if that’s your thing:

A certain man was in quest of immortality. He read many occult books on the subject, but none of them gave him any practical advice on how to become immortal.

Then he heard of a certain great sage of the East who knew the true secret of immortality. It took him twelve years to find the sage, and when he did, he asked, “Is it really possible to become immortal?”

The sage replied, “It is really quite easy, if you just do two things.”

“And what are they?” the man asked quite eagerly.

“First of all”, replied the sage, “from now on, you must always tell the truth. You must never make a false statement. That’s a small price to pay for immortality, isn’t it?”

“Secondly,” continued the sage, “Just say ‘I will repeat this sentence tomorrow'”.


# How to learn a new city

Epistemic status: I’ve no real experience here. This is just what I’ve been doing. It seems to work pretty well? It might not work for you.

So as you may have gathered I’ve moved to Zürich recently. A lot of the last few weeks have been me going “Halp how do I find my way around here?”.

It’s now at the point where I think I’ve got a handle on the basics. There’s still a lot I’m confused by (I can’t even pronounce the street names correctly), but I no longer feel too bewildered about how to get where I need to go.

So, here’s what I’ve been doing. It’s less of a unified theory of exploration and more a collection of helpful things.

The basic principles are:

1. Walk everywhere
2. Use Google maps (or your favourite equivalent), but use it sparingly
3. Be goal directed, but vary your route
4. Keep an eye out for places you’ve seen before

### Walking everywhere

(I’m aware not everyone can do this. I don’t really have any good suggestions there, sorry. The problem of reporting on what I do that works for me without any broader experience is that it’s generally quite tailored to my particular capabilities)

The basic motivation is that I’ve found walking is vastly better for getting to grips with a route than anything else. When cycling (or, I imagine, driving) you have too much coming at you and everything is too fast to properly take in. Public transportation, while a hallmark of modern civilization, is actively harmful for helping you understand the geography of somewhere. Walking puts everything straight into your spatial memory and helps you get a feel for how it all connects up. Lots of bits of London didn’t make much geographical sense to me until I stopped considering them in terms of which tube station they were near.

You should also learn the public transport network of course. Sometimes you want to go further than you can realistically walk, and also once you know how to walk from A to B quite well you don’t need to keep relearning that route and can start to think about how to make your trip more efficient. You just avoid public transport when you’re trying to learn the area you’d be travelling through.

### Maps with GPS

This is for three main reasons:

1. Getting a rough idea of a good route before you set out, when you don’t really know where you’re going (you should ideally not follow that route precisely, but it gives you a good sketch to start with). Don’t do this unless you’re sure you need to.
2. Occasionally confirming you are where you think you are so that you don’t learn wrong information
3. As a panic button

3 is the important one. Basically: the essence of learning is experimentation, and the main thing you need in order to experiment is the ability to afford failure. If it’s not costly to fail, it’s not costly to try things. Having GPS to tell you where you are and how to get where you need to be means you can never get too badly lost, and you are thus free to explore. This is really important.

(Note: You should also determine whether any areas of the city are unsafe and worth avoiding. Zürich is very safe so I haven’t had to worry about that. If you’re unsure, ask for advice and only explore when it’s light out)

### Goal directed

This is my standard approach to learning, so it’s not surprising I’d promote it here. Essentially I find that figuring out the questions you need to be able to answer, and then learning enough to answer them, is by far the most useful way to learn a subject. You can explore aimlessly if you want (if you do, I recommend taking a companion along, but that’s mostly because I find aimless wandering on my own boring and will tune out and not learn anything).

So yeah, basically you should be asking lots of questions of the form “I am at A. How do I get to B?”. You can use maps for this if you like, or you can go by dead reckoning.

Try not to go by a route you’ve already done. Learning how lots of different bits connect up is very helpful – I find the more connections I know the better the whole thing sticks in my memory and makes sense.

### Landmarks

This is again about the connections thing: it’s very useful to be able to go “Oh, that’s where I am” and tie little bits of knowledge together. These don’t have to be fancy scenic landmarks or anything – I actually find distinctive shops one of the better things for this, though churches and giant cranes are useful too.

### And the rest…

I’m sure there are lots of other useful techniques. To be honest, I’d love to know what they are, because this seems to be working much less well for my new office than it has for my new city, so maybe there are some useful tricks I’m missing…

This entry was posted in Open sourcing my brain.

# My algorithm for deciding what to cook

I don’t really follow recipes. I sometimes read recipes for inspiration, but I rarely end up following them more than even vaguely.

Instead I just haphazardly thrash around until I come up with something to cook. This works surprisingly well.

It occurred to me earlier that the way I do this is a greedy algorithm. As well as being an interesting insight, this is an amusing pun (here, let me explain the joke: you see, a greedy algorithm is a solution-finding strategy from computer science, but someone who eats lots of food is also “greedy”. Therefore the humour derives from the double meaning of the word greedy in this context: it is both accurate from an algorithm design point of view and also carries the hidden implication that you are going to eat lots of food. Is this funny yet? I can explain more if you like).

That is to say it works by maintaining a set of ingredients. The algorithm is then:

1. Find an ingredient which would go well with the existing set of ingredients.
2. If I am in the mood for that ingredient, accept it and add it to the set.
3. Repeat until the current set of ingredients seems like it could be turned into a complete meal.

This seems to produce consistently good results.

The problem is that it requires a good implementation of step 1 in order to function. I think mine basically performs rejection sampling on the set of available things (i.e. wandering randomly through the supermarket / browsing through my cupboard / fridge) until I find something that catches my eye and makes me go “Oooh. That could work”. The empty set is a special case here, where it requires finding out what I’m in the mood for.
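For concreteness, here’s a minimal sketch of that loop in Python. The compatibility table, the coin-flip “mood” check, and the “four ingredients make a meal” cutoff are all made-up stand-ins for what actually happens in my head:

```python
import random

# A toy compatibility table: which ingredients plausibly go together.
# The pairings here are illustrative, not a real flavour model.
GOES_WITH = {
    "gnocchi": {"aubergine", "tomato", "capers", "thyme", "garlic"},
    "aubergine": {"gnocchi", "tomato", "garlic", "thyme", "chilli"},
    "tomato": {"gnocchi", "aubergine", "capers", "garlic", "chilli"},
    "capers": {"gnocchi", "tomato", "garlic"},
    "garlic": {"gnocchi", "aubergine", "tomato", "capers", "chilli", "thyme"},
    "thyme": {"gnocchi", "aubergine", "garlic"},
    "chilli": {"aubergine", "tomato", "garlic"},
}

def goes_well(candidate, chosen):
    """Step 1's filter: the candidate must pair with every ingredient so far.
    Vacuously true for the empty set, so anything can start the meal."""
    return all(candidate in GOES_WITH.get(item, set()) for item in chosen)

def in_the_mood(candidate):
    """Step 2: a coin flip standing in for 'does this catch my eye?'"""
    return random.random() < 0.5

def plan_meal(available, enough=4, max_tries=200):
    """Greedily build an ingredient set by rejection sampling the shelves."""
    chosen = []
    for _ in range(max_tries):
        if len(chosen) >= enough:  # step 3: looks like a complete meal
            break
        candidate = random.choice(available)  # wander the supermarket
        if candidate in chosen:
            continue  # already in the basket
        if goes_well(candidate, chosen) and in_the_mood(candidate):
            chosen.append(candidate)  # step 2: accept it
    return chosen

print(plan_meal(list(GOES_WITH)))
```

It’s greedy in the technical sense because each accepted ingredient is kept forever: there’s no backtracking to reconsider an earlier choice if it paints you into a corner later.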

It also requires being able to figure out what goes well together without actually trying it. Some people seem to find this hard. All I can offer as advice is that a protracted period of vegetarianism in which you can’t eat cheese works really well for developing this skill (at least that’s how I did it). The problem with meat and cheese is that they constitute a dominant flavour for the dish, so it’s very easy to just make them the centrepiece and not do much else. Without that as a crutch, what you make will tend to be very boring unless you figure out how to make a variety of different flavours work well together, so you’re forced to learn out of survival instinct. It’s not dissimilar to the immersive way of learning a language, I imagine.

Here is a dish I made recently that resulted from this:

### Gnocchi with aubergine/tomato/caper sauce

Gnocchi is self-explanatory. The sauce is as follows:

• approx 1 kg aubergine
• approx 50 g butter
• “some” olive oil
• coarse salt “to taste” (sorry, I know. But I have no idea how much I used, except I tend to salt things quite heavily)
• about 5 tsp of capers
• 1 pretty embarrassingly weak red chilli (note: the recipe needs more chilli than I used)
• fresh thyme, until I got bored stripping it off the stems
• 800 g canned chopped tomatoes

It proceeded in two stages. To be honest, I’m a little disappointed in the second stage because the first was so amazing and the end result was merely really good. You might want to stop halfway through and just eat the aubergine bit.

Step 1:

Cut the aubergine into cubes of roughly 1 cm. Peel but don’t crush the garlic. Chop the chilli without removing the seeds (you may wish to remove the seeds if you have a real chilli rather than the pathetic imitation chillis I found in a Swiss supermarket). Put these in a pressure cooker with the butter, enough olive oil that the aubergine is lightly oiled but not soaked in it, and as much thyme as you can be bothered with. Stir it all up, then pressure cook for 10 minutes once the pressure is up.

The result was basically perfect salty garlicky soft cooked aubergine. The pressure cooker basically fixed all the pathologies of cooking aubergine where there’s a complex dysfunctional middle ground between undercooked and burned.

Step 2 is simply to add the tomatoes and capers, stir and then pressure cook for another 5 minutes.

The result is a really nice garlicky sharp sauce a little reminiscent of puttanesca (it didn’t have olives, but they’d probably have been a good addition now that I think about it).

The basic starting point of this recipe was gnocchi, found while browsing the supermarket. I then added the aubergine, and everything else just built up around that.

Of course, now I’m using a subtly different algorithm for tonight’s dinner: The aubergine intermediate step was really good. What could I serve that with? (The answer BTW is that it’s going to be served with a quinoa done with dill, lemon and feta plus a side of fresh made guacamole. I’m pretty excited by this plan).

This entry was posted in Food, Open sourcing my brain.