# What is good code?

A long long time ago, in an office building a couple of miles away, I was asked this question in an interview. I’m not sure how much they cared about the answer, but mine was more glib than it was useful (I got the job anyway). I misquoted Einstein and said that good code was code that was as simple as possible but no simpler.

This was not a very satisfactory answer to me.

For, uh, reasons this question has been on my mind recently and I think I’ve come up with an answer that satisfies me:

Good code is code that I am unlikely to need to modify but easily could if I wanted to.

Of course, good code is probably only roughly correlated with good software.

This entry was posted in programming.

# Different types of overheads in software projects

I mentioned this concept on Twitter in a conversation with Miles Sabin and Eiríkr Åsheim last night, and it occurred to me that I’ve never written it up. I call it my quadratic theory of software projects.

It’s one I originally formulated in the context of programming languages, but I’ve since decided that that’s over-simplistic and really it’s more about the whole project of development. It probably even applies perfectly well to things that are not software, but I’m going to focus on the software case.

Consider two properties. Call them “effort” and “achievement”, say. If I wanted to attach a more concrete meaning to those, we could say that “effort” is the number of person hours you’ve put into the project and “achievement” is the number of person hours it would have taken an optimal team behaving optimally to reach the same point. The exact meanings don’t matter, though – I only mention these to give you an idea of what I’m thinking of with the terms.

The idea is this: if you plot your progress on a graph, with achievement on the x axis and the amount of effort it took you to get there on the y axis, what you get is roughly a quadratic.

This isn’t actually true, because often there will be back-tracking – the “oh shit that feature is wrong” bit where you do a bunch of work and then realise it wasn’t necessary. But I’m going to count that as achievement too: You developed some stuff, and you learned something from it.

It’s also probably not empirically true. Empirically the graph is likely to be way more complicated, with bits where it goes surprisingly fast and bits where it goes surprisingly slow, but the quadratic is a useful thought tool for thinking about this problem.

Well, a quadratic has three parts. We’ve got $$y = A + B x + C x^2$$. In my model, $$A, B, C \geq 0$$.

And in this context, each of those three parts have a specific meaning:

The constant component (A) is the overhead you had to pay to get started in the first place – planning, familiarising yourself with the toolchain, setting up servers, etc.

The linear factor (B) is how hard it is to actually make progress – for example, if you’re developing a web application in C++ there’s an awful lot of difficulty in performing basic operations, so this factor could be quite high. Other factors that might make it high are requiring a detailed planning phase for every line of code, requiring a 10:1 lines of test code to lines of application code, etc.

To use an overused word, the quadratic factor is essentially a function of the modularity of your work. In a highly modular code base where you can safely work on part of it without having any knowledge of most of the rest, the quadratic factor is likely to be very low (as long as these parts are well correlated with the bits you’re going to need to touch to make progress! If you’ve got a highly modular code base where in order to develop a simple feature you have to touch half the modules, you’re not winning).

There are also other things that can contribute to this quadratic factor, e.g. the amount that you have to take historical context into account: if a lot of the reasons why things are done the way they are are historical, then there is a linear amount of history you need to take into account to do new work. These all essentially work out as the same sort of thing though: the fraction of what you’ve already done that you need to take into account in order to do new things.

So here’s the thing: Your approach to development of a project essentially determines these values. A lot of different aspects will influence them – who your team members are, what language you’re working in, how many of you there are, how you communicate, whether you’re doing pair programming, whether you’re doing test driven development, how you’re doing planning, etc and etc. Almost everything you could call “development methodology” factors in somehow.

And if you compare two development methodologies you’d find in active use for a given problem, it’s going to be pretty rare that one of them is going to do better (i.e. have a lower value) on all three of these coefficients: Some approaches might have a lower constant and linear overhead but a remarkably high quadratic overhead, some approaches might have a very high constant overhead but a rather low linear and quadratic, etc. Generally speaking, something which is worse on all three is so obviously worse that people will just stop doing it and move on.

So what you end up doing is picking a methodology based on which constants you think are important. This can be good or bad.

The good way to do this is to look at your project size and pick some constants that make sense for you. For small projects the constant costs will dominate, for medium projects the linear costs will dominate, and for large projects the quadratic costs will dominate. So if you know your project is just a quick experiment, it makes sense to pick something with low constant and linear costs and high quadratic costs, because you’re going to throw it away later (of course, if you don’t throw it away later you’re going to suffer for that). If you know your project is going to last a while, it makes sense to front-load on the constant costs if doing so lets you reduce the quadratic cost. In between, you can trade these off against each other at different rates – maybe gain a bit on linear costs by increasing the quadratic cost slightly, etc.
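
To make the trade-off concrete, here is a toy sketch of the model with two invented methodologies – all the coefficient values are made up purely for illustration:

```python
# Toy illustration of the quadratic model: effort = A + B*x + C*x**2,
# where x is "achievement". The coefficients below are invented.
def effort(x, A, B, C):
    return A + B * x + C * x ** 2

hack = dict(A=1, B=2, C=0.5)    # cheap to start, poorly modular
plan = dict(A=20, B=1, C=0.01)  # front-loaded cost, low quadratic term

# For a small project the quick hack wins; for a large one it loses badly.
print(effort(5, **hack), effort(5, **plan))    # 23.5 vs 25.25
print(effort(50, **hack), effort(50, **plan))  # 1351.0 vs 95.0
```

The crossover point is exactly the sort of thing you are implicitly estimating when you pick a methodology for a project of a given expected size.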

The bad way to do it is to automatically discount some of these as unimportant. If you just ignore the quadratic cost, you will find that your projects get mysteriously bogged down once you hit a certain point. If you’re too impatient to pay constant costs and just leap in and start hacking, you may find that the people who sat down and thought about it for a couple of hours first end up sailing past you. If you think that scalability and the long-term stability of the project are all that matter, then people who decided that day to day productivity also mattered will probably be years ahead of you.

Sadly, I think the bad way to do it is by far the more common. I’ll grant it’s hard to predict the future of a project, and that the relationship between methodologies and these values can be highly opaque, which can make this trade-off hard to analyse, but I think things would go a lot better if people at least tried.

This entry was posted in programming, rambling nonsense.

Peter Seibel wrote a piece about code reading recently. It’s a good piece which meshes well with my experience of code reading, and it got me thinking about how I do it.

I think there are three basic tenets of my code reading approach:

1. The goal of code reading is to learn how to modify the code. Sure, your ultimate goal might be to understand the code in some abstract sense (e.g. because you want to use the ideas elsewhere), but ultimately code you don’t know how to modify is code you probably don’t understand as well as you think you do, and code you do know how to modify is code you probably understand better than if you’d merely set out to understand it.
2. The meaning of code is inextricably tied with its execution. In order to understand code you need to be able to follow its execution. You can do a certain amount of this in your head by manually tracing through things (and you will need to be able to), but you have a machine sitting in front of you designed to execute this code and you should be using it for that. For languages with a decent debugger, you even have a machine sitting in front of you which will execute the code and show you its working. For languages without a decent debugger (or setups where it’s hard to use one), you can still get a hell of a lot of mileage out of the humble print statement.
3. Ask many small questions. Ignore everything you do not need to answer the current question.

Many people completely rewrite code in order to understand it. This is an extreme form of learning to modify it – modification through rewriting. Sometimes this is fine – especially for small bits of code – but it’s pretty inefficient and isn’t going to be much help to you once you get above a few hundred lines. Most code bases you’ll need to read are more than a few hundred lines.

What you really want to be doing is learning through changing a couple lines at a time, because then what you are learning is which lines to change to achieve specific effects.

An extremely good tool for learning this is fixing bugs. They’re almost the optimal small question to ask: Something is wrong. What? How do I fix it? You need to read enough of the code to eliminate possibilities and find out where things are actually wrong, and you’ve got a sufficiently specific goal that you shouldn’t get too distracted by bits you don’t need.

If you don’t have that, here are some other small questions you might find useful to ask:

1. How do I run this code?
2. How do I write a test for this code? This doesn’t necessarily have to be in some fancy testing framework (though it’s often nice if it is!). It can just be a script or other small program you can run which will tell you if something goes wrong.
3. Pick a random line in the codebase. It doesn’t have to be uniformly at random – a good algorithm might be to pick one half of a branch in a largish function in a module you’re interested in. How do I get that line to execute? Stick an assert false in there to make sure the answer is right. If there’s a test suite with low coverage, try finding an uncovered line and writing a test which executes it.
4. Pick a branch. What will happen if I invert this branch?
5. Pick a constant somewhere. If I wanted to make this configurable at runtime, what would I need to do?
6. Specific variations on “How can I break this code?”. e.g. in C “Can I get this code to read from/write to an invalid address?” is often a useful question. In web applications “Can I cause a SQL/XSS/other injection attack?” is one. This forces you to figure out how data flows into the system through various endpoints, and if you succeed in finding such a bug then you also get to figure out how to fix it.
7. How can I write a test to verify this belief I have about the code?
8. What would I need to change to break this test?
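
Question 7 above can be as lightweight as a short script that exits loudly when the belief is wrong – no framework required. A minimal sketch (the belief being checked here, about Python’s standard json module, is purely illustrative):

```python
# A framework-free test for a belief about the code under study.
# Belief (illustrative): the json module round-trips dict key order.
import json

data = {"b": 2, "a": 1}
round_tripped = json.loads(json.dumps(data))

# The assert fails noisily, printing the actual value, if the belief is wrong.
assert list(round_tripped) == ["b", "a"], round_tripped
print("belief holds")
```

Running the script either confirms the belief or hands you exactly the kind of small, concrete question this approach thrives on.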

This entry was posted in programming.

# How learning Scala made me a better programmer

People will often tell you how learning functional programming / C / their current favourite language paradigm will make you a better programmer even if you don’t actually use it.

I mean, yes, I suppose this is true. It’s very hard for learning new things about programming to not teach you something general purpose, and if a language is unfamiliar enough that it’s not just a syntax change you’re almost certainly going to learn new ways of thinking along with it. I don’t know that it’s the most efficient way to become a better programmer, but it’s definitely not a bad one.

With Scala, though, while there are a bunch of neat features which I still miss and occasionally emulate badly, I can point to a much more concrete and useful way in which it made me a better programmer. Unfortunately a) I don’t think it’s likely to work any more and b) you’re not going to like it.

When I was learning Scala back in… 2006, I think it might have been? Certainly early 2007 at the latest. Anyway, while I was certainly learning interesting things about static typing, object orientation and functional programming (maybe not much of the last one – I already knew Haskell and ML tolerably well), what I actually learned the most from was an entirely different feature of the language.

Specifically that the compiler was ridiculously buggy.

I assume things are much better these days, and I’m sure my memory is exaggerating how bad they were at the time, but it feels like back then the development cycle was compile, run tests, report compiler bug. I think I probably hit more compiler bugs than most people – I’m not sure why; maybe I just had a tendency to push at the edge cases. It could also be that I was just nicer about reporting them than most people, I don’t know. Either way, there was a reasonable period of time when I had reported the most bugs on the tracker (I think Paul Phillips was the one who ousted me. Certainly once he came along he blew my track record out of the water).

Back in the dim and distant past when I wrote “No seriously, why Scala?” someone asked the question “Why isn’t a buggy compiler a show stopper?”. There were some pretty good answers, but really the accurate one is just “Well… because it’s not?”. You pretty quickly learn to get used to a buggy compiler and integrate its bugginess into your workflow – not that you come to depend on it, but it just becomes one of those things you know you have to deal with and compensate for. It potentially slows you down, but as long as you have good habits to prevent it tripping you up, and the rest of the language speeds you up enough to compensate, this isn’t a major problem.

I’m not saying I’d actively seek out a buggy compiler today. Obviously if you can choose not to be slowed down this is better than being slowed down, and no matter how good your procedures for compensating are, eventually you’re going to be hit by a compiler bug in production. If I even need to say it: there are clearly downsides to writing production code with a buggy compiler.

But from the point of view of my development as a programmer it was amazing.

Why?

Well, partly just because being able to write a decent bug report is a nice skill to have – it endears you to maintainers – and this is where I learned a lot of those skills (though it took being on the wrong end of bad bug reports to properly internalise them).

But mostly, I think, because it was really useful to have that much practice finding and submitting bugs. In much the same way that we spend more time reading than writing code, we also spend more time finding bugs than writing them. Having lots of practice at this turns out to be super helpful.

Compiler bugs have an interesting character to them. Most good advice about debugging advises you to not assume that the bug you’re experiencing is in the compiler, or even your system libraries, and that’s in large part because it indeed rarely is, so you tend not to experience this character too often.

What is that character?

It’s simple: You cannot trust the code you have written. You cannot deduce the bug by reading through the code until you get a wrong answer. The code is lying to you, because it doesn’t do what you think it does.

To an extent this is always true. Your brain does not include a perfect implementation of the language spec and the implementations of all the libraries, so the code is always capable of doing something different from what you think it does, but with compiler bugs you don’t even have the approximation to the truth you normally rely on.

“Code is data” is a bit of a truism these days, but with a compiler bug it’s actually true. Your code is not code, it’s simply the input to another program that is producing the wrong result. You are no longer debugging your code, you are manipulating your code to debug something else.

This is interesting because it forces you to think about your code in a different way. It forces you to treat it as an object to be manipulated instead of a tool you are using to manipulate. It gives you a fresh perspective on it, and one that can be helpful.

How do you debug a compiler bug? Well. Sometimes it’s obvious and you just go “Oh, hey, this bit of code is being compiled wrong”. You copy and paste it out, create a fresh test case and tinker with it a little bit and you’re done.

This will probably not happen for your first ten compiler bugs.

Fortunately there’s a mechanical procedure for doing this – the general technique is known as delta debugging – and for some languages there’s even literally a program that implements it. I find it quite instructive to do it manually, but that’s what people always say about tedious tasks that are on the cusp of being automated. Still, I’m glad to have done it.

What is this mechanical procedure?

Simple. You create a test case for the thing that’s going wrong (this might just be “try to compile your code” or it might be “run this program that produced wrong output”. For the sake of minimalism I prefer not to use a unit testing framework here). You check out a fresh branch (you don’t have to do this in source control but there’s going to be a lot of undo/redo so you probably want to). You now start deleting code like it’s going out of fashion.

The goal is to simply delete as much code as possible while still preserving the compiler bug. You can start with crude deletion of files, then functions, etc. You’ll have to patch up the stuff that depends on it usually, but often you can just delete that too.

The cycle goes:

1. Delete
2. Run test
3. If the test still demonstrates the problem, hurrah. Go back to step 1 and delete some more.
4. If the test no longer demonstrates the problem, that’s interesting. Note this code as probably relevant, undelete it, and go delete something else.

Basically keep iterating this until you can no longer make progress.
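
That cycle can be sketched as a simple greedy minimizer – a hand-rolled version of what delta debugging tools automate. In this sketch the test case is a list of lines, and `still_fails` is a stand-in for “run the test and check the bug still reproduces”:

```python
def minimize(lines, still_fails):
    """Greedily delete chunks of a test case while the bug still reproduces."""
    assert still_fails(lines), "the starting test case must demonstrate the bug"
    chunk = max(1, len(lines) // 2)
    while True:
        progress = False
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]
            if candidate and still_fails(candidate):
                lines = candidate  # deletion preserved the bug: keep it deleted
                progress = True
            else:
                i += chunk         # deletion lost the bug: that code is relevant
        if chunk == 1 and not progress:
            return lines           # no single line can be removed any more
        chunk = max(1, chunk // 2)

# Toy usage: the "bug" reproduces whenever the line "BUG" is present.
print(minimize(["a", "b", "BUG", "c"], lambda ls: "BUG" in ls))
```

Real runs replace the lambda with something that writes the candidate out, invokes the compiler or test script, and checks the failure is still the same one – the loop structure is unchanged.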

When you stop making progress you can either go “Welp, that’s small enough” (generally I regard small enough to be a single file under a few hundred lines, ideally less) or you can try some other things. e.g.

1. Inlining imported files
2. Inlining functions
3. Manually constant folding arguments to functions (i.e. if we only ever call f(x, y, 1) in a program, remove the last argument to f and replace it with 1 in the function body)
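
The constant folding step, as a before/after sketch – the function f here is made up for illustration:

```python
# Before: every call site in the program happens to pass 1 as the last argument.
def f(x, y, scale):
    return (x + y) * scale

print(f(2, 3, 1))

# After: drop the parameter and fold the constant into the body.
# If the bug didn't depend on `scale` varying, it should survive this change.
def f_folded(x, y):
    return (x + y) * 1

print(f_folded(2, 3))
```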

The point being that you’re thinking in terms of operations on your code which are likely to preserve the bug. Finding those operations will help you understand the bug and guide your minimization process. Then at the end of it you will have a small program demonstrating it which is hopefully small enough that the bug is obvious.

Is this how I normally debug programs? No way. It’s a seriously heavyweight tool for most bugs. Most bugs are shallow and can be solved with about 10 minutes of reading the code until it’s obvious why it’s wrong.

Even more complicated bugs I tend not to break this out for. What I do take from this though is the lesson that you can transform your program to make the bug more obvious.

Sometimes though, when all else fails, this is the technique I pull out. I can think of maybe half a dozen times I’ve done it for something that wasn’t a compiler bug, and it’s been incredibly useful each time.

All those half dozen times were for a very specific class of bug. That specific class of bug being “ruby”. There’s a hilarious thing that can happen in ruby where namespacing isn’t really the cool thing to do and everything messes around with everyone else’s internal implementation. This potentially leads to some really bizarre spooky interaction at a distance (often involving JSON libraries. *shakes fist*). This technique proved invaluable in getting to the bottom of them. For example, the time I discovered that if your require order was not exactly right, having a JSON column in datamapper would cause everything using JSON to break. That was fun.

But even when I’m not explicitly using these techniques, it feels like a lot of my debugging style was learned in the trenches of the Scala compiler. There’s a certain ramp up when debugging where you start with intuition and pile on increasingly methodical techniques as the bugs get more mysterious, and I think exposure to a lot of really mysterious bugs helped me learn that much earlier in my career than I otherwise would have.

It’s possible that the fact that it was a compiler was irrelevant. It feels relevant, but maybe I’d have learned variations on the technique at any point when I was daily interacting with a large, buggy piece of software. But to date, the 2007-2008 era Scala compiler is the best example I’ve had of working with such, so it’s entirely possible I’d have never learned that skill otherwise, and that would have been a shame because it’s one that’s served me well on many other projects.

This entry was posted in programming.

# A case study in bad error messages

Consider the Python.

I’ve been writing a lot of it recently. It’s mostly quite nice, but there are some quirks I rather dislike.

You may have noticed that I have strong opinions on error reporting.

Python doesn’t do so well on this front. This is sad given that it really loves its exceptions.

An example:

```
>>> float([])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: float() argument must be a string or a number
```

This error message has the lovely property of being both completely unhelpful and a lie.

It is unhelpful because it does not give you any information at all (not even a type) about the value you tried to convert to a float.
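
A sketch of the kind of message that would have helped – a hypothetical wrapper, not anything Python provides – naming the offending value and its type:

```python
def to_float(value):
    """Like float(), but report the offending value and its type on failure."""
    try:
        return float(value)
    except TypeError:
        raise TypeError("could not convert %r of type %s to float"
                        % (value, type(value).__name__))

# to_float([]) now fails with:
#   TypeError: could not convert [] of type list to float
```

With the value and type in hand, the reader of the traceback can usually skip straight to the offending call site.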

It is a lie because in fact all manner of things can be converted to floats:

```
>>> class Foo(object):
...     def __float__(self):
...         return 42.0
...
>>> Foo()
<__main__.Foo object at 0x27ac910>
>>> float(Foo())
42.0
```

I wonder how we could have designed this message to be any more misleading? I’m drawing a blank here.

This entry was posted in programming.