Epistemic status: Still thinking this through. This is a collection of thoughts, not an advocacy piece.
I’ve previously been pretty against TDD. It is possible that this has always been based on a straw understanding of what TDD is supposed to be for, but if so that is a straw understanding shared by a number of people who have tried to sell me on it.
I am currently moving towards a more nuanced position of “I still don’t think TDD is especially useful in most cases but there are some cases where it’s really amazingly helpful”.
Part of the source of my dislike of TDD has, I think, come from underlying philosophical differences. A test suite has two major purposes:
- It helps you prevent bugs in your code
- It acts as executable documentation for your code
As far as I am concerned, the important one is the first. People who think their test suite is a good substitute for documentation are wrong. Your code is not self-documenting. If you haven’t written actual for reals documentation, your code is not documented no matter how good your test suite is.
And my belief has always been and remains that TDD is actively harmful for using a test suite as the first purpose. Good testing is adversarial, and the number one obstacle to good testing (other than “not testing in the first place”) is encoding the same assumptions in your tests as in your code. TDD couples writing the tests and the code so closely that you can’t help but encode the same assumptions in them, even if it forces you to think about those assumptions more clearly.
I am aware of the counter-argument that TDD is good because it ensures your code is better tested than it otherwise would be. I consider this to be true but irrelevant, because mandating 100% coverage has the same property but forces you to maintain a significantly higher standard of testing.
So if TDD is harmful for the purpose of testing that matters, it must be at best useless and at worst harmful, right?
As far as I’m concerned, right. If your goal is a well tested code base, TDD is not a tool that I believe will help you get there. Use coverage instead.
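If you're in Python land, one way to mandate the coverage standard described above is the pytest-cov plugin for pytest (the package name `mypackage` here is a placeholder, not anything from this post):

```shell
# Fail the test run outright if any line of mypackage goes unexecuted.
pytest --cov=mypackage --cov-fail-under=100
```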
But it turns out that there are benefits to TDD that have absolutely nothing to do with testing. If you think of TDD as a tool of thought for design which has absolutely nothing to do with testing whatsoever then it can be quite useful. You then still have to ensure your code is well tested, but as long as you don’t pretend that TDD gets you there, there’s nothing stopping you from using it along the way.
Using tests to drive the design of your API lets you treat the computer as an external brain, and provides you with a tool of thought that forces you to think about how people will use your code and design it accordingly.
The way I finally came to realise this was via two related design tools I have recently been finding very useful:
- Writing documentation and fixing the bits that embarrassed you when you had to explain them
- Making liberal use of aspirational examples: starting a design from “Wouldn’t it be great if this worked?” and seeing if you can make it work.
TDD turns out to be a great way of combining both of these things in an executable (and thus checkable) format:
- The role of tests as executable documentation may not actually be a valid substitute for documentation, but it happily fills the same niche in terms of making you embarrassed when your API is terrible by forcing you to look at how people will use it.
- A test is literally an executable aspirational example. You start from “Wouldn’t it be great if this test passed?” and then write code to make the test pass.
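To make that concrete, here is a minimal sketch (all names invented, not from any real library): first the aspirational test, written as if the API already existed, then just enough code to make it pass.

```python
# Just enough implementation to satisfy the aspirational test below.
class Connection:
    """Stub connection object; the bare minimum to make the test pass."""
    def __init__(self, path):
        self.path = path
        self.closed = False

    def close(self):
        self.closed = True

def connect(path):
    # "Wouldn't it be great if this worked?" Now, minimally, it does.
    return Connection(path)

# The aspirational test, which in practice you would write first:
def test_can_connect_and_close():
    conn = connect("example.sqlite")
    conn.close()
    assert conn.closed

test_can_connect_and_close()
```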
When designing new segments of API where I’ve got the details roughly together in my head but am not quite clear on how the specifics should all fit together, I’ve found that using tests can be very clarifying. This results in a workflow that looks close to, but not exactly like, classic TDD.
The workflow in question is as follows:
As per classic TDD, work is centered around features. For example, if I was designing a database API, the following might be features:
- I can connect to the database
- I can close a connection to the database
- I can create tables in the database
- I can insert data into the database
- I can select data from the database
Most of these are likely to be a single function. Some of them are probably two or three. The point is that as with classic TDD you’re focusing on features not functions. I think this is a bit more coarse grained than advocated by TDD, but I’ve no idea how TDD as she is spoken differs from TDD as described.
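As a hypothetical sketch (all names invented) of how that feature list might map onto code, note that the mapping is feature-to-functions, not one-to-one:

```python
# One feature per method-or-function, but a feature is not always
# exactly one function: "connect" here involves both a function and
# a class, for instance.
class Database:
    def close(self):                        # feature: close a connection
        ...

    def create_table(self, name, columns):  # feature: create tables
        ...

    def insert(self, table, row):           # feature: insert data
        ...

    def select(self, table, where=None):    # feature: select data
        ...

def connect(path):                          # feature: connect to the database
    return Database()
```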
Working on an individual feature involves the following:
1. Start from code. Define all the types and functions you think you’ll need for this stage. Each function should raise some error: InvalidArgument or similar is a good one, but any fatal condition you can reasonably expect to happen when calling that function is fine. If there is really no possible way a function could raise an exception, return some default value like 0, “” or None.
2. Write lots of tests, not just one, for all the cases where those functions should be failing. Most of these tests should pass immediately, because e.g. they’re asserting that your function raises an invalid argument when you don’t specify a database to connect to. Your stub considers all arguments to be invalid, so this test is fine!
3. For any tests for error conditions that do not currently pass, modify the code to make them pass. This may require you to flesh out some of your types so as to have actual data.
4. Now write some tests that shouldn’t error. Again, cover a reasonable range of cases. The goal is to sketch out a whole bunch of example uses of your API.
5. Now develop until those tests pass. Any edge cases you spot along the way should immediately get their own test.
6. Now take a long hard look at the tests for which bits of the API usage are annoying and clunky. Improve the API until it does not embarrass you. This may and probably will require you to revise earlier stages as well, and that’s fine.
7. If you’re feeling virtuous (I’m often not and leave this to the end), run coverage now and add more tests until you reach 100%. You may find this requires you to change the API and return to step 5.
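Here is what the first couple of steps might look like for the connection feature, as a minimal sketch with invented names. Because the stub raises InvalidArgument unconditionally, the error-condition tests pass before any real logic exists, and from then on they act as constraints that must keep passing.

```python
class InvalidArgument(Exception):
    pass

def connect(path=None):
    # Stub from the first step: *all* arguments are considered invalid.
    raise InvalidArgument("not implemented yet")

# Tests from the second step: cases where connect *should* fail.
# These already pass against the stub above.
def test_connect_rejects_missing_path():
    try:
        connect()
    except InvalidArgument:
        return
    raise AssertionError("expected InvalidArgument")

def test_connect_rejects_empty_path():
    try:
        connect("")
    except InvalidArgument:
        return
    raise AssertionError("expected InvalidArgument")

test_connect_rejects_missing_path()
test_connect_rejects_empty_path()
```

As the real implementation grows, `connect` starts accepting valid arguments, but these tests pin down the error behaviour you committed to up front.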
Apply this to each stage in turn, then apply a final pass of steps 6 and 7 to the thing as a whole.
This isn’t very different from a classic TDD workflow. I think the units are more coarsely grained, and the emphasis on testing error conditions first means that you tend to start with tests which are passing and act as constraints that they should stay passing, rather than tests that are failing which act as drivers to make them pass, but it’s, say, no more than a standard deviation out from what I would expect normal TDD practice to look like.
The emphasis on error conditions is a personal idiosyncrasy. Where by personal idiosyncrasy I mean that I am entirely correct, everyone else is entirely wrong, and for the love of god people, think about your error conditions, please. Starting from a point of “Where should this break?” forces you to think about the edge cases in your design up front, and acts as something of a counterbalance to imagining only the virtuous path and missing bugs that happen when people stray slightly off it as a result.
So far this approach has proven quite helpful in the cases where I’ve used it. I’m definitely not using this for all development, and I wouldn’t want to, but it’s been quite helpful where I need to design a new API from scratch and the requirements are vague enough that it helps to have a tool to help me think them through.