Testing the installer for your Python library

I’m quite proud of Hypothesis’s test suite. I make zero claims of it being perfect, and it could obviously be better (though it does have 100% branch coverage, which is nice), but it’s good enough that it gives me a lot of confidence to make changes, trusting that the tests will catch it if I break something.

So, uh, it was pretty embarrassing that I shipped a version of Hypothesis that couldn’t actually run.

I’d manually tested the installer for the 0.2.0 release, but when pushing the 0.2.1 release I made a bit of a screw-up and included some changes I hadn’t meant to. These broke the installer because they added some new packages without adding them to setup.py. I forgot to manually test the installer because I thought the change was trivial, and voilà: one broken package on PyPI.

This is of course the general problem with manual testing: You need to remember to do it.

So on the general philosophy that bugs are things that you should respond to by writing automated tests to ensure they don’t happen again, I’ve now got a test for Hypothesis’s installer.

It’s not especially sophisticated. All it does is create a clean virtualenv, build an sdist, install the resulting package into the virtualenv, and then run some code to make sure the library is working correctly. You can see it here (or here for the latest version). This is then run on Travis, so any build without a working installer is a failed build and will not be shipped.
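
I haven’t reproduced the actual script here, but a minimal sketch of the idea in Python might look like the following. The env path ("testenv") and the Unix-style bin/ paths are assumptions for illustration, not what the real script uses:

    # A minimal sketch, not the actual Hypothesis test script.
    import glob
    import os
    import subprocess
    import sys

    # Build a source distribution into dist/.
    subprocess.check_call([sys.executable, "setup.py", "sdist"])
    # Pick the most recently built archive.
    sdist = max(glob.glob("dist/*.tar.gz"), key=os.path.getmtime)

    # Create a clean virtualenv so nothing from the dev environment leaks in.
    subprocess.check_call(["virtualenv", "testenv"])

    # Install the package from the sdist, not from the source tree.
    subprocess.check_call(["testenv/bin/pip", "install", sdist])

    # Run the import from inside the virtualenv directory, so the source
    # checkout isn't on the path and can't mask a broken install.
    subprocess.check_call(["testenv/bin/python", "-c", "import hypothesis"],
                          cwd="testenv")

The last step is the important one: if setup.py is missing a package, installing from the sdist and then importing from a directory that doesn’t contain the source tree fails loudly, which is exactly the class of bug that slipped through here.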

It’s entirely possible that this is actually something everyone does already and I just didn’t know about it (some Googling suggests not), but if not, I highly recommend doing it for any Python libraries you maintain. It was super easy to set up and saves both time and potentially embarrassing mistakes.
