So in my previous post about Hypothesis 1.0 I stated the following priorities:
- A stable API
- Solid documentation
- The ability to get reports about a run out – number of examples tried, number rejected, etc.
- Coverage driven example discovery
Well, uh, I’m still planning on the first two (don’t worry, they’re hard requirements. The release date will slip before I compromise on them).
The second two though? Eh. They just don’t seem that important for a 1.0 release.
The reports would be nice to have. They’re definitely helpful, and they’re certainly important for avoiding useless tests, but I wouldn’t say they’re essential.
But more importantly, I know how to do them without breaking the current API.
After some thought I’ve realised that there are essentially two major categories of things that I need to be in the 1.0 release.
- Things that would require me to break the API
- Things that will get people excited enough about Hypothesis to use it
The second is pure marketing, but marketing is important here. My goal with Hypothesis is to make the world of Python a better tested environment, and if people aren’t actually using Hypothesis it will have failed at that goal, regardless of what an amazing piece of software it is.
And the 1.0 release is the big marketing opportunity. It’s not quite go big or go home, but it’s the point at which I’m throwing Hypothesis out to the world and saying “Here is my testing library. It’s awesome. You should use it”, and at that point it had better have a pretty compelling story for getting people to use it.
So those are the things I need for 1.0: excitement and stability.
And… well the coverage driven stuff is exciting to me, and it’s exciting to other quickcheck nerds, but honestly we’re not the target demographic here. Any quickcheck nerds who are writing Python should take one look at Hypothesis and go “oh yes. It must be mine. Yes”, because honestly… the other quickcheck libraries for Python are terrible. I don’t like to trash talk (OK, I love to trash talk, but it’s uncouth), but if your quickcheck library doesn’t do example minimization then it’s a toy. Hypothesis started better than all the other quickcheck libraries, and it’s gone from strength to strength.
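For anyone who hasn’t seen what “example minimization” buys you: when a randomly generated input fails a test, the library shrinks it to a simpler input that still fails, so you debug `[3]` instead of some 40-element monster. Here’s a toy pure-Python sketch of the idea (this is my illustration, not Hypothesis’s actual shrinker):

```python
# Toy example minimization: given a failing list of integers, greedily
# try simpler candidates and keep any that still fails the test.

def shrink_list(failing, test_fails):
    """Greedily shrink a failing list while the test keeps failing."""
    current = list(failing)
    improved = True
    while improved:
        improved = False
        # First, try dropping each element in turn.
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if test_fails(candidate):
                current = candidate
                improved = True
                break
        else:
            # No element could be dropped; try decrementing values.
            for i, x in enumerate(current):
                if x > 0:
                    candidate = current[:i] + [x - 1] + current[i + 1:]
                    if test_fails(candidate):
                        current = candidate
                        improved = True
                        break
    return current

# Deliberately false property: "no list sums to 3 or more".
fails = lambda xs: sum(xs) >= 3
print(shrink_list([10, 7, 2], fails))  # prints [3]
```

The real shrinkers are much cleverer than this greedy loop, but this is the difference between a toy and a tool: the failing example you see is the simplest one the library could find, not the first one it stumbled on.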
Coverage doesn’t break the API and doesn’t excite people, so it’s out. I’d like to do it at some point, but for now it’s just not important.
Reporting is useful, but it’s both API non-breaking and boring, so that’s out too.
So what’s in instead?
It’s a short list:
- Django integration
OK. That’s a slight lie, because it turns out there’s a feature I need for the Django integration (details of why to come in another post). So the actual list:
- Bringing in the ideas from testmachine
- Django integration
And this is actually very compatible with both, because the testmachine ideas are among the ideas most likely to completely break the API. I… am not actually sure what I was thinking when I tried to relegate them to a post-1.0 release – I have some ideas for how they could have been integrated without changing the API much, but I think the result would have been really painfully limited in comparison to what I could do.
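If you haven’t seen testmachine: the core idea is generating whole *programs*, i.e. random sequences of operations against stateful objects, rather than flat values. A rough pure-Python sketch of the concept (the names and structure here are mine, not testmachine’s actual API):

```python
import random

# Testmachine-style stateful testing, sketched: run a random program
# against the implementation under test and a trusted model, checking
# that they agree after every operation.

class Stack:
    """The implementation under test."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def size(self):
        return len(self._items)

def run_random_program(seed, steps=100):
    rng = random.Random(seed)
    stack, model = Stack(), []    # model: a plain list we trust
    for _ in range(steps):
        if model and rng.random() < 0.4:
            assert stack.pop() == model.pop()
        else:
            x = rng.randint(0, 99)
            stack.push(x)
            model.append(x)
        assert stack.size() == len(model)   # invariant after each op
    return True

print(all(run_random_program(s) for s in range(10)))  # prints True
```

The interesting (and API-breaking) part is that when such a program fails, the minimizer has to shrink an entire operation sequence, not a single value, which is why this doesn’t bolt cleanly onto a value-generation API.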
So what will the Django integration look like?
It will look like, if you will excuse me briefly violating my blog’s editorial guidelines, fucking magic is what it will look like.
You see, Django models come with everything Hypothesis needs to know how to generate them. They’re basically giant typed records of values. Hypothesis loves typed records of values. When this works (and I pretty much know how to make all the details work), the Django integration is going to look like:
```python
from hypothesis.extra.django import TestCase
from hypothesis import given
from myproject.mymodels import SomeModel
from myproject.myforms import SomeForm

class TestSomeStuff(TestCase):
    @given(SomeModel)
    def test_some_stuff(self, model_instance):
        ...

    @given(SomeForm)
    def test_some_other_stuff(self, form_instance):
        ...
```
That’s it. No fixtures, no annoying specifications of different edge cases for your model to be in, just ask Hypothesis for a model instance and it will say “sure. Here you go”. Given the upcoming fake factory integration it will even do it with nice data.
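To make the “giant typed record” point concrete, here’s the underlying trick in miniature (my illustration, not the actual implementation): once you can introspect a mapping of field names to types, a generator for the whole record falls out of per-type generators.

```python
import random
import string

# Sketch: a model is a record of typed fields, so generating a whole
# instance reduces to composing one generator per field type.

GENERATORS = {
    int: lambda rng: rng.randint(-1000, 1000),
    str: lambda rng: "".join(rng.choice(string.ascii_letters)
                             for _ in range(rng.randint(0, 10))),
    bool: lambda rng: rng.random() < 0.5,
}

def generate_instance(fields, rng):
    """fields: mapping of field name -> Python type."""
    return {name: GENERATORS[typ](rng) for name, typ in fields.items()}

# A hypothetical model described as typed fields, Django-style.
some_model_fields = {"name": str, "age": int, "active": bool}
instance = generate_instance(some_model_fields, random.Random(0))
print(sorted(instance))  # prints ['active', 'age', 'name']
```

Django models carry exactly this information in their field definitions (plus constraints like `max_length` and `null`), which is why they’re such a natural fit.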
And, well, this may not be research level innovation on the theme of Quickcheck, but as far as compelling marketing messages for bringing property based testing to the masses I can’t really think of a better one than “Hey, all of you people using that popular web framework, would you like to do more testing with not only zero extra effort but you can also throw away a bunch of laborious boring stuff you’re already doing?”
So that’s the plan. I suspect this is going to cause a bit of slippage in the timeline I’d originally intended but, well, I was also planning to be looking for jobs this month and I’ve mostly thrown myself into Hypothesis instead, so it’s not like I don’t have the time. I’m going to try to stay on track for an end of February release, but it’s no big deal if it ends up creeping into March.