
The three stage value pipeline for Hypothesis generation

Note: This describes a work in progress rather than something that is in a released Hypothesis version. As such it’s liable to change quite a lot before it’s done. I’m partly writing this as a form of thinking through the design, partly as a way of procrastinating from the fact that I’ve literally broken the entire test suite in the course of moving to this, and trying to fix it is really making me wish I’d written Hypothesis in Haskell.

I’ve previously talked about how generating data in Hypothesis is a two step process: Generate a random parameter, then for a given parameter value generate a random value.

I’ve introduced a third step to the process because I thought that wasn’t complicated enough. The process now goes:

  1. Generate a parameter
  2. From a parameter generate a template
  3. Reify that template into a value.
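
To make that concrete, here is a rough sketch of what a strategy might look like under this scheme. All of the names (produce_parameter, produce_template, reify) are illustrative rather than the actual Hypothesis API:

import random

class ListOfIntsStrategy:
    """Illustrative three stage strategy: parameter -> template -> value."""

    def produce_parameter(self, rnd):
        # Stage 1: a parameter controlling the overall shape of the data,
        # e.g. an average list length and an upper bound for the elements.
        return (rnd.uniform(0, 10), rnd.randint(0, 100))

    def produce_template(self, parameter, rnd):
        # Stage 2: a template is plain immutable data that describes a value
        # without being it. Here, just a tuple of ints.
        average_length, upper_bound = parameter
        length = int(rnd.expovariate(1.0 / (average_length + 1)))
        return tuple(rnd.randint(0, upper_bound) for _ in range(length))

    def reify(self, template):
        # Stage 3: build a fresh value from the template. Each call returns
        # a new list, so user code can mutate it freely.
        return list(template)

rnd = random.Random(0)
strategy = ListOfIntsStrategy()
parameter = strategy.produce_parameter(rnd)
template = strategy.produce_template(parameter, rnd)
print(strategy.reify(template))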

Why?

The idea is that rather than working with values which might have state we instead always work with merely the information that could be used to construct the values. The specific impetus for this change was the Django integration, but it also allows me to unify two features of Hypothesis that… honestly it would never have even occurred to me to unify.

In order to motivate this, allow me to present some examples:

  1. When generating, say, a list of integers, we want to be able to pass it to user provided functions without worrying about it being mutated. How do we do this?
  2. If we have a strategy that produces things for one_of((integers_in_range(1, 10), integers_in_range(9, 20))) and we have found a counter-example of 10, how do we determine which strategy to pass it to for simplification?
  3. How does one simplify Django models we’ve previously generated once we’ve rolled back the database?

Previously my answers to these would have been respectively:

  1. We copy it
  2. We have a method which says whether a strategy could have produced a value, and we arbitrarily pick one of the strategies that could have produced it
  3. Oh god I don’t know can I have testmachine back?

But the templates provide a solution to all three problems! The new answers are:

  1. Each time we reify the template it produces a fresh list
  2. A template for a one_of strategy encodes which strategy the value came from and we use that one
  3. We instantiate the template for the Django model outside the transaction and reify it inside the transaction.

A key point in order for part 3 to work is that simplification happens on templates, not values. In general most things that would previously have happened to values now happen to templates. The only point at which we actually need values is when we actually want to run the test.
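
To illustrate the one_of answer above: a template can simply record which branch it came from. This is again a sketch with invented names, assuming the child strategies expose the same produce_template/simplify/reify interface as the sketch earlier:

class OneOfStrategy:
    """Illustrative one_of strategy whose templates remember their branch."""

    def __init__(self, strategies):
        self.strategies = strategies

    def produce_template(self, parameter, rnd):
        # A real implementation would draw both the branch and the child's
        # parameter from the one_of parameter; picking uniformly keeps the
        # sketch short.
        index = rnd.randrange(len(self.strategies))
        return (index, self.strategies[index].produce_template(parameter, rnd))

    def simplify(self, template):
        # Simplification happens on templates and always goes back to the
        # strategy that actually produced the value, so a counter-example
        # like 10 is never handed to the wrong branch.
        index, child_template = template
        for simpler in self.strategies[index].simplify(child_template):
            yield (index, simpler)

    def reify(self, template):
        index, child_template = template
        return self.strategies[index].reify(child_template)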

As I said, this is still all a bit in pieces on the floor, but I think it’s a big improvement in the design. I just wish I’d thought of it earlier so I didn’t have to fix all this code that’s now broken by the change.

(Users who do not have their own SearchStrategy implementations will be mostly unaffected. Users who do have their own SearchStrategy implementations will probably suffer quite a lot in this next release. Sorry. That’s why it says 0.x in the version)


What are developer adjacent skills?

I idly mused on Twitter about how one would go about teaching essential developer adjacent skills. I’m still not sure, but I thought I would elaborate on what it is I actually mean by a developer adjacent skill.

More generally, I’d like to elaborate what I mean by a job adjacent skill. There are sales adjacent skills, journalist adjacent skills, designer adjacent skills, etc. I’m only talking about developer adjacent skills because I am a developer and thus that’s where most of my expertise in the subject lies.

When I say a job adjacent skill what I mean is any skill that:

  1. Would improve your ability to interact with people doing that job in their capacity as someone who does that job
  2. Would probably be much less useful if you didn’t have to interact with anyone doing that job
  3. Would not require you to basically have to learn to do the job in order to acquire the skill

Examples:

  • Learning to code is not a developer adjacent skill because once you’ve learned to code sufficiently well you’re basically able to be a developer (You might not be able to be a very good developer yet, but you could almost certainly get a job as one).
  • Learning to write better emails is not a developer adjacent skill because it’s a generally useful skill – almost every job will be improved by this skill, not just ones that require interaction with developers.
  • Learning to write better bug reports is a developer adjacent skill, because usually you’re sending bug reports to developers rather than other people (either directly or indirectly), and it will make your interactions with those developers much better.

Skills adjacent to your job aren’t usually ones you need in order to do that job directly – for example you can happily code away without ever learning how to file a bug report (far too many people do) – but most jobs require interacting with other people doing the same job. You may have coworkers, you may have a professional community, etc. So most job adjacent skills are ones that you should also try to pick up if you do the job itself.

There are exceptions. Some job adjacent skills that you might benefit from are specific to the type of job you have. For example, I’m a backend developer. I have to work with frontend developers, and this means that it would be useful for me to acquire frontend developer adjacent skills. There is some overlap, but not all of these are the same frontend developer adjacent skills that a sales person would need to acquire.

One simple example of a skill that I should acquire is how to launch the app in a way that compiles all the CSS, etc, and to understand the buildchain enough that I don’t have to go bother someone and say “Help what’s a grunt it’s telling me it can’t find a grunt” when something doesn’t work. This is a front-end developer adjacent skill that the front-end developers should also have but that sales people probably don’t need to care about.

Another example is that I should know how to design an API that cleanly exposes its data in a way that front-end developers can easily make use of. This is a front-end developer adjacent skill that the front-end developers don’t need to have – it’s not their job, it’s mine. They need to have skills around how to use an API and how to clearly convey their requirements, but building it isn’t their thing and it doesn’t need to be. Sales people are unlikely to care about this one either.

But the sales people should learn how to talk to the front-end people about user needs (rather than feature requests), and have a whole pile of other interactions with the front-end developers that need improving, ones that I as a back-end developer will probably never need to have.

So some job adjacent skills are quite specific, but I think the majority of them are ones that are generally useful in almost any role that interacts with that job.

Here are some examples of things I think of as good general purpose developer adjacent skills:

  • The aforementioned how to write a good bug report
  • How to find bugs in an application
  • How to behave in a sprint planning meeting or local equivalent
  • Understanding what software security is and why it’s important

There are doubtless many more. Those are just the ones I can think of off the top of my head.

Should you acquire developer adjacent skills?

Well, do you interact with developers? If not, then no you probably shouldn’t.

If you do interact with developers then yes, yes you should. And the developers should in turn acquire the skills adjacent to your job.

In general I think it’s important to acquire the adjacent skills of any job you routinely interact with. If not, you are basically making them carry the weight of your refusal to learn – if you don’t learn to write a decent bug report then your bugs are less likely to get fixed and will take up more of my time in which I could be working on other things; if I don’t learn to compile the CSS and launch the app I’ll be bugging you with questions every time I need to do that and taking away your time in which you could be working on other things.

This can be hard. There are a lot of jobs, and as a result a lot of adjacent skills that you might need to pick up. No one is going to be perfect at this.

I think the important thing is to bear all of this in mind. Whenever you interact with someone with a different job and the interaction is less productive than it could be, think of it as a learning opportunity – there are probably adjacent skills on both sides that are missing. Rather than getting frustrated at the lack of them on the other side, try to teach them, and in turn try to encourage them to teach you. Hopefully the next interaction will be less frustrating.


What is the testmachine?

It turns out you can be told and don’t have to experience it for yourself. However it also turns out that in the intervening year and a bit since I wrote this code, I’d mostly forgotten how it worked, so I thought I would complete a long ago promise and document its behaviour as a way to help me remember.

First, some history

Around the beginning of 2014 I was working on a C project called intmap. It was more or less a C implementation of Chris Okasaki and Andy Gill’s “Fast Mergeable Integer Maps”, the basis for Haskell’s IntMap type, with an interesting reference counting scheme borrowed from jq internals. It also has some stuff where it takes advantage of lazy evaluation to optimize expressions (e.g. if you do a bunch of expensive operations and then intersect with an empty set you can throw away the expensive operations without ever computing them). I had some vague intention of trying to use this to give jq a better table and array implementation (it has immutable types, but internally they’re really copy on write), but I never actually got around to that and was mostly treating it as a fun hack project.

The internals of the project are quite fiddly in places. There are a bunch of low level optimisations where it takes advantage of the reference counting semantics to go “Ah, I own this value, I can totally mutate it instead of returning a new one”, and even without that the core algorithms are a bit tricky to get right. So I needed lots of tests for correctness.

I wrote a bunch, but I wasn’t really very confident that I had written enough – both because I had coverage telling me I hadn’t written enough and because even if I had 100% coverage I didn’t really believe I was testing all the possible interactions.

So I turned to my old friend, randomized testing. Only it wasn’t enough to just generate random data to feed to simple properties; I wanted to generate random programs. I knew this was possible because I’d had quite good luck with Hypothesis and Scalacheck’s stateful testing, but this wasn’t a stateful system, so what to do?

Well the answer was simple: Turn it into a stateful system.

Thus was born the baby version of testmachine. Its core was a stack of intmaps mapping to string values. It generated random stack operations: Pushing singleton maps onto the stack, performing binary operations on the top two elements of the stack, rotations, deletions, copies, etc. It also remembered what keys and values it had previously used and sometimes reused them – testing deletion isn’t very interesting if the key you’re trying to delete is in the map with probability ~0.

It then had a model version of what should be on the stack written in Python and generated assertions by comparing the two. So it would generate a random stack program, run it, and if the results differed between the model and the reality, it would minimize the stack program into a smaller one and spit out a corresponding C test case. It also did some shenanigans with forking first so it could recover from segfaults and assertion failures and minimize things that produced those too.

The initial versions of the C test cases explicitly ran the stack machine, but this was ugly and hard to reason about, so I wrote a small compiler that turned the stack machine into SSA (this is much easier than it sounds because there are no non-trivial control structures in the language, so a simple algorithm where you just maintain a single stack of variable labels in parallel was entirely sufficient for doing so). It then spat out the C program corresponding to this SSA representation with no explicit stack. You can see some of the generated tests here.
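
To give a flavour of how simple that compilation step is, here is a rough sketch of the idea in Python. The instruction format is invented for illustration, and the real compiler emitted C rather than strings like these:

import itertools

def compile_stack_program(program):
    """Turn a stack-machine program into straight-line code with no stack.

    Each instruction is (name, n_args, has_result). We keep a parallel stack
    of variable names: pops become argument names, pushes become fresh names.
    """
    fresh_names = ("v%d" % i for i in itertools.count())
    stack = []
    lines = []
    for name, n_args, has_result in program:
        args = [stack.pop() for _ in range(n_args)][::-1]
        if has_result:
            result = next(fresh_names)
            stack.append(result)
            lines.append("%s = %s(%s)" % (result, name, ", ".join(args)))
        else:
            lines.append("%s(%s)" % (name, ", ".join(args)))
    return "\n".join(lines)

# e.g. a program that builds two singletons, unions them and checks the result:
print(compile_stack_program([
    ("singleton_a", 0, True),
    ("singleton_b", 0, True),
    ("union", 2, True),
    ("check_against_model", 1, False),
]))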

At this point I looked at what I had wrought for testing intmap and concluded this was much more exciting than intmap itself and started to figure out how to generalise it. And from that was born testmachine.

I experimented with it for a while, and it was looking really exciting, but then uh 2014 proper happened and all my side projects got dropped on the floor.

Then the end of 2014 happened and it looked like Hypothesis was where the smart money was, so I picked it up again and left testmachine mostly abandoned.

And finally I realised that actually what Hypothesis really needed for some of the things I wanted to do was the ideas from testmachine. So soon it will live again.

What’s the relation between testmachine and Hypothesis?

At the time of this writing, there is no relation between testmachine and Hypothesis except that they are both randomized testing libraries written in Python by me.

Originally I was considering testmachine as the beginnings of a successor to Hypothesis, but I about-faced on that and now I’m considering it a prototype that will inspire some new features in Hypothesis. As per my post on the new plans for Hypothesis 1.0, the concepts presented in this post should soon (it might take ~2 weeks) be making it into a version of Hypothesis near you. The result should be strictly better than either existing Hypothesis or testmachine – a lot of new things become possible that weren’t previously possible in Hypothesis, and a lot of nice things like value simplification, parametrized data generation and an example database that weren’t present in testmachine become available.

So what is testmachine?

Testmachine is a form of randomized testing which rather than generating data generates whole programs, by combining sequences of operations until it finds one that fails. It doesn’t really distinguish assertions from data transforming operations – any operation can fail, and any failing operation triggers example minimization.

To cycle back to our intmap example: We have values which are essentially a pair (intmap, mirror_intmap), where mirror_intmap is a Python dictionary of ints to strings, and intmap is our real intmap type. We can define a union operation which performs a union on each and fails if the resulting mirror does not correspond to the resulting map.

There are approximately three types of testmachine operation:

  1. Internal data shuffling operations
  2. Directly generate some values
  3. Take 1 or more previously generated values and either fail or generate 0 or more new values

(You can unify the latter two but it turns out to be slightly more convenient not to)
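
Purely as an illustration of the shape of those categories (this is not testmachine’s actual internal representation), you can picture the operations as data along these lines:

from collections import namedtuple

# 1. Internal shuffling: rearrange a single stack (dup, swap, rot) without
#    changing which variables are available on it.
Shuffle = namedtuple("Shuffle", ("stack", "rearrange"))

# 2. Direct generation: push a freshly generated value onto a stack.
Generate = namedtuple("Generate", ("stack", "produce"))

# 3. Consume and produce: read (and possibly invalidate) existing values,
#    then either fail or push zero or more new values onto some stacks.
Apply = namedtuple("Apply", ("argspec", "result_stacks", "function"))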

TestMachine generates a valid sequence of operations of a fixed size (200 operations by default). It then runs it. If it fails, it finds a minimalish subsequence of the program that also fails (minimalish in the sense that it can’t be shrunk by deleting fewer than two instructions and still give a failing test case).

The internal representation for this is a collection of named stacks, usually corresponding to types of variables (e.g. you might have a stack for strings, a stack for ints, and a stack for intmaps). An operation may push, pop, or read data from the stacks, or it may just shuffle them about a bit. Testmachine operations are then just instructions on this multi-stack machine.

Simple, right?

Well… there are a few complications.

The first is that they’re not actually stacks. They’re what I imaginatively named varstacks. A varstack is just a stack of values paired with integer labels. Whenever a variable is pushed onto the stack it gets a fresh label. When you perform a dup operation on the stack (pushing the head of the stack onto it a second time) or similar the variable label comes along with it for free.

This is important both for the later compilation stage and it also adds an additional useful operation: Because we can track which values on the stack are “really the same” we can invalidate them all at once. This allows us to implement things like “delete” operations which mark all instances of a value as subsequently invalid. In the intmap case this is an actual delete and free the memory operation (ok, it’s actually a “decrement the reference count and if it hits zero delete and free some memory”, but you’re supposed to pretend it’s a real delete). It could also be e.g. closing a connection, or deleting some object from the database, or similar.
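
Here is a minimal sketch of a varstack with invented method names; the real implementation has more to it, but the label bookkeeping is the important part:

import itertools

class VarStack:
    """A stack of (label, value) pairs where every fresh push gets a new
    integer label and copies made by dup share the label of the original."""

    def __init__(self):
        self._labels = itertools.count()
        self.entries = []         # (label, value) pairs, top of stack at the end
        self.dead_labels = set()  # labels that have been invalidated

    def push(self, value):
        self.entries.append((next(self._labels), value))

    def dup(self):
        # The copy keeps the same label, so invalidating the variable later
        # kills both occurrences at once.
        self.entries.append(self.entries[-1])

    def peek(self, depth=0):
        label, value = self.entries[-1 - depth]
        if label in self.dead_labels:
            raise ValueError("variable %d has been invalidated" % label)
        return label, value

    def invalidate(self, label):
        # Used for delete/close style operations: every occurrence of this
        # variable becomes unusable from now on.
        self.dead_labels.add(label)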

The variable structure of the varstack also lets us define one of the most important invariants that lets testmachine do its work: An operation must have a consistent stack effect that depends only on the variable structure of the stacks. Further, an operation’s validity must depend only on the variable structure of the stacks (whether or not it fails obviously depends on the values on the stack; the point is that whether it was valid to execute the operation in the first place can’t depend on the actual data).

This is a bit restrictive, and definitely eliminates some operations you might want to perform – for example “push all the values from this list onto that stack” – but in practice it seems to work well for the type of things that you actually want to test.

The reason it’s important is that it allows you to decouple program generation from program execution. This is desirable because it lets you do things like forking before executing the program to protect against native code, but it also just allows a much cleaner semantics for the behaviour of the library.
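
The forking trick itself is conceptually simple. A sketch of the core of it (POSIX only, and glossing over how the real thing reports results back to the parent):

import os

def run_in_fork(program, execute):
    """Run execute(program) in a forked child so that segfaults or aborts in
    native code only kill the child. Returns True if the program passed."""
    pid = os.fork()
    if pid == 0:
        # Child process: run the program and report pass/fail via exit code.
        try:
            execute(program)
            os._exit(0)
        except BaseException:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    # Death by signal (e.g. a segfault) also counts as a failure.
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0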

In fact the actual invariant that testmachine maintains is even more restrictive. All testmachine operations have one of the following stack effects:

  1. They operate on a single stack and do not change the set of available variables on that stack (but may cause the number of times a variable appears on the stack to change)
  2. They perform a read argspec operation, followed by a series of push operations

What is a read argspec operation?

An argspec (argument specifier. I’m really not very good at names) is a tuple of pairs (stack name, bool), where the bool is a flag that indicates whether that value is consumed.

The interpretation of an argspec like ((intmaps, True), (ints, False), (strings, False)) is that it reads the top intmap, int and string from each of their stacks, and then invalidates all intmaps with that label. An argspec like ((intmaps, True), (intmaps, True)) reads and invalidates the top two intmaps. An argspec like ((ints, False), (ints, False)) reads the top two ints and doesn’t invalidate them.

It’s a little under-specified what happens when you have something like ((intmaps, True), (intmaps, False)). Assume something sensible and consistent happens that if I’d continued with the project I’d totally have pinned down.
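
In terms of the VarStack sketch above, reading an argspec might look something like this (again with invented names, and with no error handling for exactly the sort of under-specified case just mentioned):

def read_argspec(stacks, argspec):
    """Read arguments off named varstacks according to an argspec.

    stacks maps stack names to VarStack instances; argspec is a tuple of
    (stack_name, consumed) pairs. Successive reads from the same stack walk
    down from the top, and consumed entries have their variable invalidated.
    """
    depth = {}
    values = []
    for stack_name, consumed in argspec:
        stack = stacks[stack_name]
        label, value = stack.peek(depth.get(stack_name, 0))
        depth[stack_name] = depth.get(stack_name, 0) + 1
        values.append(value)
        if consumed:
            stack.invalidate(label)
    return values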

The reason for this super restrictive set of operations is a) it turned out to be enough for everything I cared about and b) it made the compilation process, and thus the generated programs, much cleaner. Every operation is either invisible in the compiled code or can be written as r1, …, rn = some_function(t1, …, tn).

So we have the testmachine (the collection of varstacks), and we have a set of permitted operations on it. How do we go from that to a program?

And the answer is that we then introduce the final concept: That of a testmachine language.

A language is simply any function that takes a random number generator and the variable structure of a testmachine program and produces an operation that would be valid given that variable structure.

Note that a language does not have to have a consistent stack effect, only the operations it generates – it’s possible (indeed, basically essential) for a single language to generate operations with a wide variety of different stack effects on a wide variety of different stacks.

So we have a testmachine and a language, and that’s enough to go on.

We now create a simulation testmachine. We repeatedly ask the language to generate an operation, simulate the result of that operation (which we can do because we know the stack effect) by just pushing None values wherever a real run would put values, and iterate this process until we have a long enough program. We then run the program and see what happens. If it fails, great! We have a failing test case. Minimize that and spit it out as output. If not, start again from scratch until you’ve tried enough examples.
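
Written out as an outer loop, with all of the machine-specific behaviour passed in as callables (so nothing here is the real API, just the shape of the algorithm):

def find_failing_program(generate_operation, simulate, run, minimize,
                         program_length=200, max_examples=500):
    """The generate/simulate/run/minimize loop.

    generate_operation(variable_structure) -> an operation valid for it,
    simulate(variable_structure, operation) -> the new variable structure,
    run(program) -> True if the program passed, False if it failed,
    minimize(program) -> a smaller program that still fails.
    """
    for _ in range(max_examples):
        variable_structure = {}   # an empty machine: no variables anywhere
        program = []
        while len(program) < program_length:
            operation = generate_operation(variable_structure)
            variable_structure = simulate(variable_structure, operation)
            program.append(operation)
        if not run(program):
            return minimize(program)
    return None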

And that’s pretty much it, modulo a bunch of details around representation and forking that aren’t really relevant to the interesting parts of the algorithm.

This is strictly more general than quickcheck, or Hypothesis or Scalacheck’s stateful testing: You can represent a simple quickcheck program as a language that pushes values onto a stack and an operation that reads values from that stack and fails if the desired property isn’t true.

With a little bit of work you can even make it better at plain Quickcheck-style testing.

Suppose you want to test the property that for all lists of integers xs and integers y, after xs.remove(y), y not in xs. This property is false because remove only removes the first occurrence.

So in Hypothesis you could write the test:

@given([int], int)
def test_not_present_after_remove(xs, y):
    try:
        xs.remove(y)
    except ValueError:
        pass
 
    assert y not in xs

And this would indeed fail.

But this would probably pass:

@given([int], int)
def test_not_present_after_remove(xs, y):
    try:
        xs.remove(y)
        xs.remove(y)
    except ValueError:
        pass
 
    assert y not in xs

Because it would only fail thanks to some special cases for being able to generate low entropy integers, and it’s hard to strike a balance between being low entropy enough to generate a list containing the same value three times and being high entropy enough to generate an interesting range of edge cases.

Testmachine though? No sweat. Because the advantage of testmachine is that generation occurs knowing what values you’ve already generated.

You could produce a testmachine with stacks ints and intlists, and a language which can generate ints, generate lists of ints from the ints already generated, and perform the above test. It will have very little difficulty falsifying the property, because it has a high chance of sharing values between examples.
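
I’m not going to try to reproduce the testmachine API from memory here, but the following self-contained toy shows why drawing the list and the element to remove from the same pool of already generated ints makes even the double-remove version easy to falsify:

import random

def falsify_double_remove(max_examples=200, seed=0):
    """Mimic the testmachine trick: generate some ints first, then build the
    list and pick the removed element from those same ints."""
    rnd = random.Random(seed)
    for _ in range(max_examples):
        pool = [rnd.randint(0, 5) for _ in range(rnd.randint(1, 5))]
        xs = [rnd.choice(pool) for _ in range(rnd.randint(0, 6))]
        y = rnd.choice(pool)
        trial = list(xs)
        try:
            trial.remove(y)
            trial.remove(y)
        except ValueError:
            pass
        if y in trial:
            # Two removes only deleted two copies, so xs held at least three.
            return xs, y
    return None

print(falsify_double_remove())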

The future

Testmachine is basically ideal for testing things like ORM code where there’s a mix of returning values and mutating the global state of the database, which is what caused me to make the decision to bring it forward and include it in (hopefully) the next version of Hypothesis. It’s not going to be an entirely easy fit, as there are a bunch of mismatches between how the two work, but I think it will be worth it. As well as allowing for a much more powerful form of testing it will also make the quality of Hypothesis example generation go up, while in turn Hypothesis will improve on testmachine by improving the quality of the initial data generation and allowing simplification of the generated values.


Revised Hypothesis 1.0 plan

So in my previous post about Hypothesis 1.0 I stated the following priorities:

  1. A stable API
  2. Solid documentation
  3. The ability to get reports about a run out – number of examples tried, number rejected, etc.
  4. Coverage driven example discovery

Well, uh, I’m still planning on the first two (don’t worry, they’re hard requirements. The release date will slip before I compromise on them).

The second two though? Eh. They just don’t seem that important for a 1.0 release.

The reports would be nice to have. They’re definitely helpful, and they’re certainly important for avoiding useless tests, but I wouldn’t say they’re essential.

But more importantly, I know how to do them without breaking the current API.

After some thought I’ve realised that there are essentially two major categories of things that I need to be in the 1.0 release.

  1. Things that would require me to break the API
  2. Things that will get people excited enough about Hypothesis to use it

2 is pure marketing, but marketing is important here. My goal with Hypothesis is to make the world of Python a better tested environment, and if people aren’t actually using Hypothesis it will have failed at that goal, regardless of what an amazing piece of software it is.

And the 1.0 release is the big marketing opportunity. It’s not quite go big or go home, but it’s the point at which I’m throwing Hypothesis out to the world and saying “Here is my testing library. It’s awesome. You should use it”, and at that point it had better have a pretty compelling story for getting people to use it.

So those are the things I need for 1.0: excitement and stability.

And… well the coverage driven stuff is exciting to me, and it’s exciting to other quickcheck nerds, but honestly we’re not the target demographic here. Any quickcheck nerds who are writing Python should be just taking one look at Hypothesis and going “oh yes. It must be mine. Yes”, because honestly… the other quickcheck libraries for Python are terrible. I don’t like to trash talk (OK, I love to trash talk, but it’s uncouth), but if your quickcheck library doesn’t do example minimization then it’s a toy. Hypothesis started better than all the other quickcheck libraries, and it’s gone from strength to strength.

So coverage doesn’t break the API and doesn’t excite people, so it’s out. I’d like to do it at some point, but for now it’s just not important.

Reporting is useful, but it’s both API non-breaking and boring, so that’s out too.

So what’s in instead?

It’s a short list:

  1. Django integration

OK. That’s a slight lie, because it turns out there’s a feature I need for the Django integration (details why to come in another post). So the actual list:

  1. Bringing in the ideas from testmachine
  2. Django integration

And this is actually very compatible with both of those goals, because the testmachine ideas are among the ideas most likely to completely break the API. I… am not actually sure what I was thinking when trying to relegate them to a post-1.0 release – I have some ideas for how they could have been integrated without changing the API much, but I think the result would have been really painfully limited in comparison to what I could do.

So what will the django integration look like?

It will look like, if you will excuse me briefly violating my blog’s editorial guidelines, fucking magic is what it will look like.

You see, Django models come with everything Hypothesis needs to know how to generate them. They’re basically giant typed records of values. Hypothesis loves typed records of values. When this works (and I pretty much know how to make all the details work), the django integration is going to look like:

from hypothesis.extra.django import TestCase
from hypothesis import given
from myproject.mymodels import SomeModel
from myproject.myforms import SomeForm
 
class TestSomeStuff(TestCase):
    @given(SomeModel)
    def test_some_stuff(self, model_instance):
        ...
 
    @given(SomeForm)
    def test_some_other_stuff(self, form_instance):
        ...

That’s it. No fixtures, no annoying specifications of different edge cases for your model to be in, just ask Hypothesis for a model instance and it will say “sure. Here you go”. Given the upcoming fake factory integration it will even do it with nice data.

And, well, this may not be research level innovation on the theme of Quickcheck, but as far as compelling marketing messages for bringing property based testing to the masses I can’t really think of a better one than “Hey, all of you people using that popular web framework, would you like to do more testing with not only zero extra effort but you can also throw away a bunch of laborious boring stuff you’re already doing?”

So that’s the plan. I suspect this is going to cause a bit of slippage in the timeline I’d originally intended but, well, I was also planning to be looking for jobs this month and I’ve mostly thrown myself into Hypothesis instead, so it’s not like I don’t have the time. I’m going to try to still be on track for an end of February release, but it’s no big deal if it ends up creeping into March.


Some thoughts on open source

Historically my attitude to open source has been one of avoiding the GPL like the plague and sticking everything under a permissive license.

There are a couple reasons for this:

  1. Generally speaking I want people to use my stuff. The GPL acts as an obstacle to that.
  2. I want to spread my ideas. The GPL acts as an obstacle to that too, although less so.
  3. Most of my open source code is not stuff I’m emotionally attached to and frankly if someone wants to take it off my hands that’s great – it means I don’t have to maintain it
  4. I’ve used a lot of open source code in my commercial work that I wouldn’t have been able to use if it was GPLed. It seems churlish to not return the favour.

I’m… starting to rethink some of this attitude.

Here are some links:

  1. How to capture an open source project
  2. Github is not your CV
  3. The Ethics of Unpaid Labor and the OSS Community

Fundamentally, open source may claim to be about freedom, and most people who do it may even have that intent, but when judging systems intent is irrelevant; what matters is the structural effect.

And the structural effect of the current relation between the vast quantity of permissively licensed open source work and the industry is one of overvaluing people who are capable of putting in work for free, devaluing everyone’s labour, and allowing the industrial scale parasite of VC backed startups to leech yet more value from the commons without putting anything back at all.

And it’s at this point that I go have an existential crisis and go gibber in the corner for a bit.

Because this all ends up falling under what I regard as the fundamental question of my life: How can I make the world better while not also making the world worse and make enough money to eat at the same time?

Answers on a postcard. Or a blog comment. Or anything really because if you’ve got the answer can you please let me know? Because I’ve no idea.

Because the stuff I give away for free is the closest I’ve come to achieving the first even if it falls down hard on the latter. It sounds terribly megalomaniacal, but the point of this blog is at least partly to make the world a better place, and the point of Hypothesis is explicitly to raise the software quality bar for software written in Python (This is part of why I’m putting in quite so much work on the edge cases and things like cross platform compatibility. I don’t want people to be able to say they can’t use Hypothesis).

But… I increasingly feel like giving things away for free is also failing pretty hard on the second criterion: I may be making the world a better place, but I’m also making it a worse one by furthering an exploitative system.

And I’m not really sure what to do about it, but I think at the bare minimum it’s time to start pushing back a little.

And I think in the spirit of that bare minimum I’m going to be moving away from licenses that are quite as permissive as I’ve historically chosen. Hypothesis is under the MPLv2, as will be all future open source work I do until I decide I have a better idea. It manages to strike a good balance between the desire to get other people to use it and taking a bit of a stronger stance on things. I’m also requiring a CLA assigning all copyright to me, though I’m less convinced that that’s a good general principle.

I’m also going to try to figure out how to get paid for this. It feels… embarrassingly self-serving to think in terms of being morally obligated to get paid for my work, but I think it’s valid. I’m only able to do this free work because I’m in the privileged class of people who have enough time and money to do so. For everyone else it’s not a moral issue, it’s just a flat out issue. It may not feel like it, but providing push back on doing free work is at least partly an attempt to abdicate some of that privilege.

(To be clear: I also want to be paid to work on Hypothesis for purely self serving reasons)

This doesn’t mean I’m not going to work on Hypothesis if I can’t get paid to do so, and it doesn’t mean that the wellspring of random half-baked ideas that is my Github account is going to dry up, so maybe this is a promise without teeth. I don’t know. Getting paid for open source work outside of a few big projects (there are a lot of paid Linux devs) seems to be one of the great unsolved problems of our industry, and I don’t really want to predicate something as important as Hypothesis on solving that problem. And until Hypothesis starts to see the sort of traction I’d like it to, getting people to pay me for it might be something of a hard sell.

Which basically puts me back where I started, just with a slightly less permissive license that if I’m being honest doesn’t really affect much of anything. It’s not a solution, but maybe it’s at least the start of one.
