Monads as an interface for composition

Fair warning: I’m going to be talking about monads. It’s in the title and everything. This post is even a little bit of a monad tutorial, as I’m not going to assume that you know what a monad is, though that’s not the main point of it.

A thing I’ve been thinking about recently is that a lot of the success of QuickCheck and its descendants as a way of getting people who are not testing specialists to use better testing techniques comes from the fact that random generation is a monad (QuickCheck’s use of it technically isn’t a monad, but the difference is unimportant enough that I can elide it for this post. You could fix this problem and literally nobody would notice).

A point in favour of this theory is that even though Hypothesis has abandoned normal random generation as its underlying method, it’s done so in favour of adopting a different monad (monadic parser combinators, if you’re curious).

Fundamentally, the thing that makes QuickCheck so effective is that it provides a library of composable data generators – you can take existing generators and turn them into generators for lists of that value, apply functions to them, etc.

This makes it extremely easy to define generators for your own data types, and the reason you can do that is fundamentally that the composition interface is the monad one.
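
To make that concrete, here is a minimal sketch using QuickCheck’s actual Gen type and its real listOf and choose combinators (the generator names themselves are invented for illustration):

import Test.QuickCheck (Gen, choose, listOf)

ageGen :: Gen Int
ageGen = choose (0, 120)        -- a generator for a single Int

agesGen :: Gen [Int]
agesGen = listOf ageGen         -- turn it into a generator for lists of that value

labelsGen :: Gen String
labelsGen = fmap show ageGen    -- apply a function to a generator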

When you first encounter Haskell you see something like the following:

main = do name <- getLine
          putStrLn $ "Hello " ++ name

And OK you might think it looks a little syntactically strange, but it basically looks like a normal program, and it’s only when people tell you that there’s something called a monad involved that it starts to seem like anything strange is going on.

But the first intuition is correct: Nothing strange is going on, and this code really does do the obvious thing (there’s a little bit of non-obviousness for people in terms of the difference between “let x = y” and “x <- y”, but I’m going to ignore that). It really is “a normal program”.
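
(For the curious, here is that distinction in miniature, as an illustrative variant of the program above: let merely names a computation, while <- runs it and binds its result.)

main :: IO ()
main = do let action = getLine            -- let names the action; nothing runs yet
          name <- action                  -- <- runs the action and binds the String it produces
          putStrLn ("Hello " ++ name)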

The thing that is unusual is not this program, but the fact that it is an instance of a more general one.
Suppose you instead had the following:

sayHello get put = do name <- get
                      put $ "Hello " ++ name
 
main = sayHello getLine putStrLn

And now we have this split into two parts, with some code that is abstracted over how you get and put the name (maybe you want to run this as part of a website as well as a command line? I don’t know. It’s an artificial example. Bear with me here), and a main definition that calls that abstracted function with concrete implementations for terminal I/O that read and write lines.

This is all type-inferred, but let’s be explicit about the types:

sayHello :: IO String -> (String -> IO ()) -> IO ()
sayHello get put = do name <- get
                      put $ "Hello " ++ name
 
main :: IO ()
main = sayHello getLine putStrLn

So sayHello takes one argument that is an IO String (“an effectful program that returns a string”) and one argument that is a String -> IO () (“a function that takes a string and does some effects”), and main just passes the corresponding values to it for our command line program. This is just extracting a function from our original program, business as usual except that because it’s Haskell we have a specific type for “things that have effects” and we have all the usual higher order function goodness.

Where monads come in is the observation that we’ve actually used nothing about IO in the definition of sayHello – all we’ve used is do notation and strings, so we could abstract it to anything that supports do notation, which is precisely the monad interface.

sayHello :: (Monad m) => m String -> (String -> m ()) -> m ()
sayHello get put = do name <- get
                      put $ "Hello " ++ name
 
main :: IO ()
main = sayHello getLine putStrLn

This is exactly the same program, all we’ve done is generalise the type signature for sayHello to only use the operations it requires from IO rather than the full complexity of the IO type.

(You could make this type signature more generic yet by replacing () with a type parameter, but I think the concrete version is clearer for explaining this point).
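
For completeness, the fully general version mentioned in that aside would be:

sayHello :: (Monad m) => m String -> (String -> m b) -> m b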

So that’s all that monads are: The type signature for things that support do notation (plus some laws that ensure do notation behaves reasonably sensibly), and do notation just looks like normal programs.
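
Concretely, that interface and its laws look like this (slightly simplified: in modern Haskell the class also has Applicative as a superclass, and return defaults to pure):

class Monad m where
  return :: a -> m a                  -- wrap a plain value
  (>>=)  :: m a -> (a -> m b) -> m b  -- run one step, feed its result to the next

-- The laws that make do notation behave sensibly:
--   return a >>= f   ==  f a                          (left identity)
--   m >>= return     ==  m                            (right identity)
--   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)      (associativity)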

So what are monads? Well, they’re a monoid in the category of endofunctors… err, a type class with the operations return and >>= satisfying the laws… no, basically a burrito in a space suit… actually, never mind: it doesn’t matter what monads are. What matters is why they’re useful: monads represent data types that you can compose together with do notation, and do notation looks basically “like writing normal programs that return values”, which is extremely convenient because we can build on our normal experience and intuition about writing such programs.

That’s not to say monads are normal programs that return values, only that we can build them up as if we were writing such programs. For example, a monad might “return” no values: suppose we ran the above code with the Maybe monad:

main :: IO ()
main = print $ sayHello Nothing (\_ -> Just ())

This takes our existing sayHello function, but the get argument always fails (returns Nothing) and the put argument just returns () without doing anything (because the only things it could do are either that or fail). This program now prints the string Nothing, because rather than ever “returning a value”, the computation “failed”. You could think of this as saying that normal programs with exceptions also compose like monads. Similarly, a monad might “return” multiple values (e.g. the list monad), do backtracking (a parsing monad), or many other things.
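
For contrast, here is a hypothetical GHCi session reusing the same sayHello with other monads (the inputs are invented for illustration):

>>> sayHello (Just "world") (\_ -> Just ())
Just ()

>>> sayHello ["Alice", "Bob"] (\_ -> [()])
[(),()]

In the first run get succeeds, so the whole computation “returns” Just (); in the second, the list monad’s get “returns” two values, so the rest of the computation runs once for each of them.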

What any given monad actually does can be almost arbitrarily complicated, but the monad interface isolates us from that by allowing us to compose them together as if they were such programs anyway.

In QuickCheck this comes in by letting us compose generators together easily using do notation:

integerRange :: Int -> Int -> Gen Int
integerRange = undefined -- Imagine we had a real implementation here
 
 
orderedPair :: Gen (Int, Int)
orderedPair = do a <- integerRange 0 10
                 b <- integerRange a 10
                 return (a, b)

The access to do notation means that these generators compose more or less how we’d expect programs to, without our having to know anything about the underlying implementation or source of randomness – or even about how they will be run. In QuickCheck, Gen is more or less just a program that returns a random value, but in Hypothesis, for example, one of the possibilities of drawing is that it may fail because it violates some assumption or is unable to generate a good value, and that can just as easily be rolled into the monadic interface.
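
To make “Gen is more or less just a program that returns a random value” concrete, here is a deliberately simplified sketch of such a type (QuickCheck’s real Gen also threads a size parameter and uses its own splittable generator, but the shape is the same):

import System.Random (StdGen, randomR, split)

newtype Gen a = Gen { runGen :: StdGen -> a }

instance Functor Gen where
  fmap f (Gen g) = Gen (f . g)

instance Applicative Gen where
  pure x = Gen (const x)
  Gen gf <*> Gen gx = Gen $ \r ->
    let (r1, r2) = split r in gf r1 (gx r2)

instance Monad Gen where
  Gen g >>= f = Gen $ \r ->
    let (r1, r2) = split r        -- split the seed so each part
    in runGen (f (g r1)) r2       -- gets independent randomness

integerRange :: Int -> Int -> Gen Int
integerRange lo hi = Gen (fst . randomR (lo, hi))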

Python doesn’t have do notation for monads, but it does have free side effects, so in Hypothesis we use the @composite decorator instead of do notation. The motivation for it, though, was absolutely “Boy, I wish there were a way to compose Hypothesis generators that was as easy as do notation”.

But if we don’t have do notation, what does it being a monad buy us?

Well, directly, not much. Oh, sure, if you have monads as a first-class concept in your programming language, you can define a whole bunch of functions that work on any monad, but they’re not essential. They’re nice-to-haves. You can just write the special-case version in a couple of lines if you really need to.

But even without do notation, it’s not too hard to use the monadic interface. We could write the above as

orderedPair = integerRange 0 10 >>= \a -> (integerRange a 10 >>= \b -> return (a, b))

This does the same thing as the do notation version, just with more explicit chaining. It’s unwieldy, but people have shown themselves perfectly able to apply this sort of normal-program composition reasoning when writing promises, and before I added @composite to Hypothesis people were happy to use flatmap (which is the Hypothesis equivalent of >>=). This is no different just because it isn’t tied down to a specific monad implementation.

Even if you never consciously express the fact that your data type is a monad and can’t represent it in your programming language, the fact that it is one still puts it in a particular corner of the design space, and it’s a particularly attractive one because of how easy monads are to use.

This ease of use is a lot of what has enabled the popularity of QuickCheck – because it’s so easy to define generators for your own data types, people can and do, and that in turn enables QuickCheck to be used in so many contexts.

It’s also, I think, what constrains QuickCheck to a particular local maximum. Ensuring the monad property is hard, because many natural operations do not compose monadically. Shrinking is an example where the natural composition is not monadic – it doesn’t compose like a normal function, because you have to be able to “reverse the arrows”: to turn a Shrinker a into a Shrinker b, you need to be able to transform both from a to b and from b to a, and monadic composition isn’t bidirectional like that. test.check and Hypothesis both build shrinking into generation to make it monadic (though test.check struggles with actually making the monadic composition shrink well).
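
Here is an illustrative sketch of the problem (this Shrinker type and mapShrinker are invented for this post, not taken from any real library):

newtype Shrinker a = Shrinker { runShrinker :: a -> [a] }

-- To reuse a Shrinker a at type b you need functions in *both*
-- directions; compare fmap :: (a -> b) -> f a -> f b and (>>=),
-- which only ever go forwards.
mapShrinker :: (a -> b) -> (b -> a) -> Shrinker a -> Shrinker b
mapShrinker to from (Shrinker s) = Shrinker (map to . s . from)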

The way Hypothesis escapes this is by building some of these operations in at a lower level. Instead of operating in a monad representing a random number generator, it operates in a parsing monad. In a parsing monad, “running” the monad corresponds to reading some bytes of input and either failing with a parse error or returning a value (possibly multiple values if you want to handle ambiguity). Parsing monads are neat, and you can see a proper example of them in things like parsec, but Hypothesis gets to be relatively simple in its implementation because it doesn’t have to do anything like backtracking.

The reason you can use a parsing monad for this is that non-determinism is a form of parsing: a pseudo-random number generator is basically “just” an infinite stream of bits, and consuming those bits to get non-determinism is just parsing the stream. You can then extend this to a monad which parses finite sequences of bits by having the parse fail if you ever try to read past the end of the sequence. Shrinking in Hypothesis then becomes a matter of trying to minimize these finite input bit sequences in the shortlex order (first minimize the length, then minimize lexicographically; the latter lets us do things like shrinking integers towards zero). This lets us escape the local maximum by avoiding all the hard parts of composition: we only ever need to construct a value, and all of the things we manipulate are of a single concrete type, so we never need the reverse transformation back into the original type – shrinking isn’t operating on values at all, just on the underlying representation that is being parsed.
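
As a toy illustration of the idea (invented for this post, and far simpler than Hypothesis’s actual implementation): a generator is a parser of a finite sequence of bits, and drawing past the end fails the parse.

newtype BitParser a = BitParser { runBits :: [Bool] -> Maybe (a, [Bool]) }

instance Functor BitParser where
  fmap f (BitParser p) = BitParser $ \bs ->
    fmap (\(a, rest) -> (f a, rest)) (p bs)

instance Applicative BitParser where
  pure x = BitParser $ \bs -> Just (x, bs)
  pf <*> px = pf >>= \f -> fmap f px

instance Monad BitParser where
  BitParser p >>= f = BitParser $ \bs ->
    case p bs of
      Nothing        -> Nothing              -- a failed draw fails the whole parse
      Just (a, rest) -> runBits (f a) rest   -- otherwise feed what remains onward

drawBit :: BitParser Bool
drawBit = BitParser $ \bs -> case bs of
  []      -> Nothing                         -- reading past the end is a parse error
  (b : t) -> Just (b, t)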

One of these days (hopefully soon) I’ll publish a paper expanding on that, but that’s just a taster and not the important part of this post. The important part of this post is that the public interface of Hypothesis and QuickCheck being a monad gives them very powerful composition properties that makes them much easier to use.

If you’re not writing a fairly general purpose library, you probably don’t need to understand this (not that you shouldn’t try, you just don’t need to). Just use the monadic interface and don’t worry about it too much – people are doing this all the time with specific things that happen to be monads and it works out great for them.

But if you are writing a fairly general purpose library – promises, test-case generation, etc. – then the question “Can I make this type a monad?” seems to push you into a particularly useful corner of the design space, because it enables users to build instances of your type in a very pleasant-to-use way.


3 thoughts on “Monads as an interface for composition”

  1. Thomas Smith

    Yeah! I’ve had a similar experience with parsing libraries. A lot of parser generators put all the magic into a DSL that then has to link with the main language you’re using somehow. And the DSL isn’t a “real” language. Monadic ones like Parsimmon and ParCon are 100x easier to use than anything I had tried before.

    I still don’t know how to *write* a monad, having tried and failed, but I am a very happy end user.

    1. david (post author)

      > I still don’t know how to *write* a monad, having tried and failed, but I am a very happy end user.

      I’m not sure that there’s a single thing that is “how to write a monad”. I think it’s more… you’re writing a thing and you go “Oh that looks like a monad!” and then you figure out how to implement the monad interface. Most of the times I’ve done it (which isn’t a lot) there’s an “obvious” way to do the sequencing operation that’s forced on you by the types. I’m sure there are some more complicated cases where this isn’t true, but I haven’t personally tried to implement any of them (my use of monads is relatively shallow).

      Maybe the thing to do, if this is something you want to learn, is to try to figure out how to write some really basic monads? e.g. start with the identity monad, then the state monad, then say the monad instance for (->) a (i.e. the partially applied type a -> …, so the monad’s type parameter is the right-hand side).
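
      For instance, the identity monad is about as small as they come – something like this (a quick sketch, untested):

      newtype Identity a = Identity { runIdentity :: a }

      instance Functor Identity where
        fmap f (Identity a) = Identity (f a)

      instance Applicative Identity where
        pure = Identity
        Identity f <*> Identity a = Identity (f a)

      instance Monad Identity where
        Identity a >>= f = f a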

      1. Thomas Smith

        Yeah, maybe my problem is that I’m starting with a difficult area (not exactly parsing but close). I got inspired by this post yesterday and read more tutorials (hah). I now feel like I could write Maybe and a few other simple ones. I’ve been looking at the source code of ParSec (the Python version), ParCon, funcparserlib, and so on. It’s confusing because the input of an unbound parsing function is typically the input string, but that input string isn’t part of the monadic value (which is just a position and some results-so-far).

        …Ohhh, by leaving the string out of the monadic value, they have an intermediate stage: applying a parser to the monadic value returns *another function*, rather than a “real” result. The returned function is one that already knows where in the input string it’s going to look for its fragment.
        https://github.com/vlasovskikh/funcparserlib/blob/master/funcparserlib/parser.py#L207
        https://github.com/sighingnow/parsec.py/blob/master/src/parsec/__init__.py#L138-L147

        Anyway, this is barely relevant, so feel free not to publish. Thanks for Hypothesis and your blog :)
