So I wrote a thing for the Lumi blog explaining how the new personalised topic timelines work.
I’ve really enjoyed implementing this chunk of work. Part of that is that I get to play with recommender systems, scoring algorithms, and other fun stuff like that. Part of it is that I’ve been enjoying the process of developing it. It’s been a while since I’ve got to come into an unfamiliar code base of this complexity with a mandate to make sweeping changes to it (the last time was probably about 4-5 years ago at Trampoline Systems, when I had only about 2-3 years of dev experience under my belt and was much less sure what on earth I was doing). So as well as doing the work, I’ve been watching myself do the work: observing how I’m applying the tools of thought I’ve built and sharpened in the meantime, and seeing if anything interesting emerges that I want to pull out of the process and reify into something I can use again.
Well, I spotted an interesting one. It’s something I’ve definitely found myself doing in the past, but I’d previously done it in the reverse order, which is why I hadn’t noticed: when I design a system I usually start with an abstract model which I think is powerful enough to do what I want and flexible enough that I can modify it when I find that it doesn’t. That is, I start with a mental model of how the system is going to work, then add patches to it as I implement it and discover all the ways the messy real world complicates my nice abstract design.
Sometimes it turns out that the abstract design is flawed and I just have to replace it. Most of the time though I find it basically works except for the details, and those details are ones you can ignore unless you’re actually looking right at them, so I basically store it as “Here’s how to think about it. Here are some details that deviate from that”. It’s essentially what I said previously about having mental models as variations on mental models I’m already familiar with, except that the mental model I’m patching against is for something that never actually existed.
What I’ve caught myself doing with the timeline work is reversing this process. Thinking back, this is definitely something I’ve done for a while but I’ve not been quite so overt about it and it feels like it’s worth drawing attention to.
Essentially the idea is that I’ve built a mental model of how I want the timelines to work rather than of how they actually work. Oh, I’ve got a mental model of the grim reality too, but I keep it alongside the shiny pretty one of how it’s supposed to work. I haven’t just pulled this out of thin air; I’m pretty sure it’s quite close to the intentions of the original authors, and I certainly talked to them a lot while building it. Bits of it, though, are definitely wholly my own creation.
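To make that a bit more concrete, here’s a minimal sketch of the kind of shape the aspirational model has in my head. None of this is Lumi’s actual code (the names, the scoring function and all the types are hypothetical, made up for this post); it’s just the “score every candidate post for this user, then sort by score” picture written down as Python.

```python
# A minimal sketch of the aspirational model, with hypothetical names
# throughout (this is not Lumi's actual code): a personalised topic timeline
# is just "score every candidate post for this user, then sort by score".

from dataclasses import dataclass, field


@dataclass
class Post:
    id: str
    topic: str


@dataclass
class User:
    # Hypothetical: per-topic interest weights, learned somewhere else.
    topic_weights: dict[str, float] = field(default_factory=dict)


def score(user: User, post: Post) -> float:
    """Placeholder scoring function: how interesting is this post to this user?"""
    return user.topic_weights.get(post.topic, 0.0)


def timeline(user: User, candidates: list[Post]) -> list[Post]:
    """The whole aspirational model in one line: score everything, sort by score."""
    return sorted(candidates, key=lambda post: score(user, post), reverse=True)
```

The real system obviously doesn’t look like this (there’s fetching, pagination, caching, freshness and so on), but that’s rather the point: this is the shape I design against, not the code that actually runs.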
How do I use this mental model? Well, part of it is just as a vehicle for understanding: well-abstracted models are easier to reason about. But I’m not that bad at reasoning about the detailed, realistic model either.
I think the main use of this model, though, is as a vehicle for change. When I’m trying to reason about how to improve things or add a new feature, I first design it against the aspirational mental model of how I want things to work. I then check it against the real model of how things actually work. Sometimes the result is that my idealised version is completely impractical, and I stick it on the shelf of ideas that don’t currently work but that I should remember to look at again later. Sometimes the result is that it will work fine with no real difficulties and I just implement it.
Often, though, the result is something more interesting: it will almost work, but there’s some detail where the real and the idealised versions of how things work don’t quite line up.
This can then be resolved in one of two ways. One is that the mismatch has exposed a flaw in the aspirational mental model and I have to update it (this doesn’t happen too often). The other is that I fix the code to look more like the aspirational mental model.
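That second move is usually something quite small. Continuing the toy sketch from earlier (same made-up User, Post, score and timeline; again, not anything from the real code base), it might look like pulling scoring out of wherever it has got tangled up, so that the real entry point delegates to the same score-and-sort shape the aspirational model uses.

```python
# Hypothetical "before": scoring is tangled up with fetching, so nothing in the
# real code corresponds to the idealised timeline() from the earlier sketch.
def build_timeline_page(user: User, fetch_candidates, limit: int = 50) -> list[Post]:
    scored = []
    for post in fetch_candidates(limit):
        scored.append((user.topic_weights.get(post.topic, 0.0), post))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in scored]


# Hypothetical "after": the entry point keeps its awkward fetching details, but
# now delegates to the same score-and-sort shape the aspirational model uses.
def build_timeline_page_refactored(user: User, fetch_candidates, limit: int = 50) -> list[Post]:
    return timeline(user, list(fetch_candidates(limit)))
```

Nothing about the behaviour changes here; the point is just that after the refactor the code and the aspirational model agree about where scoring lives, so the next feature I design against the model is that bit easier to land.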
And this is really what I mean when I say that the model is aspirational. It’s not exactly a design document; it’s more of a tool to guide refactoring. It lets me simultaneously hold the “system I would design if I could do a complete rewrite” in my head and not do that complete rewrite, but still get about 60% of the benefits of doing so by fixing the code whenever the fact that I haven’t gets in the way.
I’m still observing the effects of this and trying to decide whether it’s a genuinely good idea or whether I’m over-fitting, but so far it seems pretty good.