Programmer at Large: How old is this?

This is the latest chapter in my web serial, Programmer at Large. The first chapter is here and you can read the whole archives here. This chapter is also mirrored at Archive of Our Own.


I awoke to the usual morning startup screen – diagnostics for my sleep, news, patches to my software, etc. Nothing very exciting. I was being gently chided for not having had a proper dinner yesterday – two protein bars after the gym really don’t count – but the system had replaced the worst of it while I slept and informed me I should just have a large breakfast to compensate for the rest. Other than that, all systems looked good.

I acknowledged the instruction for breakfast and spent another hundred or so seconds reviewing the data, but it was all pretty boring.

The correct thing to do at that point would of course have been to shower, get dressed, and go get some food as instructed, but something was nagging at me from yesterday’s work… I checked, and it was fine for me to put off breakfast for another 5 ksec or so (though I got the irrational sense that the system was judging me for it), so I decided to just go straight to work.

“Ide, how many temporary patches do we have in the system?”

“Four million, five hundred thousand, one hundred and seven.”

I squeaked slightly.

“WHAT?”

“Four million, five hundred thousand, one hundred and seven.”

OK, fine. I was expecting a large number. I wasn’t expecting it to be that large – I’d have thought maybe ten thousand at the upper limit – but it didn’t actually make a difference. Either way it was too large to handle manually, so it was just a question of getting the software right.

Still, more than four million? I know there’s nothing so permanent as a temporary fix, but that’s just ridiculous.

“Ide, what sort of time frame does that span?”

“The oldest is approximately 0.26 teraseconds old.”

“Wow, OK.”

That didn’t make any sense. The current patch system isn’t even that old. The trade fleet is barely that old.

“Ide, how are you defining temporary patch?”

“I have a complex heuristic subprogram that indexes logical patches from either the fleet system or imported software strata and looks at metadata and comments to flag them as temporary.”

“Oh, right. If you just look at trade fleet authored patches in the modern system which have been explicitly flagged as temporary how many are there?”

“One million, sixty-two thousand and eight.”

“Ugh. And when is the oldest of those?”

“Approximately 0.15 teraseconds old.”

This was not going to be a productive line of inquiry, but curiosity got the better of me once again.

“OK, show me the oldest.”

ID: 3b2ca7b197f9c65e883ef177178e20e6bb14b...
Author: Demeter [bim-bof 3 of the Entropy's Plaything née Wild Witless Bird]
Revert-On: Closure of f265957e0a2...

Add a flag that deliberately ties the theorem prover's hands by restricting
the set of valid interleavings when running in time travel mode.

Why? Well it turns out that *some* Linux descendants have a very peculiar idea
of how x86 is supposed to work. This idea is backed up by neither the spec,
nor by the actual physical machines that existed back when x86 was still a
real thing rather than an interop standard.

How did that happen? Well this wasted code comes from a descendant of the
"Corewards Bound", who at some point introduced a bug in their
implementation which made things run faster and didn't obviously break any of their
software. When they found this problem a few hundred gigaseconds later they decided
to patch Linux instead of their misguided grounder-inspired broken emulation software.
Nobody backed it out later, it got passed down through three generations of ships,
and finally got handed over to us and now we're stuck with it.

This patch is stupid and should go away once the referenced issue is resolved.

I looked at the patch. It was some really hairy logic right down at the formalism layer for one of our emulators. I had absolutely no idea whatsoever what was going on in it. I didn’t know the language it was written in; I can barely read either x86 or the intermediate representation it was being compiled to, even with Ide’s assist; and besides, I don’t know half of the maths required to understand why it was doing what it was doing.

The referenced issue was about patching Linux to not depend on this broken behaviour. It had reached almost a million words of discussion before just trailing off – I think given the timescales involved everybody who cared about it had died of old age. Or maybe we just lost touch with them – neither this code nor the code it patched were from anywhere in our direct lineage.

“Ide, how many services are running with this flag set?”

“Seven”

I breathed a sigh of relief. That was a much better answer than I was afraid of.

“When was the last time any of them had a bug?”

“No bugs have ever been reported in these services.”

“OK. How about in their service categories?”

“Approximately 80 gigaseconds ago.”

“What was it?”

“The Untranslatable Word passed through an area with an unusually high helium mix in the local interstellar medium. This increased the crash rate of the process by 15%, which triggered a maintenance alarm.”

“How was it fixed?”

“The alarm threshold was raised by 50%.”

OK. So I’d found a weird hack in the implementation of some extremely reliable services. My duty was clear: Do nothing, touch nothing, make like a diasporan and leave at extreme velocity. I was more likely to break something by trying to “fix” it than do any good by touching this.

Time to back up and look at the actual problem.

“OK. How many temporary patches are there that were created on the Eschaton Arbitrage which apply to some but not all running processes in a category, have a trigger to revert on a hardware change, but predate our last planetary stop?”

“Nine”

OK. Now we were getting somewhere.

I spent the next few ksecs triaging those nine manually. They all looked pretty harmless, but I bet there were some gremlins that they’d flush out when the relevant teams looked into them. That definitely wasn’t going to be my job though.

After that, I wrote up a wiki entry about this problem and filed an issue making some general suggestions for how we could improve the systems around this to stop it happening again. I wasn’t very optimistic it would go anywhere, but it was at least worth making the effort.

At which point, I decided it really was time for breakfast, and headed to the showers to get ready for the day.

I showered quickly, dressed, and spent a few hundred seconds dealing with my hair. I found the style my schema gave me a bit boring so I spent a little while tweaking the parameters to give something less symmetrical.

Eventually I got my hair decisions resolved, and headed for the common area for my much delayed breakfast.


Next chapter will be in two weeks (April 7th).

Like this? Why not support me on Patreon! It’s good for the soul, and will result in you seeing chapters as they’re written (which is currently not very much earlier than they’re published, but hopefully I’ll get back on track soon).

This entry was posted in Fiction, Programmer at Large.

How and why to learn about data structures

There’s a common sentiment that 99% of programmers don’t need to know how to build basic data structures, and that it’s stupid to expect them to.

There’s certainly an element of truth to that. Most jobs don’t require knowing how to implement any data structures at all, and a lot of this sentiment is just backlash against using them as part of the interview process. I agree with that backlash. Don’t use data structures as part of your interview process unless you expect the job to routinely involve writing your own data structures (or working on ones somebody has already written). Bad interviewer. No cookie.

But setting aside the interview question, there is still a strong underlying sentiment of this not actually being that useful a thing to spend your time on. After all, you wouldn’t ever implement a hash table when there’s a great one in the standard library, right?

This is like arguing that you don’t need to learn to cook because you can just go out to restaurants.

A second, related, point of view is that if you needed to know how this worked you’d just look it up.

That is, you don’t need to learn how to invent your own recipes because you can just look it up in a cook book.

In principle both of these arguments are fine. There are restaurants, there are cook books, not everybody needs to know how to cook and they certainly don’t need to become a gourmet chef.

But nevertheless, most people’s lives will be improved by acquiring at least a basic facility in the kitchen. Restaurants are expensive and may be inconvenient. You run out of ingredients and can’t be bothered to go to the store so you need to improvise or substitute. Or you’re just feeling creative and want to try something new for the hell of it.

The analogy breaks down a bit, because everybody needs to eat but most people don’t really need to implement custom data structures. It’s not 99%, but it might be 90%. Certainly it’s more than 50%.

But “most” isn’t “all”, and there’s a lot of grey areas at the boundary. If you’re not already sure you need to know this, you can probably get on fine without learning how to implement your own data structures, but you might find it surprisingly useful to do so anyway. Even if you don’t, there are some indirect benefits.

I’m not using this analogy just to make a point about usefulness, I also think it’s a valuable way of looking at it: Data structures are recipes. You have a set of techniques and tools and features, and you put them together in an appropriate way to achieve the result you want.

I think a lot of the problem is that data structures are not usually taught this way. I may be wrong about this – I’ve never formally taken a data structures course because my academic background is maths, not computer science, but it sure doesn’t look like people are being taught this way based on the books I’ve read and the people I’ve talked to.

Instead people are taught “Here’s how you implement an AVL tree. It’s very clever” (confession: I have no idea how you implement an AVL tree. If I needed to know I’d look it up, right?). It’s as if you were going to cookery school and they were taking you through a series of pages from the recipe book and teaching you how to follow them.

Which is not all bad! Learning some recipes is a great way to learn to cook. But some of that is because you already know how to eat food, so you’ve got a good idea what you’re trying to achieve. It’s also not sufficient in its own right – you need to learn to adapt, to combine the things you’ve already seen and apply the basic skills you’ve learned to solve new constraints or achieve new results.

Which is how I would like data structures to be taught. Not “Here is how to implement this named data structure” but “Here is the set of operations I would like to support, with these sorts of complexities as valid. Give me some ideas.”

Because this is the real use of learning to implement data structures: Sometimes the problem you’re given doesn’t match the set of data structures you have in the standard library or any of the standard ones. Maybe you need to support some silly combination of operations that you don’t normally do, or you have an unusual workload where some operations are very uncommon and so you don’t mind paying some extra cost there but some operations are very common so need to be ultra fast.

At that point, knowing the basic skills of data structure design becomes invaluable, because you can take what you’ve learned and put it together in a novel way that supports what you want.

And with that, I’ll finish by teaching you a little bit about data structures.

First, let’s start with a simple problem: given a list of N items, I want to sample from them without replacement. How would I do that with O(N) initialisation and O(1) sample?

Well, it’s easy: You create a copy of the list as an array. Now when you want to sample, you pick an index into the array at random.

Now that you have that index that gives you the value to return. Replace the value at that index with the value that’s at the end of the array, and reduce the array length by one.

Here’s some python:

def sample(ls, random):
    """Sample one element from ls without replacement.

    Mutates ls: the chosen element is removed by swapping the last
    element into its slot and shrinking the list by one.
    """
    i = random.randint(0, len(ls) - 1)  # randint is inclusive at both ends
    result = ls[i]
    ls[i] = ls[-1]  # fill the hole with the last element
    ls.pop()        # drop the now-duplicated tail
    return result

Now that I’ve given you a recipe to build on, let’s see you improve upon it!

  1. If you assume the list you are given is immutable and you can hang onto it, can you improve the initialisation to O(1)? (you may need to make sampling only O(1) amortised and/or expected time to do this. Feel free to build on other standard data structures rather than inventing them from scratch).
  2. How would I extend that data structure to also support a “Remove the smallest element” operation in O(log(n))? (You may wish to read about how binary heaps work). You’ll probably have to go back to O(n) initialisation, but can you avoid that if you assume the input list is already sorted?
  3. How would you create a data structure to support weighted sampling with rejection? i.e. you start with a list of pairs of values and weights, and each value is sampled with probability proportionate to its weight. You may need to make sample O(log(n)) to do this (you can do it in expected O(1) time, but I don’t know of a data structure that does so without quite a lot of complexity). You can assume the weights are integers and/or just ignore questions of numerical stability.
  4. How would you add an operation to a hash table that returns a key selected uniformly at random? (If you haven’t read about how pypy dicts work you may wish to read that first)
  5. How would you extend a hash table to add an O(log(n)) “remove and return the smallest key” operation, at the cost of increasing the insert complexity to O(log(n))? Can you do it without adding any extra storage to the hash table?

These aren’t completely arbitrary examples. Some of them are ones I’ve actually needed recently, others are just applications of the tricks I figured out in the course of doing so. I do recommend working through them in order though, because each will give you hints for how to do later ones.
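To give a flavour of what the first exercise is after, here’s one possible sketch (not the only answer, and the class and names are mine): rather than copying the list up front, keep a dictionary of “overrides” recording which slots no longer hold their original value, so the copy happens lazily.

```python
import random

class LazySampler:
    """Sample without replacement from a list we treat as immutable.

    Initialisation is O(1): instead of copying the list up front we
    record displaced values in an `overrides` dict, so each sample is
    O(1) expected time against the dict.
    """

    def __init__(self, ls):
        self.ls = ls          # never mutated
        self.size = len(ls)   # number of values still unsampled
        self.overrides = {}   # index -> value that now lives there

    def sample(self, rnd=random):
        if self.size == 0:
            raise IndexError("sample from an empty sampler")
        i = rnd.randrange(self.size)
        result = self.overrides.get(i, self.ls[i])
        # Move whatever occupies the last live slot into the hole.
        last = self.size - 1
        last_value = self.overrides.pop(last, self.ls[last])
        if i != last:
            self.overrides[i] = last_value
        self.size -= 1
        return result
```

This is exactly the same swap-with-the-end recipe as above, just with the mutation moved into a side dictionary.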

You may never need any of these combinations, but that doesn’t matter. The point is not that these represent some great innovations in data structures. The point is to learn how to make your own data structures so that when you need to you can.

If you want to learn more, I recommend just playing around with this yourself. Try to come up with odd problems to solve that could be solved with a good data structure. It can also be worth learning about existing ones – e.g. reading about how the standard library in your favourite language implements things. What are the tricks and variations that it uses?

If you’d like to take a more formal course that is structured like this, I’m told Tim Roughgarden’s Coursera specialization on algorithms follows this model, and the second course in it will cover the basics of data structures. I’ve never taken it though, so this is a second hand recommendation. (Thanks @pozorvlak for the recommendation).

(And if you want to learn more things like this by reading more about it from me, support me on Patreon and say so! Nine out of ten cats prefer it, and you’ll get access to drafts of upcoming blog posts)

This entry was posted in programming, Python.

An untapped family of board game mechanics

Have you noticed how board games are in dire need of electoral reform?

No, wait, seriously. Hear me out. This makes sense.

In most board games, through a series of events, you acquire points (votes), where each point (vote) goes to at most one player, and then at the end the person with the most points (votes) wins outright, regardless of how narrow their margin is.

It’s literally plurality voting, and it leads to a number of the same problems.

One of those problems is that it amplifies small leads quite substantially. e.g. take Scrabble. I play a lot of Scrabble because it’s more or less our family game. I also win a lot of Scrabble. Sometimes comfortably if I manage to get that second bingo out, but often by fairly small margins – 10-20 points in a 300-400 point game is not a large victory, but with plurality victory that doesn’t matter, it’s still a victory.

It also creates spoiler effects, where you have players who obviously can’t win but still participate in pile-ons to people in the lead. e.g. if you’ve ever played a Steve Jackson game (Munchkin, Illuminati, etc) you’ve probably seen this in action – “They’re about to win! Get them!”.

Certainly not all games match this description: e.g. you get a lot of games where rather than scoring points there is a defined victory condition – war games where you take a rather different view of politics and must kill all your opponents (e.g. Risk), or hidden mission games where you must achieve some specific goal unknown to the others (e.g. Chrononauts). I’ll consider that sort of game out of scope here.

Some games you play until someone has hit some defined score and then they immediately win (e.g. Love Letter). This is a bit like majoritarian vote systems (where you only win if you get more than 50% of the vote), but not really.

So the analogy isn’t perfect, but I think it has enough merit for the games it applies to to be worth exploring how things can be different. If nothing else it might be an untapped source of interesting game mechanics.

Even within the scope of games which look like elections if you squint at them hard enough there’s some variation.

For example, I’m aware of at least one game which uses random ballot: Killer Bunnies and the Quest for the Magic Carrot.

In this game you acquire carrots (votes), then at the end of the game a winning carrot (vote) is drawn at random and the person who owns that carrot (had that vote cast for them) wins (is elected). So if you own 60% of the carrots then you have a 60% chance of winning.
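In code, the mechanic is just a weighted draw (a sketch; `carrots` maps each player to their carrot count):

```python
import random

def random_ballot_winner(carrots, rnd=random):
    """Draw one carrot uniformly at random; its owner wins."""
    players = list(carrots)
    weights = [carrots[p] for p in players]
    return rnd.choices(players, weights=weights)[0]
```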

Given that I’m generally really rather keen on random ballot it may come as some surprise that I think this is a terrible game mechanic.

The problem with random ballot in this context is twofold: random ballot isn’t great for presidential-style single-winner elections, and in the context of a game the players (politicians) matter more than the carrots (voters). Voters have a right to be well represented in the decision of which politician eats them; carrots don’t.

If the game were very short it would probably be OK, but getting to the end of a long game and then having victory be decided by quite such a large amount of luck is rather frustrating. I think there are ways to fix it and make random ballot a fun game mechanism, but I also worry that it’s a bit too close to existing scoring mechanisms and where it produces different answers players will mostly find it frustrating.

So instead I’d like to look at how you could use an entirely different class of electoral system to produce this effect: Condorcet systems.

The idea of a Condorcet system is that instead of focusing on point scoring you focus on pairwise victories: Which player beats which other player. If there is a player who beats every other player in a one on one victory, they are the Condorcet winner and win outright.

Depending on who you ask, electing the Condorcet winner is either a terrible or a great idea (my personal biases are “It depends what you’re electing them for but all else being equal the Condorcet winner is probably a pretty good choice”), but that doesn’t actually matter here, because the question is not whether it’s a great election system but whether it leads to an interesting game design!

And I think it does.

When there is no Condorcet winner interesting things happen where you get rock-paper-scissors like events where the majority prefers A to B, B to C and C to A. These are called Condorcet cycles. In this case you need some sort of alternate rule to decide which player is most like the Condorcet winner (this can also happen if you get candidates who are tied because an equal number of people prefer each to the other, but you can avoid this possibility by just having an odd number of voters).

There are a wide variety of systems for deciding who the “most Condorcet” candidate is when there is no true Condorcet winner. These range from the very simple to the very complicated, but there’s a particularly simple Condorcet system that I think is very suitable for game design.

It works as follows: You pick an incumbent (usually at random). Then every other candidate (player) challenges the incumbent in some order (usually random). If the majority strictly prefers them (they beat them according to some as of yet unspecified mechanism) then the incumbent drops out and the challenger becomes the incumbent, else the challenger drops out. Once all but one candidate has dropped out, the remaining incumbent is the winner.
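Sketched in code (the `beats` predicate is abstract here – in a game it would be whatever head-to-head rule you pick):

```python
import random

def sequential_runoff(players, beats, rnd=random):
    """Pick a winner by the incumbent/challenger procedure.

    `beats(challenger, incumbent)` returns True if the challenger
    strictly beats the incumbent head to head; on a tie the
    incumbent stays on.
    """
    order = list(players)
    rnd.shuffle(order)               # random starting incumbent
    incumbent, *challengers = order  # and a random challenge order
    for challenger in challengers:
        if beats(challenger, incumbent):
            incumbent = challenger
    return incumbent
```

With a transitive `beats` this always returns the Condorcet winner regardless of the draw order; with a cyclic one (rock beats scissors beats paper beats rock) the result depends on the order, which is exactly what makes it interesting.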

There are a number of free variables in how you could turn this into a game mechanic:

  1. Who gets to be starting incumbent?
  2. What determines who wins in a head on head fight?
  3. What order do people challenge in?
  4. Who wins ties?

However you pick these free variables though, I think it’s most interesting if you do this in such a way that allows the possibility of Condorcet cycles (if you don’t it’s really just another scoring system). In order to do this you need something that looks like at least three voters.

The easiest way to do this might be something like the following:

The game consists of resource acquisition in some manner. You have three resources and treat each as a vote. Each resource is claimed by whichever of the two players owns strictly more of it. If either player claims more resources than the other, that player wins. Otherwise, apply some tie breaking procedure.
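A minimal sketch of that pairwise rule, assuming each player is just a dict mapping the three resources to amounts (the function and resource names are mine):

```python
def head_to_head(a, b, tiebreak):
    """Decide a pairwise fight between players a and b.

    a and b map each resource name to an amount.  A resource is
    claimed by whoever holds strictly more of it; whoever claims
    more resources wins, and `tiebreak` settles anything else.
    """
    a_claims = sum(1 for r in a if a[r] > b[r])
    b_claims = sum(1 for r in b if b[r] > a[r])
    if a_claims > b_claims:
        return a
    if b_claims > a_claims:
        return b
    return tiebreak(a, b)
```

Because each resource is claimed independently, it’s easy to construct three players where A beats B, B beats C, and C beats A – the Condorcet cycle the mechanic is after.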

Starting from that concept, you can elaborate it into a full game. The following is a toy game based on this that might work well (but would probably need significant refining through play testing). It’s called “The King is Dead!”

The king is on his death bed and has no natural heir. He has named one, but the nobles are competing to change his mind, and regardless of who he chooses they might not last long enough to take the throne.

There are a few key nobles who might win, but their chances of victory are slim without the support of the two key factions of the kingdom: The peasants, and the clergy. The nobles must use their influence at court to curry favour with these two factions, but be careful! If you use all your influence outside the court, there may be no-one left to support you when you fall afoul of court intrigue.

Game setup:

  • An heir is picked at random from the players.
  • Three cards per player are drawn from a larger deck and are shuffled together to form the issue deck.
  • Each player is given twelve influence tokens.

Each issue card has a number of clergy points and a number of peasant points on it. It may also have a crown on it.

The game is played in rounds which proceed as follows:

  1. A card from the top of the deck is revealed.
  2. The card is now auctioned off with an English auction: The heir bids zero on the card and then play proceeds clockwise, with each player either placing a higher bid or dropping out. Once everyone has dropped out except for the last bidder, that bidder gets that card, places it in front of them, and puts their bid in the centre temporarily.
  3. If the card had a crown on it, the person who won it now becomes heir.
  4. The bid is now redistributed amongst the players: one token at a time, starting from the heir (the new one if it changed hands!) and proceeding clockwise until all the tokens have been redistributed.

This process repeats until the deck is out of cards. The king then dies, and a swift but bloody war of succession occurs.

Starting to the left of the current heir and proceeding clockwise, each player (not counting the player who was heir when the king died) gets the opportunity to challenge the current heir.

A challenge pits the two players against each other as follows:

The two players are compared on three scores: their total support from the peasantry, their total support from the clergy, and their total influence. For each score, the player with the strictly higher total gains a victory.

If one of the two has more victories than the other, that player wins. If they are tied, the current heir wins.

Whichever player won is now the new heir. If there are any more challengers left, the next challenger steps up and challenges the heir, otherwise they win and are crowned. Long live the king!

This game definitely needs play-testing and will almost certainly have a number of problems with it, but I think it’s an interesting corner of game design that I haven’t seen anything much like before.

The thing I also like about it is that it creates a much more interesting victory dynamic in most cases: when someone wins really decisively they still cream everybody else, but the back and forth between the three resources more or less guarantees that nobody will be a Condorcet winner unless everyone else played really badly – the more they spent to get that faction support, the more influence they gave to other players that they could use to claim some other support. This makes things tense right up until the end game.

More generally, I think there’s a rich seam of game design ideas in electoral theory that hasn’t been tapped much. There’s a theorem (the Gibbard–Satterthwaite theorem) that says that almost any voting system admits tactical voting. For electoral system design this is a real pain, because systems which encourage tactical voting have a number of problems, but for game design it’s perfect, because it means that there’s this giant body of well studied and powerful mechanics that are full of opportunities for tactical play.

(Like my writing? Want to see more of it? Support me on Patreon! Patreon subscribers saw this post months ago, because all my drafts get posted there when they’re ready)

This entry was posted in Games, voting.

Programmer at Large: Who wrote this?

This is the latest chapter in my web serial, “Programmer at Large”. Previous chapters are best read at the Archive of our Own mirror. This chapter is also mirrored there.


I was only asleep for about a kilosecond before I started getting an alert from the system. It was quite reasonably informing me that if I insisted on sleeping in the hot tub then perhaps I should consider doing it face up?

I certainly hadn’t started face down in the water, so apparently I’d rolled over in my sleep at some point. I drifted for another couple of tens of seconds and then finally decided to acknowledge that OK yes breathing was useful. I rolled back over and sighed dramatically (important to get the order of those right).

I really hadn’t wanted to fall asleep there. Unsupervised sleep is awful even in zero gravity. In gravity you also have to deal with nonsense like which way up you sleep. I mean really, why should that matter?

Between the exercise, the heat, and the bad sleep I now had an annoying nagging headache and an overwhelming urge for food, water and painkillers, more or less in increasing order of priority.

I put in an order to the nearest delivery slot and heaved my way out of the hot tub to go have a shower while I waited for them to arrive.

I’m not too proud to admit that I shrieked when the cold water hit my head. It’s supposed to be very good for you after the hot tub but I was still half asleep and even at the best of times I usually manage to forget that these showers aren’t kept at a sensible temperature.

I lasted about 40 seconds before I decided enough was enough. I did feel better afterwards, but I swear that’s mostly because of how glad I was to have it stop.

Yes, I know that what’s good for you isn’t the same as what you enjoy. I’ve heard it enough times by now.

I dried off and quickly put my hair up into a bun – I really couldn’t be bothered to do it properly at that point – and by the time I was done with that the delivery had arrived, so I padded over to the nearby delivery point and tore into it.

I put the painkillers in my shunt and gulped down most of the electrolytic drink. Once my thirst had been slaked and the painkillers had kicked in, I turned my attention to the protein bars and devoured them in a few bites.

None of it would be particularly tasty in normal circumstances, but post-gym hunger is a harsh master and salt, water, sugar and protein was exactly what I needed at that point.

I removed the empty painkiller capsule and put it and the empty packaging back in the compartment to be taken for recycling. I figured it would take a while for things to kick in, so now was as good a time as any to get around to that meditation I’d been putting off.

I sat down cross-legged (I can do a full lotus, but I couldn’t really be bothered. I know it’s cultural, but I also know the science says it doesn’t help) and called up my program. In the end it took me almost two kiloseconds to work through it – I’m not very good at meditation in the first place, and I was still feeling a bit twitchy from my impromptu nap, but eventually I got my mind into the right state and after that it proceeded more smoothly.

By the end of my meditation I was feeling a lot more human. My headache had subsided, along with the hunger and thirst. I went through some finishing stretches to undo the sitting – yet another reason why gravity is awful. Those finished, I fetched a clean uniform from the wall and changed into it.

I called up an image of myself and quickly checked my hair – I still couldn’t be bothered with more than a bun, but there’s no point in looking outright scruffy – and fixed a few stray bits at the back that I’d missed.

I decided I’d really had enough of people for now, so I got the transit chair back to the main ship and tucked myself away in a quiet pod to work on my Evolve strategies for a kilosecond or five.

Eventually, though, I got curious about work, and my schedule told me I was unlocked for it again and was welcome to resume if I wanted to, so I did.

I’d left some of my prototype zombie hunters running while I was away. They weren’t reporting anywhere except privately to me – I was sure they had bugs in them, but looking at some of the answers they gave now and seeing if they were right would help me figure out what those bugs were.

“Ide, how many potential zombies have my new hunters flagged up?”

“147”

“Ugh. All right, show me.”

I spent some time looking through the list and filtering things down. A bunch were false positives, as expected – some interfaces that I had treated as read-only in my original criteria were actually read/write but used some obscure different convention for historical reasons – but eventually, after some filtering, I’d narrowed it down to 31 that were probably legitimate.

After a while of scanning through them at a high level and doing some basic triage I spotted one that looked interesting. I dug into it for a couple of kiloseconds until I was sure I understood what was going on, but it was exactly what it looked like.

Which left me with a dilemma: I was going to have to tell Kimiko about this. I didn’t want to seem too needy though, so it felt a bit soon to get in touch with them.

I dithered for a couple hundred seconds, but eventually concluded that I was being stupid. Even if I’d never met them I’d want to contact them about this, so putting off telling them about it because I did know who they were was just ridiculous.

I checked their status and they were apparently awake and working, so I opened up a line.

Arthur [vic-taf]: Hey, Kimiko?
Kimiko [jad-nic]: Oh, hey Arthur. What’s up?
Arthur [vic-taf]: I found some broken processes when I was looking into that bug for you, which has sent me off on a zombie hunt. It’s been running for the last couple of tens of kiloseconds and it surfaced something you should probably know about.
Kimiko [jad-nic]: That’s great, I’d love to hear about it later, but I’m kinda in the middle of figuring out this yeast problem, so do you mind if we take a pause on you telling me about it?
Arthur [vic-taf]: Actually this is about the yeast problem. I think. Maybe. Did you know the nutrient feed for the vat it’s in isn’t working properly?
Kimiko [jad-nic]: … what.
Arthur [vic-taf]: The feedback loop isn’t running properly – the process that’s monitoring the nutrient levels in the vat has the control part of it patched out, so the feed is just defaulting to a standard rate of flow.
Kimiko [jad-nic]: Argh, waste it. That would do it. This yeast uses slightly more feed than the normal batch, so it’s eating through the available feed stock and then doesn’t have enough to replace it. No wonder the little wasters are going sexual.
Arthur [vic-taf]: Oh good. I wasn’t sure it was relevant, but it seemed too much of a coincidence to ignore.
Kimiko [jad-nic]: Yeah this is absolutely relevant and you’ve probably just saved me a couple tens of kiloseconds of work debugging this. Thanks! [200 millivotes kudos attached]. But why on the ground is that happening?
Arthur [vic-taf]: Apparently the control sensor on it broke when we were last interstellar and we didn’t have replacement parts, so it was patched out as a temporary fix.
Kimiko [jad-nic]: Yeah, I remember that, but we replaced the sensors in-system and that should have triggered the reset condition, right?
Arthur [vic-taf]: Well it should have, but we picked up a new design from the locals and it uses a new interface, but the patch was expecting the old interface so the reset didn’t trigger.
Kimiko [jad-nic]: So who did the patch anyway? I should give them a rap on the knuckles.
Arthur [vic-taf]: Oh, uh, heh. Funny story… [Patch reference attached]
Kimiko [jad-nic]: Huh. I do not remember doing that at all.
Arthur [vic-taf]: Sorry.
Kimiko [jad-nic]: No biggie. Anyway, I’m going to go untangle this and see if I can prove this was all that was going on. Thanks again!
Arthur [vic-taf]: No problem! Happy to help. Good luck.

I closed off the link.

That was satisfying. Even if the plumbing bug turned out to be completely innocuous, this line of work had proved genuinely useful. Sure, they’d have figured out the bug with the vat in the end, but that wasn’t the point. The point was that the zombie detection worked, and it worked well enough that if we’d been running it we’d have caught this bug before it actually ruined a batch of yeast.

Which, I decided, made this a good point to down tools. I was still feeling a bit off from my nap earlier, but some proper sleep would fix that.

I shut down my workspace, plugged into the wall, and initiated my sleep program. The lights dimmed, and after a few tens of seconds I was once again fast asleep.


Like my work and want to say thanks? Or just want to see more of it in advance? Support me on Patreon! You’ll get access to early drafts as and when they’re completed (right now I’m a bit busy with PhD applications – this chapter only got written late last night – but you’ll also see drafts of other blog posts).

This entry was posted in Fiction, Programmer at Large.

Looking into doing a PhD

As regular readers of this blog have probably figured out, I’m a researchy sort of person.

A lot of my hobbies – maths, voting theory, weird corners of programming, etc – are research oriented, and most of my work has had some sort of research slant to it.

For the last two years I’ve basically been engaged in a research project: working on Hypothesis. It’s come quite far in that time, and I feel reasonably comfortable saying that it’s the best open source property-based testing library by most metrics you’d care to choose. It has a number of novel features and implementation details that advance the state of the art.
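For readers who haven’t run into property-based testing before, the core idea can be sketched in a few lines of plain Python. This is a toy illustration of the concept, not how Hypothesis actually works – the real library adds counterexample shrinking, a database of past failures, and far smarter input generation – and all the names here are made up for the example:

```python
import random

def check_property(property_fn, generate, n_examples=200):
    """Generate random inputs and check that a property holds for each.

    Returns the first counterexample found, or None if every example passes.
    """
    for _ in range(n_examples):
        example = generate()
        if not property_fn(example):
            return example
    return None

def sort_is_idempotent(xs):
    # Property under test: sorting twice gives the same result as sorting once.
    return sorted(sorted(xs)) == sorted(xs)

def random_int_list():
    # Generator: lists of up to 20 small integers.
    return [random.randint(-100, 100) for _ in range(random.randint(0, 20))]

counterexample = check_property(sort_is_idempotent, random_int_list)
```

Instead of the author hand-picking a few example inputs, the test states a general property and the library hunts for an input that violates it – which is where most of the interesting research problems (generation strategies, shrinking failing examples to minimal ones) come from.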

It’s been pretty great working on Hypothesis like this, but it’s also been incredibly frustrating.

The big problem is that I do not have an academic background. I have a master’s in mathematics (more technically I have a BA, an MA, and a CASM. Cambridge is weird. It’s entirely equivalent to a master’s in mathematics though), but that’s where I stopped. Although it says “DR” in my online handle and the domain of this blog, those are just my initials and not my qualification.

As a result, I have little to no formal training or experience in doing academic research, and a similarly low understanding of who’s who and what’s what within the relevant fields. So I’ve been reading papers and trying to figure out the right people to talk to all on my own, and while it’s gone OK it’s still felt like fumbling around in the dark.

Which leads to the obvious solution that I spoilered in the title: If the problem is that I’m trying to do research outside of an academic context, the solution is to do research in an academic context.

So I’d like to do a PhD that is either about Hypothesis, or about something close enough to Hypothesis that each can benefit from the other.

There’s probably enough novel work in Hypothesis already that I could “just” clean it up, factor it out, and turn it into a PhD thesis as it is, but I’m not really expecting to do that (though I’d like that to be part of it). There are a number of additional directions that I think it would be worth exploring, and I expect most PhD funding will come with a focus subject attached which I would be happy to adapt to (a lot of the most interesting innovations in Hypothesis came because some external factor forced me to think about things in ways I wouldn’t otherwise have!). If you’d like to know more, I’ve written up a fairly long article about Hypothesis and why I think it’s interesting research on the main Hypothesis site.

Which, finally, brings me to the main point of the post: What I want from you.

I’m already looking into and approaching potential universities and interesting researchers there who might be good supervisors or able to recommend people who are. I’ve been in touch with a couple (some of whom might be reading this post. Hi), but I would also massively appreciate suggestions and introductions.

So, if you work in relevant areas or know of people who do and think it would be useful for me to talk to, please drop me an email at [email protected]. Or just leave a comment on this blog post, tweet at me, etc.

This entry was posted in Hypothesis, life, programming, Python.