Author Archives: david

Stargate physics 101

This is a piece of actual Stargate fan fiction I wrote (as opposed to the previous pieces, which just sketched some things out). However, after some testing on a focus group (my flatmates), it appears to be perfectly comprehensible and entertaining even if you’ve never seen the series. You’ll miss a bunch of the in-jokes, but there are enough out-jokes for you to get. Because what it actually is, is a sci-fi comedy about software testing.

The main things you need to know from the series are:

  1. The system described is in use as the still functioning artefacts of a lost civilisation millions of years later.
  2. The majority of the ridiculous behaviours described are both still present at that time and believed to be fundamental features of wormhole physics.
  3. About half of the scenarios described occur at some point in the series.

Note: The story used to be here, but has now moved to AO3. Go read it there.

This entry was posted in Fiction, Stargate on by .

Honey I shrunk the clones: List simplification in Hypothesis

Simplification in Hypothesis is something of a vanity feature for me. I spend significantly more development effort on it than it strictly warrants, because I like nice examples. In reality it doesn’t really matter if the simplest value of a list of integers with at least three duplicates is [0, 0, 0] or [-37, -37, -37] but it matters to me because the latter makes me look bad. (To me. Nobody else cares I suspect)

So I was really pleased that I got simultaneous simplification in as a feature for Hypothesis 1.0. If a list contains a bunch of duplicate values (which, thanks to templating, is easy to check for – all templates are hashable and comparable for equality), then before trying to simplify them individually, Hypothesis tries to simplify them all at once in a batch.

As well as solving the above ugly example, this turns out to be really good for performance when it fires. Even if your test case has nothing to do with duplication, a lot of the time there will be elements in the list whose value fundamentally doesn’t matter. e.g. imagine that all that is needed to trigger a failure is a list of more than 100 elements. The individual elements don’t matter at all. If we happen to have produced an example where we have a list of 100 elements of the same value, we can simplify this 100x as fast as if we had to simplify every individual element.
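To make the idea concrete, here’s a rough sketch of batch simplification of duplicates. The function names and the shrinking interface here are hypothetical, not Hypothesis’s actual internals: group equal templates together, then try replacing every copy of a duplicated value with a simpler candidate in one shot.

```python
# Hypothetical sketch of simultaneous simplification, not Hypothesis's
# real implementation. Templates are assumed hashable (as in the post).
from collections import defaultdict

def simplify_duplicates_together(values, simplify, still_fails):
    """values: a list of hashable templates.
    simplify: yields progressively simpler candidates for a value.
    still_fails: predicate checking whether the test still fails."""
    # Group the indices of equal templates together.
    positions = defaultdict(list)
    for i, v in enumerate(values):
        positions[v].append(i)
    for v, indices in positions.items():
        if len(indices) < 2:
            continue  # no duplication here, nothing to batch
        for candidate in simplify(v):
            attempt = list(values)
            for i in indices:
                attempt[i] = candidate  # replace all copies at once
            if still_fails(attempt):
                values = attempt  # accept the batch simplification
                break
    return values
```

So for the “list of more than 100 identical elements” scenario, one accepted candidate replaces all the copies in a single test invocation, rather than one per element. (This sketch makes a single pass; a real simplifier would iterate to a fixed point.)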

I’m doing a bunch of work on simplification at the moment, and as a result I have lots of tests for example quality of the form “In this circumstance, Hypothesis should produce an example that looks like this”. Some of them had a tendency to get into really pathological performance problems because they’re in exactly this scenario: They have to do a lot of individual simplifications of values in order to find an optimal solution, and this takes a long time. For example, I have a test that says that the list must contain at least 70 elements of size at least ten. This example was deliberately constructed to make some of the existing most powerful simplification passes in Hypothesis cry, but it ended up wreaking havoc on basically all the passes. The goal is that you should get a list of 70 copies of 10 out at the end, and the simplification run should not take more than 5 seconds.

This is obviously going to work well with simultaneous simplification: if the list contains a set of duplicated elements greater than 10, simultaneous simplification will move them all to 10 at once.

Unfortunately the chances of getting an interesting amount of duplication for integers larger than 10 are pretty low, so this rarely fires and we usually have to fall back to individual simplification, which ends up taking ages (I don’t actually know how long – I’ve got a hard 5 second timeout on the test that it usually hits, but eyeballing it, it looked like it got about halfway in that time).

So the question is this: Can we design another simplification pass that is designed to deliberately put the list into a state where simultaneous simplification can fire?

On the face of it the answer is obviously yes: You can just change a bunch of elements in the list into duplicates. Then if any of those are also falsifying examples we end up in a state where we can apply simultaneous simplification and race to the finish line.

Naturally there’s a wrinkle. The problem is that simplification in Hypothesis should be designed to make progress towards a goal. Loops aren’t a problem, but things which cause unbounded paths in the simplify graph will cause it to spend a vast amount of time not doing anything very useful until it gets bored of this simplification, declares enough is enough, and gives you whatever it’s got at the time (a lesson I should probably learn from my own creation).

Or, to put it more directly: how can we tell whether a list of cloned elements is actually simpler? If we have [1, 2, 3, 4, 5, 6, 7, 8, 9999999999999999] we’d much rather clone 1 all over the place than 9999999999999999.

The solution is to extend SearchStrategy with yet another bloody method (fortunately it has a default), which allows it to test whether one of two templates should be considered strictly simpler than the other. This is a strict partial order, so x is never strictly simpler than x, and it needn’t be the case that for any x and y one of the two is simpler. In general this is intended as a heuristic, so being fast matters more than being highly accurate.

For the particular case of integers the rule is that every positive number is simpler than every negative number, and otherwise a number is simpler if it’s closer to zero.
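As a sketch, that ordering might look like this (my own formulation of the rule; I’m treating zero as being on the positive side, which is equivalent here since zero is closest to itself):

```python
def strictly_simpler(x, y):
    """True if integer x is strictly simpler than integer y:
    any non-negative number is simpler than any negative one,
    otherwise whichever is closer to zero is simpler."""
    if (x < 0) != (y < 0):
        return y < 0  # the non-negative one wins
    return abs(x) < abs(y)
```

Note that this really is a strict partial order: `strictly_simpler(x, x)` is always False, and it’s cheap enough to call many times per simplification pass.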

We can now use this to implement a cloning strategy which always makes progress towards a simpler list (or produces nothing):

  1. Pick a random element of the list. Call this the original.
  2. Find the indices of every element in the list for which the original is strictly simpler.
  3. If that set is empty, nothing to do here.
  4. Otherwise pick a random subset of those indices (the current choice is that we pick an integer uniformly at random between 1 and the size of the list and then pick that many elements. This is not a terribly scientifically chosen approach but seems to work well).
  5. Replace the element at each index we selected with a clone of the original.
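The five steps above can be sketched as follows. This is a rough illustration under names of my own choosing, not Hypothesis’s API; I clamp the subset size when it exceeds the number of eligible indices:

```python
import random

def clone_pass(values, strictly_simpler, rng=random):
    """One cloning pass: returns a strictly simpler list, or None."""
    # 1. Pick a random element of the list: the original.
    original = rng.choice(values)
    # 2. Find every index whose element the original is strictly simpler than.
    candidates = [
        i for i, v in enumerate(values)
        if strictly_simpler(original, v)
    ]
    # 3. If that set is empty, nothing to do here.
    if not candidates:
        return None
    # 4. Pick a random subset: an integer uniform between 1 and len(values),
    #    clamped to the number of candidate indices.
    k = rng.randint(1, len(values))
    chosen = rng.sample(candidates, min(k, len(candidates)))
    # 5. Replace each selected index with a clone of the original.
    result = list(values)
    for i in chosen:
        result[i] = original
    return result
```

Because every replaced element was strictly greater in the ordering, each non-None result is genuinely simpler than its input, so the pass always makes progress and can’t create unbounded paths in the simplify graph.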

Empirically this seems to do a very good job of solving the particular example it was designed to solve and does not harm the remaining cases too much (I also have an example which deliberately throws away duplicates. Believe it or not this sped that example up).

It’s unclear how useful this is in real practical tests, but at the very least it shouldn’t hurt, and it appears it will often help. The list generation code is fairly fundamental in that most other collection generation code (and, later on, stateful testing) is built on it, so it’s worth putting in this sort of effort.

This entry was posted in Hypothesis, Python, Uncategorized on by .

How to make other people seem like humans

This is a technique that is by now so ingrained in how I think about things that it’s sometimes hard for me to remember that not only do normal people not constantly do this, it took me most of 30 years to figure out how to do it.

Do you frequently catch yourself treating your political enemies as if they are basically bogeymen who want to eat kittens? Do you frequently find yourself saying “I literally can’t imagine how anyone could think this way?”

You may be suffering from demonization: The tendency to believe that people who disagree with you are inhuman and fundamentally incomprehensible monsters.

You can keep doing this if you want, but personally I don’t recommend it. It’s not a useful model of the world, and given what a large proportion of the world probably disagrees with you, I imagine it’s quite stressful going around thinking they’re all fundamentally stupid and/or evil and are barely being restrained by society from chowing down on a nice bowl of kitten pops.

So if you find yourself unable to comprehend how someone could possibly hold a position, consider the following technique:

Take a set of things you care about. These can either be things you value, things you fear, or any mixture of the two. Now exaggerate some of them and downplay others.

I find moral foundations theory often useful here (I don’t know enough experimental moral philosophy to comment on its truth, but that’s not actually a required feature for this). For example, in order to understand conservative thinking I dial down care/harm a bit and dial up the other five axes.

Moral qualities are not the only dials you have to twiddle. Trust of a particular group is often a good one too. e.g. anti-vaxxers become much more comprehensible when you consider that there were more than a few instances in the 20th century of “we’re totes vaccinating you, honest” medical experiments, and vaccination programs have not proven to be without ulterior motives in the 21st century either. It’s not hard to imagine distrusting the people who tell you that vaccines are OK, and dialling up the fear of harming your kids (Sanctity/degradation helps here too).

Generally speaking I rarely find a position so alien that I could not imagine myself holding it if my priorities were very different. Sometimes I have to really completely distort my priorities (I can just about stretch to understanding people who are against late term abortion, but people who are against early term abortion I basically have to start saying “Well if I believed this entirely wrong thing…”), but even then most positions are usually reachable.

It’s worth noting that the purpose of these mental gymnastics is neither to provide an accurate model of peoples’ beliefs, nor to come up with a reason that they’re OK. The fact that I have a somewhat better understanding of conservative morality than I used to does not make me significantly more inclined to be conservative, and the fact that I can somewhat understand the position of anti-vaxxers does not make me any less inclined to think that they’re child murdering scum who should be sent to jail (it turns out that even once you’ve thought of someone as entirely human with a coherent set of motivations you can still passionately hate them).

The purpose is to give you a mental picture you can work with, and start treating people as individuals to be engaged with, and if necessary dealt with, without treating them as caricatures. It’s a useful working principle for getting things done, not for acquiring a perfect understanding of someone’s motivations. Once you have engaged with them, you will probably find you acquire a more nuanced view of their actual motivations.

It’s also helpful for making me feel better about the world. I don’t know about you, but I find it nice to know that it’s not actually full of moustache twirling villains who are basically out to cause harm, but instead people with coherent sets of motivations that are different from my own.

Obviously you should feel under no compulsion to actually do this. This is intended as a useful technique, not a moral obligation. If you don’t feel comfortable or able to do this, I’m pretty sure I can understand your position.

This entry was posted in Uncategorized on by .

Hypothesis 1.0 is out

The following is the release announcement I sent to the testing in python mailing list and the main python mailing list:

Hypothesis is a Python library for turning unit tests into generative tests, covering a far wider range of cases than you can manually. Rather than just testing for the things you already know about, Hypothesis goes out and actively hunts for bugs in your code. It usually finds them, and when it does it gives you simple and easy to read examples to demonstrate.

Hypothesis is based on QuickCheck (https://wiki.haskell.org/Introduction_to_QuickCheck2) but is designed to have a naturally Pythonic API and integrate well with Python testing libraries.

It’s easy to use, extremely solid, and probably more devious than you are at finding edge cases.

The 1.0 release of Hypothesis has a stable and well documented public API.
It’s more than ready for you to use and it’s easy to get started.

Full documentation is available at http://hypothesis.readthedocs.org/en/latest/, or if you prefer you can skip straight to the quick start guide: http://hypothesis.readthedocs.org/en/latest/quickstart.html

It seems to be a hit – less on the mailing list, but the link to it is doing the rounds of the internet and has the dubious honour of being the first time in ages I’ve been on the front page of Hacker News.

This entry was posted in Hypothesis, Python, Uncategorized on by .

What I would do with the Stargate program

Advance warning: This post was written while not entirely sober and consists entirely of extremely nerdy overthinking of a weird 90s/early 2000s TV series. If you haven’t watched Stargate you don’t care. If you have, you probably still don’t care.

People seemed to like my competent Stargate season 1 and there was some demand for me to write a season 2.

I’m not massively keen on the idea. The problem is that it just starts to diverge too radically for my tastes. Season 1 I could more or less do along the lines of “I wonder how this would happen if they didn’t mess it up completely?” and basically play the initial conditions forward.

The problem is that at this point earth is in a much stronger position, all the rules and players are different, and I’d have to actually start making serious decisions about the universe and the consequences in it.

And at this point you run into the fact that the universe doesn’t actually make much sense. The rules are inconsistent and the simple fact is that the Goa’uld level of technology and industry makes absolutely no sense. You simply cannot perform the level of construction that they routinely perform with the industrial base they have (maybe this is addressed in some of the RPG materials? I haven’t read them).

So basically this all requires a level of universe building in order to rationalise it that a) takes us significantly away from the core concept of “what would happen if people were competent?” and b) if I were going to do it, I’d actually write my own fiction instead.

I also don’t like that the requirement of competence basically means that Sam and Teal’C can’t be on SG1. Aside from the fact that I’ve removed everyone from the team who isn’t a white man, I really like them as characters and it would be a shame to not have them on the team.

On the other hand, I really cannot justify a competent version of the universe that sends its foremost expert on the Stargate into the field, or that fields the person who is a hugely valuable source of intelligence but will routinely get you in trouble on every world you go to, because he’s got “I am a bad guy” literally tattooed on his forehead.

So basically although I stand by the idea that I think this is how it would have played out, it doesn’t really put the story in a position I’d want to write about.

All that being said, I do have some interesting ideas…

This post is about one set of those ideas: Suppose you gave me an R&D budget and the ability to make strategic decisions. What would I do to the Stargate program?

All of this is stuff that could have been figured out fairly early on and mostly does not depend on detailed semantics of how the stargates work.

My plan basically consists of two complementary parts:

Opsec

OK. So “seeing the thing someone else is dialling” is a known security vulnerability: you can totally figure out where someone who has gone through the stargate went. This happens literally all the time in the series.

Even if we’re not in competent verse, where none of the Goa’uld know the address for earth, we still don’t want the knowledge to proliferate.

So, rule 1 is: You never, ever, dial earth from an insecure location.

An insecure location in this case meaning “Any Stargate not enclosed in a building surrounded by soldiers”.

i.e. we establish off world bases today. The stargate program does not operate out of Cheyenne mountain. The stargate program operates out of multiple off planet bases. At least two, ideally three or more. Ideally you’d follow the “Stick the stargate deep in a mountain” approach that both earth and the alpha site use in the classic series, but for starters you’re still better off just sending a bunch of marines through the stargate and having them stick a tent over it. We can improve from there.

Obviously you never gate directly to any of these worlds either. That would be silly. You’d be revealing your bases.

No, your route home is always that you gate to one of the dozen worlds that you have been specifically tasked with memorizing and not allowed off world before you’ve passed a test proving you’ve memorized them. These are allocated to you at random from a set of worlds that have been explored and designated really really boring (ideally uninhabited, certainly with no permanent Goa’uld presence). You then gate to one of the established bases. If you need to get back to earth you gate to earth from there.

Obviously the gating to earth process is not automated and the coordinates are not stored anywhere.

Equally obviously, none of the people you regularly send on expeditions know the coordinates. I mean why would you do anything so stupid as sending someone who knows the address of earth into enemy territory?

As soon as is feasible, every offworld base is equipped with an ultrasound scanner. All incoming travellers are given an ultrasound scan to determine whether or not they carry a Goa’uld.

(Depending on whether we know about Cimmeria or not, and whether we’re in competentverse, anyone detected as infected is sent there. Obviously in competentverse we weren’t stupid enough to send Teal’C there and destroy Thor’s hammer as a result, so that’s still a viable solution)

As a result the chances of anyone discovering your off world bases are low (they’d need a watcher on the planet you were on) and the chances of getting to earth from there are even lower.

You also don’t gate out from earth to anywhere except one of these bases. This is less mandatory, but signals are bidirectional through a stargate, so if I were being security conscious I sure wouldn’t want to give people the ability to fire a beacon back through the world I gated to.

As a bonus, the offworld bases act as a firebreak. If, for example, we bring back a bomb, a plague, gate into a black hole, etc. worst case scenario is you lose the base and the couple of hundred people you’ve stationed there. You don’t lose your entire world of billions of people and the war to save humanity from enslavement.

At this point we’ve been minimally paranoid. Earth is probably not going to get invaded any time soon because they don’t know where we are. Now it’s time to proceed to the technical solutions that will be used to help us take the galaxy from the occupying force that are literally too incompetent to be allowed to live.

Technology to develop

All technology I’m going to suggest is technology that I believe could be developed by 90s era engineering without having to figure out alien super science. This has the advantage that it does not require any knowledge of the details of gate mechanics. All technology I’m interested in here is dumb, mechanical, and is not useful unless we can mass produce it for cheap.

There are basically three pieces of kit I consider it essentially mandatory to develop.

How fast and cheap can we build and install an iris?

In the series the answer is always “super fast, very not cheap”. The super fast is ludicrously unrealistic and I’m going to assume that there’s lots of behind the scenes prep to which we are not privy because it makes for boring TV. The very not cheap is obvious when they’re talking about precision engineering and extremely expensive materials (some of them literally not available on earth). We don’t want to do that. We want to be making hundreds of irises. If we explore a planet and it has people and doesn’t currently have Goa’uld on it, we want to stick an iris there.

I’m not interested in “we have fancy dilating titanium shield which can withstand an arbitrary number of assaults”. I’m interested in “it might take us ten minutes to undo the iris but that’s OK because the gate stays open for half an hour, and if there’s a major assault we might have to replace it afterwards”. The point is not to be impervious to harm, the point is to have something we can easily install that will be vastly better than not having it there.

This is the sort of solution I’d like to do better than:

  1. Put the stargate on a hinged mount allowing it to rotate forward
  2. Place this arrangement on a concrete platform
  3. Tilt it slightly forward, suspended by cables on a winch
  4. When you get an incoming wormhole drop the stargate. It’s now flush with the concrete (if you’re feeling fancy you can even stick a concrete plinth there which perfectly matches the stargate position). Sure, people might half rematerialize on the other side but they won’t actually be able to get fully out of the wormhole. When the wormhole disengages there will be bits of them to clean up.
  5. If you get a valid GDO signal, winch the stargate up and signal when it’s ready for incoming travellers.

This solution is OK but it is too slow to install. I’m sure a team of military engineers can do better.

Portable dialling devices

It is well established that you can manually dial a stargate if you just provide it with energy. I want a device I can clamp to the side of a stargate with an electric motor and a power source that can power a stargate for a couple of minutes.

We want this for three reasons:

  1. Emergency procedure for when an SG team is stranded due to an issue with the DHD (this happens several times)
  2. Earth needs a DHD (sure, we have the antarctic one, but in competent!SG1 we never find that because why the hell would you dial back to earth when under heavy fire?) and we want a way to take one
  3. The following complete dick move that will bring the Goa’uld war machine to a stand still

Once we have the dialler, people will practice the following operation:

  1. Send a lovely handful of flash bangs through the stargate
  2. Send an extremely grumpy set of marines through the stargate
  3. Kill or incapacitate every readily available guard
  4. Steal or destroy the DHD
  5. Take it back through the stargate with you
  6. Leave the autodialler behind to self destruct, leaving a pile of meaningless slag
  7. Giggle as you’ve just cut off a Goa’uld world from greater galactic civilization

How effective this is depends. The Goa’uld may or may not be able to build DHDs, but they can certainly ship them in from other places. This will take time and ships, and basically have them running around wasting most of their ships on transit duty. In the meantime you have the ability to lock down a substantial proportion of their forces.

Automated DHD covers

The DHD is in almost all ways 100% better than the system that earth has built. It’s easier to use, faster to dial, draws less power, etc.

There’s one way in which it is much worse: It is entirely manual, with no capacity for automation.

Oh, in canon there’s all sorts of fancy computing going on inside it and the stargate network that people eventually figure out how to use. We don’t need any of that.

We’re going to stick a physical cover on top of it with motors that can push each button. We’ll use what we’ve already figured out about the stargate to tap into some basic diagnostics, but worst case scenario we can probably get by with “how much energy is currently flowing through the thing”.

There are a number of use cases for this, most notably that it lets us handle our own infrastructure much better. It also gives us much faster dialling, which is a significant tactical advantage.

It can also be used offensively. Oh so offensively.

Basically, the Goa’uld are going to figure out irises at some point. We can only steal so many DHDs in surprise raids before they realise that maybe they should take a leaf from our book and start blocking incoming travellers.

They’ll probably do something fancy and hard to mass produce like a force field.

Once this starts being widespread, we giggle at how annoyed the pompous little megalomaniacs are going to be and turn on the war dialler program.

You see, all you need to shut down a stargate in a way that keeps it permanently inoperable is two stargates and good automation. I think with decent scheduling you can shut down n stargates with n + 1.

Here’s how it works:

A stargate can’t dial out when there’s an incoming wormhole. The only defence against this is to dial out faster than your opponent can dial in.

So you dial in and set a timer. Meanwhile your second stargate dials the first 6 symbols. As close to exactly as you can make it, when the timer goes off the first stargate shuts down the wormhole and the second stargate presses the seventh symbol. Voila, new incoming wormhole faster than you can ever dial out.

And what do we do with all this?

Oh that’s policy. I don’t set policy.

But on this front I’m mostly in agreement with canon. Starting with the currently most annoying system lord and killing your way downwards seems like a pretty good strategy.

This entry was posted in Stargate, Uncategorized on by .