Author Archives: david

Coin tossing and adversary resistant priors

Hello, it’s your old friend the genie who likes to reify implausible thought experiments!

Here’s a giant pile of gold. Want it? Good. Congratulations! It’s yours!

But before you go, let's play a game.

The game works as follows: Pick a coin from the pile and toss it. If it comes up heads, I’ll give you a wish. If it comes up tails, the coin disappears in a puff of smoke.

The wishes are pretty great. There are a few provisos, but it’s still a pretty sweet deal. Certainly you could wish for more gold than is in that pile over there, but there are other, way cooler things you can wish for.

You don’t get to resume the game though. Once you decide to stop playing it you’ve stopped and all you have to show for it is however many coins you have left.

Got it? Good. Let’s begin.

You pick a coin off the pile and toss it. It comes up tails. You pick another one. Tails.

Tails, tails, tails, tails, tails.

How many coins do you toss before you stop playing?


The game was obviously always going to be rigged. The genie doesn’t care about physics so it can bias a coin if it wants to.

The only question is how rigged?

Let’s also assume that the utility value of a wish is a lot greater than the utility value of the whole pile of gold. Say 10 times greater, or 100 times. It doesn’t really matter.

The conjugate prior for a coin tossing model is some proper Beta distribution. Say \(\mathrm{Beta}(\alpha, \alpha)\).

Under this model, unless we’re very careful with our choice of \(\alpha\), the genie can force one of two things to happen by choosing the probability adversarially:

  1. You walk away from a perfectly reasonable chance of getting a wish.
  2. You spend all your money without ever getting a wish.

Suppose you’ve tossed \(n\) coins and still not seen a heads. Your posterior distribution is now \(\mathrm{Beta}(\alpha, \alpha + n)\). This means that your subjective probability that your next coin toss will come up heads is the expectation of this distribution, which is \(\frac{\alpha}{2\alpha + n}\).

In particular, if \(W\) is the utility value of a wish as a multiple of the utility value of a single coin, you will keep tossing coins as long as \(W \frac{\alpha}{2\alpha + n} > 1\), or equivalently as long as \(n < \alpha (W - 2)\).

So say the pile contains \(n\) coins in total and we want to lose at most \(m < n\) of them to a completely rigged coin. Then we need \(\alpha (W - 2) \leq m\), or \(\alpha \leq \frac{m}{W - 2}\).

If \(W \gg n\) (the wish is worth much more than the whole pile) then \(\frac{n}{W - 2} \ll 1\), so unless \(\alpha\) is very small you will spend all your money when given a completely rigged coin.

Now suppose we are given an unbiased coin that just happens to turn up tails the first time. If \(\alpha\) is too low we immediately go “WTF hax this game is rigged” and bail out. It would be nice to not walk away from the wish with 50% probability given a perfectly normal coin.

So suppose we don’t want to stop in our first \(k\) coins. We might choose, say, \(k = 20\) so that the probability of that happening on an unbiased coin is only about 1 in a million.

So from this we want the opposite condition, \(\alpha \geq \frac{k}{W - 2}\), and our constraints are \(\frac{k}{W - 2} \leq \alpha \leq \frac{m}{W - 2}\).

i.e. our \(\alpha\) is entirely constrained to lie in a range defined by what we’re prepared to waste in adversarial conditions.
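To put some purely illustrative numbers on that (none of these figures come from the argument above, they just show how narrow the window is): suppose the pile has 100 coins, the wish is worth 100 times the pile (so \(W = 10000\) in units of one coin), we pick \(k = 20\), and we’re prepared to waste at most \(m = 50\) coins on a rigged coin.

# Purely illustrative numbers: a pile of 100 coins and a wish worth 100 times
# the pile, i.e. W = 10000 measured in single coins.
W = 10000  # value of a wish, in coins
k = 20     # don't want to bail out within the first k tosses of a fair coin
m = 50     # most coins we're prepared to waste on a completely rigged coin

lower = k / (W - 2)  # alpha must be at least this to survive k early tails
upper = m / (W - 2)  # alpha must be at most this to stop after m losses
print("alpha must lie in [%.4f, %.4f]" % (lower, upper))
# alpha must lie in [0.0020, 0.0050]

def stopping_point(alpha, W):
    """First n at which the Beta(alpha, alpha) Bayesian stops tossing:
    we keep going while W * alpha / (2 * alpha + n) > 1."""
    n = 0
    while W * alpha / (2 * alpha + n) > 1:
        n += 1
    return n

print(stopping_point(lower, W), stopping_point(upper, W))
# 20 50 -- the two ends of the window correspond exactly to k and m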

But this means the Bayesian approach has told us basically nothing! We might as well just pick a number of coins that we’re prepared to sacrifice up front and then just play to that point.

The problem is that we’ve chosen the prior without concern for the fact that the probability might have been adversarially chosen, so as a result it doesn’t help us much with those conditions.

It’s also more than a little bit odd to try to control our confidence in a prior based on the value of the wish and our behaviour under circumstances that the prior doesn’t really account for well at all.

So let's choose a different prior. It won’t be conjugate to the problem, but that’s OK. It’ll just make the calculations slightly fiddlier.

Let's assign some non-zero probability \(q\) to the notion that the genie is just a dick and has given us a rigged game that we never win. Our prior is now a mixture of two models: under the first, the coin never shows heads; under the second, the behaviour is the same as before, with a probability of heads drawn from a uniform (i.e. \(\mathrm{Beta}(1, 1)\)) prior. The probability of the first is \(q\). Call the first event \(R\) and the second \(U\).

Under the uniform prior the probability of seeing \(n\) tails in a row is \(\frac{1}{n + 1}\). Under the rigged model it is \(1\).

After observing \(n\) tails in a row, the probability of observing a head on the next throw is \(\frac{1}{n + 2}\) times the probability that the game is not completely rigged.

An application of Bayes' rule gives us that the probability that the game is not rigged is \(\frac{(1 - q)/(n + 1)}{q + (1 - q)/(n + 1)} = \frac{1 - q}{qn + 1}\), so some rearrangement gives us that we keep going for as long as \((qn + 1)(n + 2) < W (1 - q)\), i.e. \(q n^2 + (1 + 2q) n + 2 < W (1 - q)\).

Some blatant dropping of terms and fiddling to get a reasonable estimate means that we definitely stop before \(n = \sqrt{W (\frac{1}{q} - 1)}\).

So say we assign a 1% probability that the genie is out to screw us. Then we want to stop when we’ve spent roughly \(10 \sqrt{W}\) coins.
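As a sanity check, here’s the same decision computed directly from the mixture posterior rather than the dropped-terms estimate (the \(q = 0.01\) and \(W = 10000\) figures are again just illustrative):

def keep_tossing(n, q, W):
    """Keep tossing after n straight tails under the mixture prior?

    P(not rigged | n tails) = (1 - q) / (q * n + 1), and under the uniform
    component P(heads next | n tails) = 1 / (n + 2), so one more toss is
    worth it while W times the product of those two exceeds 1 (the cost of
    a single coin).
    """
    return W * (1 - q) / ((q * n + 1) * (n + 2)) > 1

def stopping_point(q, W):
    n = 0
    while keep_tossing(n, q, W):
        n += 1
    return n

# Illustrative numbers: a 1% chance the genie rigged the game and a wish
# worth 10000 coins. The estimate above says to stop before 10 * sqrt(W) = 1000.
print(stopping_point(q=0.01, W=10000))  # 946 -- a bit under the estimate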

This seems like a much more sensible prescription for dealing with a genie than a pure uniform model. Sometimes you’ll still end up spending all your coins, mind you: if \(W > \frac{1}{100} n^2\) (i.e. wishes are very very valuable or there aren’t that many coins in the pile), you’ll keep playing until you run out of coins, but that doesn’t seem like an unreasonable decision. In general though you’ll walk away with a significant amount of your wealth and, if there was ever a reasonable chance of you doing so, a wish too.

I don’t have much of a conclusion beyond that other than that priors should probably take into account a non-zero probability of an adversarial choice when there is a non-zero probability of an adversarial choice. I just thought the model was interesting.

@pozorvlak suggested that this might have interesting implications for the problem of adversarial examples in machine learning. I can imagine that being true – e.g. you could assign some plausibility score of an item under the original model and reject things that look too implausible earlier – but I don’t know much about the problem domain.

This entry was posted in Decision Theory.

Rigging elections with integer linear programming

No, this isn’t a post about politics, sorry, it’s just a post about voting theory.

As you might have noticed (and if not, this is the post announcing it), I have a book out! It’s called Voting by Example and is about the complexities of voting and why different voting systems might be interesting. You should go buy it.

But this isn’t just an ad post for the book, it’s about something else: How I came up with the examples in this book.

At its core is the following example. We have a student election where four candidates are running for class president. Each student casts a vote ranking the four candidates in order of most to least preferred.

The votes are as follows:

  • 32% of students voted Alex, Kim, Charlie, Pat
  • 27% of students voted Charlie, Pat, Kim, Alex
  • 25% of students voted Pat, Charlie, Kim, Alex
  • 16% of students voted Kim, Pat, Alex, Charlie

The point of this example is that each of four different ranked voting systems gives a different answer: Alex is the plurality winner, Charlie is the Condorcet winner, Pat is the Instant Runoff Voting (IRV) winner, and Kim is the Borda count winner.

It’s also constructed to have a minimal number of voting blocs, though it’s minimal amongst a slightly more specific set of elections than just those satisfying the above conditions.

Roughly what this means is:

  • Alex has the most first choice votes
  • When compared pairwise with any other candidate, the majority prefer Charlie
  • Pat wins a complicated procedure where people iteratively drop out depending on who is the favourite of the remaining candidates amongst the fewest voters
  • If you give people a score of 3 for each voter who ranks them first, 2 for each who ranks them second, and 1 for each who ranks them third, then Kim has the highest score.

If you want to know more than that I recommend the relevant Wikipedia links above, which are all very approachable.

The significance of this is that each of these is a popular (either in practice or amongst electoral theorists) way of saying who should be the winner. So the situation ends up being rather complex to decide.

But this isn’t a post about the significance. This is a post about how I constructed the example.

Historically I’ve generally created example elections with Hypothesis or a similar approach – randomly fuzzing until you get a result you want – but that wasn’t really going to work very well here due to the rather complicated set of constraints that are hard to satisfy by accident.

So I instead turned to my old friend, integer linear programming.

The idea of integer linear programming (ILP) is that we have a number of variables which are forced to take integer values. We can then impose linear constraints between them, and give a linear objective function to optimise for. For example we might have three variables \(v_1, v_2, v_3\) and the following constraints:

  • \(v_1, v_2, v_3 \geq 0\)
  • \(v_1 + 2 v_2 + 3 v_3 \leq 5\)

And then try to maximise \(v_1 + 2 v_2 + 4 v_3\).

Given a problem like this we can feed it to an ILP solver and it will spit out a solution. If we do that, it will tell us that an optimal solution for this problem is \(v_1 = 2, v_2 = 0, v_3 = 1\).
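For example, the toy problem above can be written in Pulp (the Python ILP library used later in this post) as something like the following sketch:

from pulp import LpInteger, LpMaximize, LpProblem, LpVariable, value

problem = LpProblem("toy-example", LpMaximize)

# Three non-negative integer variables.
v1 = LpVariable("v1", lowBound=0, cat=LpInteger)
v2 = LpVariable("v2", lowBound=0, cat=LpInteger)
v3 = LpVariable("v3", lowBound=0, cat=LpInteger)

problem += v1 + 2 * v2 + 4 * v3       # the objective to maximise
problem += v1 + 2 * v2 + 3 * v3 <= 5  # the single linear constraint

problem.solve()
print(value(v1), value(v2), value(v3))  # one optimum, e.g. 2.0 0.0 1.0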

A lot of problems can be nicely turned into ILP problems, and there are some fairly good open source ILP solvers, so it’s often a nice way to solve complex combinatorial problems.

So how do we turn our election rigging problem into an integer linear program?

The key idea is this: 4! isn’t very large. It’s only 24. 24 is tiny from an ILP point of view.

This means that there’s no problem with creating a variable for each of the possible votes that someone could cast, representing the number of people who cast that vote. That is, we create integer variables \(v_p\) indexed by permutations with the constraints:

  • \(\sum v_p = 100\)
  • \(v_p \geq 0\)

Then the value of e.g. \(v_{(0, 1, 2, 3)}\) is the number of people who cast the exact vote \((0, 1, 2, 3)\) (or, by name, Alex, Charlie, Pat, Kim).

We then use a trick to get nicer examples, which is that we try to minimise the number of non-zero voting blocs. The idea is to create variables which, when minimised, act as markers for whether a given vote is non-zero or not.

So we create supplementary 0/1 valued integer variables \(u_p\) with the constraints that \(v_p \leq 100 u_p\), and set the objective to minimise \(\sum u_p\). Then \(u_p\) will be set to \(0\) wherever it can be, and the only places where it can’t are where \(v_p\) is non-zero. Thus this minimises the number of voting blocs.

So that’s how we create our basic ILP problem, but right now it will just stick 100 votes on some arbitrary possible ballot. How do we then express the voting conditions?

Well, let's start with the plurality and Borda scores. These are pretty easy, because each is just a matter of calculating a score for each candidate from each permutation and adding up the scores. This means that the scores are just a linear function of the variables, which is exactly what an ILP is built on.

Victory is then just a simple matter of one candidate’s score exceeding another. You need to set some epsilon for the gap (linear programming can’t express \(<\), only \(\leq\)), but that’s OK – the scores are just integers, so we can just insist on a gap of \(1\).

The following code captures all of the above using Pulp, which is a very pleasant to use Python interface to a variety of ILP solvers:
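In outline (this is a reconstruction from the description above rather than the original code, with illustrative variable names), the core of the function looks something like this:

from itertools import permutations

from pulp import LpBinary, LpInteger, LpMinimize, LpProblem, LpVariable, lpSum

def build_election(
    n_candidates, n_voters, additive_scores,
    condorcet_winner=None, irv_dropout_order=None,
):
    candidates = range(n_candidates)
    problem = LpProblem("build-election", LpMinimize)

    # One integer variable per possible ballot, counting the voters who cast it.
    variables = [
        (p, LpVariable("votes_" + "_".join(map(str, p)), lowBound=0,
                       upBound=n_voters, cat=LpInteger))
        for p in permutations(candidates)
    ]
    problem += lpSum(v for _, v in variables) == n_voters

    # 0/1 markers for whether each ballot is used at all; minimising their
    # sum minimises the number of distinct voting blocs.
    markers = []
    for p, v in variables:
        u = LpVariable("used_" + "_".join(map(str, p)), cat=LpBinary)
        problem += v <= n_voters * u
        markers.append(u)
    problem += lpSum(markers)

    # Each additive scoring rule must rank the candidates in the desired
    # order, with a gap of at least one point between consecutive candidates.
    for score, desired_order in additive_scores:
        totals = {
            c: lpSum(score(p, c) * v for p, v in variables) for c in candidates
        }
        for better, worse in zip(desired_order, desired_order[1:]):
            problem += totals[better] >= totals[worse] + 1

    # (The condorcet_winner and irv_dropout_order constraints shown later in
    # this post slot in here.)

    problem.solve()
    return sorted(
        ((p, int(v.value())) for p, v in variables if v.value()),
        key=lambda result: -result[1],
    )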

The idea is that the additive_scores parameter takes a list of scoring functions and a partial list of winners given those functions and returns an election producing those orders.

So if we run this asking for the plurality winners to be (0, 1, 2, 3) and the Borda winners to be (3, 2, 1, 0) we get the following:

>>> build_election(4, 100, [
    (lambda p, c: int(p[0] == c), (0, 1, 2, 3)),
    (lambda p, c: 3 - p.index(c), (3, 2, 1, 0))])
[((0, 3, 1, 2), 35), ((1, 2, 3, 0), 34), ((2, 3, 0, 1), 31)]

So this is already looking quite close to our starting example.

Creating a Condorcet winner is similarly easy: whether the majority prefers one candidate to another is again just an additive score. So we just need to add the requisite \(N - 1\) constraints that our desired Condorcet candidate wins each pairwise comparison.

if condorcet_winner is not None:
    victories = {
        (i, j): lpSum(
            v for p, v in variables if p.index(i) < p.index(j)
        )
        for i in candidates
        for j in candidates
    }
    for c in candidates:
        if c != condorcet_winner:
            problem.addConstraint(
                victories[(condorcet_winner, c)] >=
                victories[(c, condorcet_winner)] + 1
            )

If we run this to force the Condorcet winner to be \(1\) we now get the following:

>>> build_election(4, 100, [
    (lambda p, c: int(p[0] == c), (0, 1, 2, 3)),
    (lambda p, c: 3 - p.index(c), (3, 2, 1, 0))],
    condorcet_winner=1,
)
[((0, 3, 1, 2), 28),
 ((1, 2, 3, 0), 27),
 ((2, 1, 3, 0), 24),
 ((3, 2, 0, 1), 21)]

This is pretty close to the desired result. We just need to figure out how to set the IRV winner.

This is a bit more fiddly because IRV isn’t a simple additive procedure, so we can’t simply set up scores for who wins it.

But determining who drops out next, given who has already dropped out, is a simple additive procedure, because it's simply a matter of calculating a modified plurality score with some of the candidates ignored.

So what we can do is specify the exact dropout order: This means we know who has dropped out at any point, so we can calculate the scores for who should drop out next and add the appropriate constraints.

The following code achieves this:

    if irv_dropout_order is not None:
        remaining_candidates = set(candidates)
        for i in irv_dropout_order:
            if len(remaining_candidates) <= 1:
                break
            assert i in remaining_candidates
            allocations = {j: [] for j in remaining_candidates}
            for p, v in variables:
                for c in p:
                    if c in remaining_candidates:
                        allocations[c].append(v)
                        break
            loser_allocations = sum(allocations.pop(i))
            remaining_candidates.remove(i)
            for vs in allocations.values():
                problem.addConstraint(loser_allocations + 1 <= sum(vs))

And running this we get the following:

>>> build_election(4, 100, [
    (lambda p, c: int(p[0] == c), (0, 1, 2, 3)),
    (lambda p, c: 3 - p.index(c), (3, 2, 1, 0))
], condorcet_winner=1, irv_dropout_order=(3, 1, 0, 2))
[((0, 3, 1, 2), 31),
 ((1, 2, 3, 0), 27),
 ((2, 1, 3, 0), 25),
 ((3, 2, 0, 1), 17)]

This isn’t quite the same example as in the book (the first bloc here has one fewer vote, which went to the last bloc instead), because the book example had a bit more code for optimizing it into a canonical form, but that code isn’t very interesting so we’ll skip it.

Here’s the full code:

I’m constantly surprised how nice integer linear programming ends up being for constructing examples. I knew I could do the Borda and plurality scores – that’s in fact the example that motivated me trying this out at all – but although I find it obvious in retrospect that you can also fix the Condorcet winner, it definitely wasn’t obvious a priori. The fact that it’s easy to also calculate the IRV dropout order was genuinely surprising.

This is also a nice example of how much fun small n programming can be. This isn’t just an O(n!) solution – it generates a problem of size O(n!) and then feeds it to a solver for an NP-hard problem! In principle that should be catastrophic. In practice, n is 4, so who cares? (n=4 is about the limit for this too – it just about works for n=5, and for n=6 it doesn’t really.)

I also find it somewhat surprising how much freedom there is to get different results from different voting systems. I was motivated to do some of this by The ultimate of chaos resulting from weighted voting systems, so I knew there was a fair bit of freedom, but I was somewhat expecting that to be a pathology of weighted voting systems, so I’m still a little surprised. I guess there’s just quite a lot of room to play around with in a 24-dimensional simplex.

Which brings me to my final point: When you’re trying to understand what is possible in a domain, this sort of example generation focused programming is very useful for exploring the limits of the possible and building your intuition. I feel like this isn’t a thing people do enough, and I think we should all do more of it.

I also think you should buy my book.

(Or support my writing on Patreon)

This entry was posted in programming, Python, voting.

Programmer at Large: How do you do?

(This is the latest chapter in my serial novel, Programmer at Large. You can read previous chapters most easily at the Archive of Our Own mirror).

The gym has a number of main rooms branching off a central corridor for the different exercise areas – resistance machines, a pool, a couple sports rooms for various games, etc.

I normally prefer to just swim, with a little bit of resistance for strength, but the big thing that everyone has to do, and almost nobody likes, is the rings – two counter-rotating sections we can spin up or down to get almost any level of fake gravity we desire.

Sure, Crew don’t need gravity to be healthy – even most grounders don’t these days – but it definitely helps, and it’s essential if you ever want to make it to the surface of a planet – so all exercise regimes mandate a certain amount of time in gravity no matter how much we hate it.

When I arrived they had already been set for 15 meters per second squared, which is on the high side – records suggested our destination planet was only 12, and I normally exercise at 9, but the analysis said that was perfectly safe for me and probably even beneficial, and I didn’t care enough to put in a bid to change it.

I strapped in to an available chair and transited to the ring, grunting slightly under the acceleration. It’s not that I can’t handle this level of gravity, it’s just that gravity always comes as a shock when you first enter it.

Still, nothing for it. I got up and started to warm up for some more serious exercise, and after a few hundred seconds it was time to get started properly. I broke into a run.

About a third of the way around the ring I saw someone doing callisthenics by the side of the track. I waved to them in greeting but didn’t stop running.

The ship’s computer is not an AI. You can tell this from subtle signs like the erratic conversational interface, the way it sometimes fails to make simple inferences when you ask it questions, and the lack of crew with cutting torches swarming all over the ship looking for where it keeps its core.

I mention this because although it’s obviously good that the ship is not an AI, it causes a number of problems. In particular it’s much less satisfying to call it an underhanded waste of space grounder when it pulls something like this.

The other person on the track was Kimiko, the one who had filed the bug report that I was working on. I’d expressed a vague interest in socialising with them when I thought they were safely asleep for the foreseeable future, and the ship decided to throw us together.

While also getting in my mandatory exercise. I’m sure some algorithm thought that this was very efficient.

It’s a bit rude to ignore someone while you’re at the gym. Not unconscionably so, but slightly churlish. Obviously we’re not going to socialise while we’re actually exercising, but the expectation is that rest breaks between exercises are a time to socialise. Given that Kimiko is someone it would be good for me to talk to on top of that, I probably couldn’t get away with avoiding them.

On the plus side, that was good incentive to keep running, so I made it a full five circuits of the ring (nearly 8k!) in a bit under 2 ksec. Not a personal best by any stretch, but I also don’t usually run in this stupid high gravity.

Kimiko was also taking a rest when I finally stopped, so I flopped down next to them.

They were… a bit funny looking. There’s a pretty wide range of body plans among the Crew, but there’s a certain Crew look that you get used to and they didn’t have much of it. They were on the tall side – about 1.63m – with weirdly pale skin and some sort of… I guess it must be hair on their face.

We sat in what I assured myself was entirely non-awkward companionable silence for a few tens of seconds while I caught my breath, but eventually I’d recovered enough and broke the silence.

“I thought you were asleep.”

They gave me a quizzical look.

“Uh, OK?”

“Sorry. That didn’t come out right at all. Let me start again. I’m working on a bug you reported – the one with the weird sound in the walls – so I’d checked if you were awake to ask about it and you weren’t, so I was surprised when I saw you here.”

“Oh! Right! Thanks for looking at that. I wasn’t sure if anyone would bother, but you know what they say – an unreported bug is always critical.”

I nodded. A much better attitude than certain dead crew members I could mention.

“Well, nobody over in plumbing proper is going to look at it I expect, but I mostly get to work on whatever I like, and I like tracking down weird ghosts in the machine and am good at plumbing, so here I am.”

“So what did you want to ask me?”

I waved a hand to cut off that line of conversation.

“I’ll have to do that later, sorry. I’m not allowed to work right now.”

“Ah, mandatory downtime. Sorry, I should have noticed.”

“No problem.”

Naturally, enforced downtime is part of your primary information. It’s important for other people to be able to see you’re being naughty and overworking. Sometimes I hate all this nosy software.

“Anyway, to answer your question, I was asleep, but they woke me to deal with this yeast contamination problem. I was one of the primary engineers when we brought in this strain. They’ve been having a bunch of problems with it, and wanted some help getting to the bottom of it. It’d be a shame to ditch it – it’s nearly 1% more efficient than the strain it replaces! – but if it’s going to keep doing this…”

“Right. Makes sense. So how’s it going?”

“Beats me. I’ve only been properly awake for about 10 ksec. I’ll get to work after I’ve recovered from the gym and had a short nap. We’re replacing it with a more stable strain for the next batch from that vat, so there’s no rush.”

I nodded. I’d be keen to get started, personally, but this sort of relaxed attitude was much more sensible for interstellar work – there would be plenty of urgent things when we got in system, and there might be before then, but if there wasn’t any actual urgency then why stress yourself by creating a false one?

We lapsed into silence again. Eventually I broke it.

“So, um. If you don’t mind me asking, what’s with the…?”

I gestured vaguely around my face.

“What? The beard? I had it grown in a planet side mission a while back, and I decided I liked it, so I kept it.”

“OK but why?”

“I dunno. It just… felt right.”

“No, sorry, I mean why did you need to grow it for a mission?”

“Oh, huh. You mean you’ve never even encountered beards?”

“Not really. I mean I guess I might have seen them before in pictures, but I don’t have the concept.”

“So you haven’t watched Lesbian Space Pirates?”

“A bit? I’m not very into it.”

“But Lesbian Space Pirates is hilarious!”

I shrugged helplessly. I’ve yet to find a non-awkward way to explain that misunderstandings of your culture are only funny if you actually fit comfortably into that culture in the first place.

They took a deep breath.

“OK. Have you heard about gender?”

(If you liked this and want to read more like it, support my writing on Patreon! Patreon supporters get access to early drafts of upcoming chapters. This chapter is also mirrored at archive of our own. Also, entirely unrelated, but I have a book out now! It has absolutely nothing to do with programmers or space ships and very little to do with gender, but is instead about voting systems).

This entry was posted in Fiction, Programmer at Large.

Thinking through the implications

There’s a concept that goes by various names and forms. I learned of it as Maslow’s Hammer, but apparently it’s more commonly known as the Law of the Instrument.

These two forms go as follows:

“Give a small boy a hammer, and he will find that everything he encounters needs pounding.” (the Law of the Instrument)

“if all you have is a hammer, everything looks like a nail” (Maslow’s hammer).

I have mixed feelings about the concept. On the one hand, it’s undeniably true. On the other hand, I’ve never liked the degree to which it feels pejorative. If you have a hammer, solving your problems by hitting things is a great way to learn about the different applications of hammers, which are many and varied.

Imagine being an early human and encountering the concept of a hammer for the first time. Maybe you’ve just used it to crack a nut.

Do you:

  1. Say “Oh, cool. A better means of cracking nuts. I like nuts” and go on to fill your belly with lots of delicious nut meat.
  2. Think “Oh my god this changes everything”, think up a dozen other applications to tasks that would previously have been hard to impossible, and take the first step in a long journey that begins with basic tool making and continues straight on through some distant future descendant sitting in front of a computer typing these words.

I suspect quite a lot of people did the first before somebody did the second.

Hammers are the precursor tool that leads to all other things. They give you the ability to exert greater pressure on things than you safely can on your own. You can crack nuts and shells, but you can also grind things, you can pound staves into the ground, and most importantly you can make other tools by shaving off fragments of flint to produce blades. Hammers are amazing and I won’t hear a word against them.

But there’s another precursor tool that is more important yet, which lives inside your head. This is the ability to generalise, and with it to consider the idea of a hammer as a general purpose tool with multiple applications. Plenty of animals have hammers. Very few of them use their hammers for more than a few specialised purposes, let alone to produce blades and other tools.

The most obvious example of failure to generalise I’ve ever lived with is Dexter.

Dexter is a cat who is rather confused by a printer

Photo by Kelsey J.R.P. Byers

Meet Dexter.

Dexter is simultaneously very clever and very stupid: He is clever because he figured out how to open the door to my flatmates’ bedroom. He jumps up, braces himself on the door handle, pulls it down with his body weight, and then lets the momentum push the door open. If the door isn’t locked it works very well (if the door is locked, he just repeats it until people are annoyed enough by the noise to unlock it).

But he is very stupid because there are four other doors in the flat that he is sometimes cruelly and unjustifiably trapped on the wrong side of, and he has absolutely no idea what to do about this. One of the other doors is literally identical in configuration to the one he could open, but it’s a different door, so it must have some fundamentally different magic spell required to open it.

As far as I know he has never even tried his door opening technique on any of the others. If he wants through to the other side, he just sits in front of the door and whines until someone lets him in (this is a much less effective technique).

The problem with Dexter (well, one problem. He has so many) is that although he had figured something out he lacked the ability to think through the implications and apply it more broadly.

In contrast, the problem with a lot of people when solving problems is not that we can’t generalise, it’s just that we don’t.

We learn how to use a hammer to crack a nut, and then we go “Oh, cool. Now I can crack all these other nuts in front of me much more efficiently” and keep on cracking nuts. We open the door, and now we have access to the nice warm bed and are beyond caring about all those other doors we could open.

This happens all the time to everyone – I notice it in daily life, in software development, in mathematics. Any area where you regularly solve problems (which is, to a first approximation, every area) has tools you could be using that people routinely fail to generalise.

One reason this crops up so often is that it’s something of a limitation of learning by doing. If you’re learning because of the problem in front of you, it’s tempting to stay very focused on the problem in front of you and not think through the broader implications of what you’ve learned.

The solution is instead to be the child with the hammer. You don’t have to stop what you’re doing to do it, but once you’ve learned a new technique, spend some time to go around hitting things with it.

Be careful what you hit – some things are fragile – but as long as you’re in a place where it’s safe to experiment then you should. It will give you solutions you might not have thought of, and significantly refine your understanding of what you’ve learned by seeing how it fits into a broader context.

I tend to do this mostly automatically (or, at least, I notice myself doing this a lot. There might be whole categories of things I learn and then forget to do this with), which means I don’t have very good habits to suggest for how to do it, but here are some things I think might help you remember to do this:

  • When learning by doing, write down interesting things you learned so you remember them later explicitly.
  • When you learn new things, try to come up with at least one application of them outside the context you’re learning them in.
  • When you encounter a new problem, see if anything you learned recently applies to it.
  • When you re-encounter a problem that you previously believed was hard or impossible, spend at least 5 minutes thinking about why it is hard or impossible and if anything you’ve learned since then changes that.

I don’t know how well these work when you do them explicitly, but they should at least be better than not trying them.

Regardless though, try to keep the key idea in mind: When you learn about new things, ask how else you can use them. Think through the implications.

(Did you know that the #1 way to improve your thinking skills is to donate to the Patreon for my writing? Note: This is false. You probably already knew that. But it might result in more posts like this, and it will make me very happy. You’ll also get access to draft posts when they’re written. This one has been in the queue for almost a month).

(Also, I have a book out. It’s about voting systems. If you like my writing, you should go buy my book).

This entry was posted in Open sourcing my brain.

Avoiding conferences in the USA for now

This is an administrative note, with a side order of politics.

Although I am not personally affected to any large degree (I’m a US citizen, and while there are theoretically things that could cause me problems, in practice I think by the time they come for me you’ll care more about how to join the violent resistance than any particular effect of this post), the situation with Trump in the US and his rather dangerously slapdash approach to things like immigration and the basic rule of law are making me extremely unhappy, and I don’t really feel comfortable attending US conferences while this is going on.

I was probably going to be going to PyCon US (and had submitted a tutorial proposal which I’ve now withdrawn), and was quite happy that PyCon UK and Strange Loop did not clash this year, as I keep wanting to give a talk about Hypothesis implementation at the latter (which might or might not have been accepted of course, but I’d have at least tried), so amidst the rather larger and more important political horror this is also personally quite annoying. Oh well. Bigger things to worry about.

Naturally I hold neither of these conferences accountable for Trump’s actions, which are almost entirely beyond their control, and I wish them all the best. I just don’t feel comfortable attending myself.

I won’t entirely be avoiding travel to the US (I have family over there), but I will be doing my best to minimise it, and will probably not do so for work purposes unless it’s something really important.

I don’t know how long I intend to keep this policy up, but it’s almost certainly going to be for the next four years unless something very surprising happens.

Lest this seem like there’s an awful lot of glass in this house I’m throwing stones from, I’d probably recommend you do the same to the UK. The best I can claim right now is that we’re not yet as bad as Trump, but we already have a history of really awful border control so that’s probably not very reassuring. That being said, I live here, so I won’t make any such attempt to avoid local conferences.

Apologies to everyone who I won’t be able to see and/or meet for the first time in person at PyCon US and Strange Loop. Maybe we can find a great conference somewhere in the EU to both attend?

Assuming they still let Brits in after this debacle we’ve made of things.

This entry was posted in Admin, Python.