
When does one normal distribution dominate another?

Continuing the study of the dominance relationship I defined previously (and which only I care about), I thought to ask the question “When does one normal random variable dominate another?”. The answer is very easy to work out, but I found it surprising until I actually did the maths.

Theorem: Let \(A \sim \mathrm{Norm}(\mu_1, \sigma_1^2)\), \(B \sim \mathrm{Norm}(\mu_2, \sigma_2^2)\). Then \(A \preceq B\) iff \(\mu_1 \leq \mu_2\) and \(\sigma_1 = \sigma_2\).

Note the equality in the second part: given two normal distributions with different variances, neither will dominate the other.

Proof:

Let \(G(t) = P(Z \geq t)\) where \(Z \sim \mathrm{Norm}(0,1)\). Then \(P(A \geq t) = G\left(\frac{t-\mu_1}{\sigma_1}\right)\) and \(P(B \geq t) = G\left(\frac{t-\mu_2}{\sigma_2}\right)\). \(G\) is strictly decreasing, so \(P(A \geq t) \leq P(B \geq t)\) iff \(\frac{t-\mu_1}{\sigma_1} \geq \frac{t-\mu_2}{\sigma_2}\), i.e. iff \(\left(\frac{1}{\sigma_1} - \frac{1}{\sigma_2}\right) t \geq \frac{\mu_1}{\sigma_1} - \frac{\mu_2}{\sigma_2}\).

Because the left hand side is linear in \(t\), it is unbounded below unless its coefficient is 0, so the inequality can only be satisfied for all \(t\) if \(\sigma_1 = \sigma_2\). In this case it is satisfied iff \(0 \geq \frac{\mu_1}{\sigma_1} - \frac{\mu_2}{\sigma_2} = \frac{\mu_1 - \mu_2}{\sigma_1}\), i.e. iff \(\mu_1 \leq \mu_2\). Running this backwards, if \(\mu_1 \leq \mu_2\) and \(\sigma_1 = \sigma_2\) then the inequality is always satisfied, and thus \(A \preceq B\).

QED

Like I said, very straightforward algebra, but a little surprising as a result. I wasn’t thinking about the lower tail, so I expected there to be cases where a distribution with a lower mean and a lower standard deviation was dominated, but it turns out not.
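If you want to see those crossing tails for yourself, here’s a quick numerical sanity check. It’s a minimal sketch using scipy’s survival function; the function name and the grid of \(t\) values are my own.

```python
import numpy as np
from scipy.stats import norm

def dominates(mu1, sigma1, mu2, sigma2):
    """Check P(A >= t) <= P(B >= t) on a grid of t values,
    for A ~ Norm(mu1, sigma1^2) and B ~ Norm(mu2, sigma2^2)."""
    ts = np.linspace(-20, 20, 10001)
    return bool(np.all(norm.sf(ts, mu1, sigma1) <= norm.sf(ts, mu2, sigma2)))

# Equal variances, lower mean: dominated, as the theorem says.
print(dominates(0, 1, 1, 1))  # True
# Lower mean *and* lower standard deviation: the survival functions
# cross in the lower tail, so neither dominates the other.
print(dominates(0, 1, 1, 2))  # False
print(dominates(1, 2, 0, 1))  # False
```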


Come join the cult of the giant global brain

Advance warning: I’ve had to rescue this post from the eternal draft pile of doom not once, not twice, but three times now. As such I’m publishing it in a slightly less polished state than I’d like it to be in to stop me from dropping it yet again. Corrections and requests for clarification welcome.

After my post about how I think I can fly convinced all of you that I should be taken entirely seriously, I figure it’s time to air another of my weird beliefs and persuade you all to join my bizarre, cultish religion.

Err. Wait, the other thing.

This is basically a post about how everything is, like, connected man. It’s all one giant, global, brain, woah.

No, the other other thing.

This is a post about how intelligence is perhaps a much more fluid concept than we give it credit for, and how, if you believe in AI, then you should probably believe in all sorts of other non-human or not-entirely-human intelligences too.

Actually this is something I started writing a while ago, and it got a bit out of control and ended up in the eternal drafts folder. I was reminded of it and figured it might be worth dusting off and extracting an interesting chunk of it (by rewriting it from scratch).

This is a post about something which I genuinely believe, not a thought experiment. It’s perhaps as close as I get to religion, although that’s not very close. I mean, sure, I am about to attempt to prove to you logically the existence of a supreme being, but not in a way that I think is incompatible with atheism. It’s not completely unrelated to my points about the fluidity of identity, but I don’t plan to make any explicit connection between the two in this post.

Consider the Turing test…

The idea is simple. You’re talking to a computer. It has to convince you that it’s a human being. If it succeeds, you must concede that you are at least as confident that it is a sentient being as you are that other humans are.

The purpose of the Turing test is to explore the boundaries of what we regard as intelligence. It is not intended to be a working definition of intelligence (it is entirely possible that there are entities which we should regard as intelligent which nevertheless think entirely differently from human beings. Many of them may even live on earth). It is a sufficient condition for intelligence, not a necessary one.

There’s an interesting test case for whether we should believe the results of the Turing test called the Chinese room experiment.

The idea is this: we have the source code for an AI who speaks Chinese. I do not speak Chinese. I am locked in a room with this source code and enough pencil and paper to function as working memory, and someone passes in letters written in Chinese. I must manually run the program for the AI to respond to these letters.

The experiment is intended to demonstrate that the AI does not “truly” understand Chinese because I am the person doing all the actions and I do not understand Chinese.

I disagree with this conclusion.

The conclusion I draw from this thought experiment is that (conceptually at least. In practice I assume this would be impractical) you can build an AI out of a human, paper and pencil.

I’d like you now to consider another variation on the Chinese room experiment. Rather than locking me in the room with the instructions for a Chinese AI, you lock me in the room with a skilled interpreter of Chinese to English.

We can certainly imitate a person who speaks Chinese – the interpreter is a person who can speak Chinese! All they need to do to achieve that is to not talk to me.

But we can also do more than that. Unless this is one really exceptional interpreter (let’s assume that they are not) there are things I know more about than they do – mathematics, software development, the works of Hugh Cook, whatever. They can ask me questions about that.

So we can probably also do a fairly convincing imitation of someone who knows both a reasonable amount of mathematics and speaks Chinese. Depending on how the interpreter and I interact, this is unlikely to be literally “Me + I can speak Chinese” or “Them + they know maths”. It’s very much somewhere in between.

So you can build an AI out of two humans. This may seem like an uninteresting thing to do – you started off with two intelligent beings and you’re using them to simulate one.

But what’s interesting is that this AI we’ve built has a set of skills that neither of us can manage on our own. It’s a bit like what I’ve said about the effects of teamwork before (disclosure: I totally had the theme of this post in the back of my mind when I wrote that one).

This would also work if we were in separate rooms and communicating by letters. It would work differently, and possibly less well, because of the details of how actual people interact and the problems with delays in communication, but it would still work.

What am I getting at here?

My point is this: if you have two entities which the Turing test forces you to regard as intelligent, you can put them into a configuration that the Turing test forces you to regard as intelligent, and that entity will behave interestingly differently from its constituents.

And you can of course chain this. If you have three people, put them in a line of Chinese rooms where each can communicate only with its immediate neighbours and you can only communicate with one of the ends of the line. That person can ask questions of the person next to them, who can in turn pass questions further down the line.

This will of course cause a massive slow down and fidelity loss. If I’m sitting third in the line and between me and our Chinese interpreter there’s someone who only speaks French and, while they are a very good student of 19th century religious poetry, isn’t particularly strong at mathematics, they’re going to be a bit of a barrier to any mathematical questions the interpreter wants to ask me about functional analysis (my French is nicht so bien, even if our interpreter also happens to speak French).

But other configurations are also possible. For example there could be a fan shape where the interpreter can talk to either of us and consult us with questions. Then we genuinely could present as a Chinese-speaking individual with extensive knowledge about both mathematics and 19th century religious French poetry. This then works perfectly well no matter how many people we add in (although at some point I expect our spokesperson will start to lose track).

Let’s take the language question out of the picture for now. Imagine we all speak Dutch (why Dutch? No particular reason, except maybe that hexapodia is the key insight, but given how multilingual our supposed group is here, English seemed presumptuous).

Now there are many more configurations open to us. Any one of us can be the spokesperson, and each different spokesperson will likely present in a qualitatively different manner: we will present as roughly the spokesperson’s personality, and areas of knowledge close to their original expertise will come through far more clearly than ones which are far from it (an engineer will be able to convey abstract mathematical answers from a mathematician far better than a student of literature will).

Moreover our group can self-organise to pick one of these configurations. If, as a group, we decide we want to pass the Turing test we can do so. The exact manner of our deciding doesn’t matter – we might do it by voting, we might do it by lot, we might do it by trial by combat. The point is that in some way we can arrive at a way of organising ourselves that passes the Turing test.

So our group of N people can, if it decides to (e.g. because we’ve said “Hey, we’ll let you out of this Chinese room we’ve locked you in if you do”), pass the Turing test. And it does so in a way that is potentially recognisably very distinct from the way in which any of its constituents pass the Turing test.

Which finally brings me to the point of this whole post:

If we accept that AI is a possibility, and that intelligence is a function of behaviour rather than the body in which it resides, we must also accept that intelligence may also reside in a group of individuals.

Further, I would argue that this intelligence is recognisably distinct from the intelligence of a single human being. If nothing else, you quickly get to the point where it possesses far more skills than it’s plausible for a single person to possess, but it also shows through in things like voting paradoxes: If people are voting on their answers then you can expose apparent inconsistencies in their opinions much more easily than you can for a single individual.

Does it make sense to say that an intelligence composed entirely out of human beings is non-human?

Simply, yes.

If you accept that the concept of nonhuman intelligence is not intrinsically incoherent, and that you can write a program to simulate one, then we’re back to our Chinese room building a nonhuman intelligence out of a human, paper and pencil. While the paper and pencil are important, fundamentally the ability to execute the program here comes from the human. We’ve built a nonhuman intelligence out of a single human, so why can’t we do it out of many?

So we stuck a bunch of people in a room and hey, presto, a nonhuman intelligence resulted. Magic!

Now what happens if we simply… take the room away.

It seems clear to me that the room is not magic. It did not create intelligence. It may have prescribed the ways in which the intelligence may operate, but in the same way that a program we put through the Turing test is not made intelligent by the test, the group is not made into an intelligence by the room. The group is an intelligence in its own right; the room and the test are merely the means by which we observed this, when the group spontaneously decided to impersonate a human being and we judged its impersonation passable.

So we should regard the group as a non-human intelligence.

An amusing corollary of that is that we should regard the group of all humans as intelligent.

Like, a giant global brain, man. Woah.

Come join its cult. We have cookies.


An auction game design

This link to You’re playing monopoly wrong did the rounds on my timeline earlier. TL;DR: Monopoly is supposed to involve auctions.

Two thoughts immediately occurred to me.

Firstly: But what type of auctions? What type? I have a little bit of an obsession with Vickrey auctions, so that’s obviously the answer I want.

Secondly: This still sounds like quite a shit game, sorry.

But it got me thinking about how I would design a game around auctions, and this collided with various thoughts in my head to produce the following. It’s not even had one play test, or any thought put into balance, so it may not be at all fun and would probably require heavy redesigning to be good, but it at least sounds interesting to me.

The essential idea is this:

The game set consists of a set of money tokens and a set of hexagonal playing pieces. A hexagonal playing piece has two numbers on it, cost and upkeep. There is additionally a palette of 7 colours, and each edge of a hex may be coloured with one of them. A hex may have the same colour on it multiple times. All hexes have at least two colours on them.

There is additionally a special type of starting hex, which has a cost and upkeep of 0 and 6 distinct colours, one on each edge. There is also an ending hex. More on that later.

Play proceeds as follows:

Every player is allocated a random starting hex and $10. Their starting hex is placed face up in front of them. The non-starting hexes are shuffled together with the ending hex and placed face down in a deck.

Players then take it in turns to play, proceeding clockwise.

A play consists of first drawing a hex from the deck. If this hex is the ending hex then the game immediately ends. The person with the highest amount of money wins. If there is a tie for highest money the person with the most hexes in front of them wins. If there is still a tie then the game is a tie between them.

Otherwise, the drawn tile is now available to be acquired. If you wish, you may pay its cost to immediately claim it. If not, it goes up for auction.

Auctions are Vickrey auctions, played as follows. Each player puts some amount of money in a closed fist; zero is an acceptable bid. When everyone is ready, they all reveal. The highest bidder then pays the amount that the second-highest bidder bid. If there is a joint highest bid, then the bidder closest to the current player (starting with the current player) in the anticlockwise direction wins, but pays the full highest bid (because the second-highest bid is the same as it). This money is then paid to the recipient of the auction (the bank in this case, but possibly a player in other cases) and the tile is acquired.
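To make the resolution rule concrete, here’s a minimal Python sketch. The function name and the seat-number representation are mine, as is the assumption that seat indices increase clockwise (so anticlockwise means decreasing index).

```python
def resolve_auction(bids, current):
    """Resolve one sealed-bid Vickrey auction.

    bids    -- list of non-negative bids, indexed by seat (clockwise)
    current -- seat index of the current player

    Returns (winner_seat, price).
    """
    n = len(bids)
    high = max(bids)
    # Tie-break priority: the current player first, then anticlockwise
    # (i.e. decreasing seat index, wrapping around).
    order = [(current - k) % n for k in range(n)]
    winner = next(i for i in order if bids[i] == high)
    # Winner pays the second-highest bid; on a tie this equals `high`.
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

# Four players, player 2 is current; players 0 and 3 both bid 5.
# Anticlockwise from 2: seats 2, 1, 0, 3 -- so player 0 wins and pays 5.
print(resolve_auction([5, 3, 0, 5], current=2))  # (0, 5)
```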

An acquired tile must immediately be placed adjoining one of your hexes already face up in front of you. Edges must line up, and your set of hexes must always be connected, but it may otherwise go anywhere.

You may now take up to three moves. A move consists of either:

  • Taking one of your tiles in front of you and moving it anywhere else in front of you. The only restriction is that removing it may not disconnect your grid, even if where you are going to place it would then reconnect it (see the connectivity sketch below).
  • Placing one of your tiles up for auction. Again, the removal of this tile must not disconnect your grid. You may bid in your own auction, effectively setting a reserve price. Any proceeds from the auction go to you.
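The “must not disconnect” condition in both moves is just graph connectivity over your hexes. A minimal sketch of the check, using axial coordinates (a representation I’m choosing for illustration, not part of the rules):

```python
# The six axial-coordinate neighbours of a hex.
AXIAL_NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def still_connected(hexes, removed):
    """True if the hexes remain connected after `removed` is taken away."""
    rest = set(hexes) - {removed}
    if not rest:
        return True
    start = next(iter(rest))
    seen, frontier = {start}, [start]
    while frontier:  # flood fill from an arbitrary remaining hex
        q, r = frontier.pop()
        for dq, dr in AXIAL_NEIGHBOURS:
            n = (q + dq, r + dr)
            if n in rest and n not in seen:
                seen.add(n)
                frontier.append(n)
    return seen == rest

# Removing the middle of a three-hex line would disconnect it:
line = {(0, 0), (1, 0), (2, 0)}
print(still_connected(line, (1, 0)))  # False
print(still_connected(line, (0, 0)))  # True
```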

You now collect income.

Income is calculated as follows: First gain $1 for every hex. Now count up the number of adjoining edges of hexes which have the same colour and claim $2 for each. So if you have two hexes which share a red border, that gives you $2 for the hexes and $2 for the shared edge. If they have a mixed blue/yellow border that gives you only the $2 for the hexes.
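For concreteness, a sketch of the income calculation under a representation of my own devising: each hex maps to its six edge colours, indexed so that edge d of a hex faces edge (d + 3) mod 6 of its neighbour in direction d.

```python
# Axial direction vectors, ordered so that dirs[d] and dirs[(d + 3) % 6]
# point in opposite directions.
dirs = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def income(board):
    """board: dict mapping (q, r) -> list of 6 edge colours."""
    total = len(board)  # $1 per hex
    for (q, r), colours in board.items():
        for d in range(3):  # only look "forward" so each edge counts once
            dq, dr = dirs[d]
            neighbour = board.get((q + dq, r + dr))
            if neighbour and colours[d] == neighbour[(d + 3) % 6]:
                total += 2  # $2 per matching shared edge
    return total

# Two hexes sharing a red border: $1 + $1 for the hexes, $2 for the edge.
board = {(0, 0): ["red"] * 6, (1, 0): ["red"] * 6}
print(income(board))  # 4
```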

You now pay upkeep.

Upkeep consists of putting a number of dollars on each hex equal to its upkeep score. You may choose not to pay for a hex if you wish, and indeed you may not be able to afford to. Once you have finished placing money for upkeep, any hex that has not been paid for is removed from the game, and the money you’ve placed on the tiles goes to the bank. You may not place upkeep in a way that would cause your board to become disconnected once this happens.

Play now proceeds to the next player clockwise.


Another group interaction experiment I’d like someone to perform

I posted a while back about an interesting experiment in group intelligence I’d like someone to perform. I thought of another interesting experiment I’d like to see performed, so it’s apparently starting to become a habit.

The experiment is as follows:

Put two people in a room. Tell them to talk to each other. After some fixed period of time (say, 10 minutes), take them out of the room and give each of them a questionnaire. The questionnaire has N questions on it, each of which is a binary choice. These questions could be things like “Do you mostly agree with this complex political statement?”, “You are given a choice between these two scenarios; which do you pick?” etc. Things which are very much about your values and behaviours rather than simple statements of objective fact.

Each of these questions must be filled out twice. Once with your answer, once with what you think your partner is most likely to answer.

Basic questions it would be interesting to answer:

  • What questions are people particularly good or bad at predicting?
  • Is the prediction rate asymmetric? If one person predicts the other well, is the other likely to be better or worse at prediction?
  • Are predictions better when the answers are different or the same?

More advanced questions that would be interesting to answer:

  • How do prediction rates differ when we vary the length of the conversation?
  • How do prediction rates differ with different primings for the conversation? e.g. rather than instructing “Talk to each other”, say “Talk to each other about your family”
  • How do prediction rates differ if instead of talking to each other in person you talk via an instant messaging program? Or via a phone?
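The scoring for the basic questions is straightforward. Here’s a minimal sketch, assuming each questionnaire is recorded as a list of (own answer, predicted partner answer) pairs over the same questions; the data layout and names are mine.

```python
def accuracy(predictor, target):
    """Fraction of questions on which `predictor` correctly guessed
    `target`'s own answer. Each sheet is a list of
    (own_answer, predicted_partner_answer) booleans."""
    hits = sum(pred == own
               for (_, pred), (own, _) in zip(predictor, target))
    return hits / len(predictor)

# Three yes/no questions, recorded as (own answer, prediction).
alice = [(True, True), (False, True), (True, False)]
bob   = [(True, False), (False, True), (False, False)]
print(accuracy(alice, bob))  # Alice predicting Bob: 2/3
print(accuracy(bob, alice))  # Bob predicting Alice: 0/3 -- asymmetric
```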

Terra: A brief review

A link that’s gone past me a few times recently is to Terra, which is “a new low-level system programming language that is designed to interoperate seamlessly with the Lua programming language”.

It looks pretty interesting. I think it perhaps makes more sense to regard it as a nice library for generating C programs from Lua than as an entirely new language, but that’s both slightly splitting hairs and still pretty exciting in its own right.

Anyway, I occasionally write low level data processing code in C, and one thing that’s always annoying there is the lack of templated container types (yes, yes, I know I could use C++. But I don’ wanna), so I thought I’d have a go at writing one in Terra.

In accordance with my observation that I implement a hash table about every 6 months (it’s true, seriously. I don’t know why or how it happens, but if I don’t set out to do it then circumstances force me to), I decided to have a try at implementing one in Terra as a way to get a feel for the language.

Here’s the result. Be warned that it’s pretty far from a work of art – it’s the result of maybe an hour of casual hacking.

It’s a very dumb hash table, but it does the job.

What has the experience of writing it told me about Terra?

A couple of things.

As a concept, it has a lot going for it. I really liked how easy it was to integrate C – you can just import C headers and, if necessary, inline and compile C code right on the fly (note that in the example I just embedded a C hash function. Why? Well, because I couldn’t find the bitwise operators in Terra and thought it would be nice to try embedding the C one as a test case). As far as FFIs go, this is a pretty damn cool one.

In terms of templating and composability? It honestly worked exactly as promised. A++ would use again.

In terms of ease of writing? Well…

The Lua bits were Lua. It’s a perfectly passable language. If you haven’t used it, imagine JavaScript but slightly better designed. It has some quirks, but nothing worth complaining about.

The Terra bits felt awfully like writing C. That’s kinda the point.

But here’s the thing. Writing C is terrifying. There’s a lot that can go wrong.

The thing that makes writing C not terrifying is that there are good tools for dealing with most of those things that can go wrong. In particular, extensive compiler warnings for static analysis and Valgrind for dynamic analysis.

Terra has none of the former and appears to break the latter. As with most languages with garbage collection, it’s hard to run the main Lua script under Valgrind, and when emitting a pure Terra executable my first non-trivial Terra program seems to have encountered a bug in Terra’s interaction with Valgrind (it’s possible the bug is in Valgrind rather than Terra, but that wouldn’t be my first guess).

Edit: Actually it turns out the problem is that Terra is using AVX instructions which the version of Valgrind I had installed doesn’t support. Upgrading Valgrind fixes this, which makes me feel much better about using Terra.

Additionally, there are a couple of things that cause me to think it’s probably a lot easier to trigger these problems in Terra than it is in C. One thing that bugs me is that the mix of one-based indexing (which Lua uses) and zero-based indexing (which Terra uses) is basically an invitation to buffer overflows and off-by-one errors. Terra also seems very cavalier about the difference between values and pointers to values, which I’m not totally comfortable with but expect I’d get used to.

None of this is damning, but it’s enough to give me pause. It’s a lovely idea, and I hope it does well and improves, and I may well use it for some personal experiments, but right now the idea of writing anything that might be required to have any non-trivial security property in Terra would fill me with dread, which I think rather limits its use cases for me. If this isn’t something you care about, or if you have specific use cases which don’t need it, I’d definitely encourage you to check it out.
