
A refinement to the auction game

I did some thinking about the auction game I previously designed and I decided on some refinements that I think would make it a better game.

Specifically, ditching the “upkeep” and “cost” numbers.

Instead, when you draw a tile it immediately becomes yours (you can of course immediately auction it as one of your moves if you want). You may place it anywhere on your board as usual. This eliminates cost as a variable that needs balancing, and also means that being behind isn’t a crippling disadvantage.

Upkeep is now played as follows: Your starting tile requires no upkeep. Everything else requires upkeep equal to its distance from the starting tile. Note that removing tiles may change the upkeep of your other tiles, so be careful!

As a side note I’ve been thinking about the physical mechanism of playing. In particular:

When you place a tile up for auction, put it in the middle of the table. If you win with your reserve, then when you reclaim it you may place it anywhere you like. So an auction action includes a move action for free.

When collecting income: Put $1 on each hex. Then, for each hex, put an additional $1 on it for each matched colour on its border. Finally, collect all that money.

When paying upkeep: Put money on hexes equal to their upkeep cost. You may not put money on a hex unless there is already money on some hex connecting it to the starting tile through a trail of moneyed hexes, and when you put money on a hex it must be one higher than the money on the cheapest connecting hex. Note that if you then backfill, double-check that you haven’t missed taking excess money back off any hexes whose cost has dropped. Once this process is complete, take all the money off the hexes and give it to the bank.
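As an aside, that upkeep procedure is just a breadth-first distance calculation from the starting tile; the “one higher than the cheapest moneyed neighbour” rule is how you compute it by hand. Here is a minimal sketch of the same calculation in Python, assuming axial hex coordinates and a hypothetical upkeep_costs helper (an illustration, not code from the actual game):

```python
# A sketch of the upkeep calculation; not part of the actual game materials.
# Tiles live on axial hex coordinates; upkeep = distance from the starting tile.
from collections import deque

HEX_NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def upkeep_costs(tiles, start=(0, 0)):
    """Breadth-first search outward from the starting tile.

    `tiles` is the set of coordinates you currently occupy. Returns a dict
    mapping each reachable tile to its upkeep; tiles cut off from the start
    (e.g. after a removal) simply don't appear.
    """
    costs = {start: 0}
    frontier = deque([start])
    while frontier:
        q, r = frontier.popleft()
        for dq, dr in HEX_NEIGHBOURS:
            neighbour = (q + dq, r + dr)
            if neighbour in tiles and neighbour not in costs:
                costs[neighbour] = costs[(q, r)] + 1
                frontier.append(neighbour)
    return costs

# A straight line of three tiles east of the start owes 1 + 2 + 3 = 6 in total.
board = {(0, 0), (1, 0), (2, 0), (3, 0)}
print(sum(upkeep_costs(board).values()))  # -> 6
```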

And, in related news, I have a set of tiles for this game being delivered! I decided this evening to have a play with The Game Crafter and put together a set of hexes for it. I wrote a small Python script using PIL to generate a set of hexes, uploaded them, wrote another small Python script using requests to auto-proof them (I checked a few, but I had no interest in manually clicking proof on 80 identical hexagons), and sorted. I’ve placed an order and they should be with me in a few weeks.

For your delectation, have some of the hexagons I generated. A work of art they ain’t, but I’m still relatively pleased with them.

[Images: hex01, hex19, starting_hex1, starting_hex2]


But enough about me…

Man, I sure do write a lot on here, don’t I? I practically hog the microphone. Selfish, that’s what I call it. It’s like I think this blog is all about me. Well, if this blog is all about me why isn’t my name on it? Answer me that?

Err.

Right.

Anyway, enough about me. Tell me about you and some of the stuff you’ve been doing recently. Open thread. Please feel free to use it for shameless self promotion of projects, blog posts, etc.


Problem solving when you can’t handle the truth

I’m in the process of reading The Alex Benedict series by Jack McDevitt. It’s quite good. Not amazing, but quite good. Our protagonist is an antiquities dealer in the far future. The far future is remarkably un-post-singularity, and is indeed quite like the present only with star ships, AIs and human simulations, but setting that aside it’s an interesting vision of what living in a world with a lot of high-tech history is like. At the point of this series, humanity has been space-faring for longer than we’ve currently had written language.

That’s not what this post is about though.

There’s an interesting instance of practical problem solving in the first book. What follows is a moderately spoilery and highly paraphrased version of what happens (it shares literally no writing with the actual version, which is longer and much better written; this is a condensed version to illustrate the point I’m making):

Our protagonists, Alex and Chase, are on a derelict ship they have found after their own ship was destroyed. It’s old but broadly operational.

Chase: Enemy cruiser bearing down on us. Yikes.
Alex: OK. Let’s head out of the gravity well and prepare to go to warp.
Chase: Um. No.
Alex: ?
Chase: This ship doesn’t have FTL. Do you remember the great big hole where the warp engines on this ship are supposed to be?
Alex: Yeah, but maybe they solved that.
Chase: ???
Alex: This ship had to have got here somehow. The computer is claiming the FTL is working. Therefore we should give some credence to the idea that this ship has magical FTL we don’t understand.
Chase: That is the most ridiculous thing I’ve ever heard.
Alex: Look. We’re completely hosed if it’s not true. These people will never let us live. Therefore if we don’t have FTL we’re dead. Therefore there’s no point worrying about that possibility, and we must proceed as if our ship has magical FTL the likes of which we know not.

Unsurprisingly, this being a novel, Alex and Chase get out of the gravity well after a dramatic scene or two and their ship does indeed turn out to have magical FTL powers.

Real life, sadly, is not a novel. In a more realistic scenario it is entirely likely that they would get out of the gravity well and the computer would say “Oh, yeah. Sorry, my bad. Software glitch. It’s totes not possible to go to warp because you don’t have any frickin’ engines”. At which point the enemy ship would fire the photon torpedoes (note: the actual in-book terminology is way less Star Trek than I’m making it out to be) and reduce Alex and Chase to a thin smear of very annoyed hyper-charged particles.

But that’s OK.

Well, I mean, it’s not OK for Alex and Chase. They’re a bit dead.

But it’s OK in the sense that it does not in any way invalidate Alex’s reasoning strategy.

You see, it may look superficially like Alex is trying to answer the question “Do I have FTL capability?”. He has formed a hypothesis (“I have magical FTL capability despite my lack of warp engines”) and he is performing an experiment to test that hypothesis (“I will go out to warp range and press the big red button”).

This is not what Alex is trying to do. Alex is in fact trying to survive.

He does not have any convincing evidence that he has a warp drive. In fact, he has strong evidence that he does not. But if he doesn’t, then there’s literally nothing he can do about it. There is no feasible solution to the survival problem in that case, so he doesn’t worry about it. He proceeds as if the thing he needs to survive is true and if not, well, he’s dead anyway. Such is life.
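To make the shape of that argument concrete, here it is with some entirely made-up numbers (mine, not the book’s; the probabilities and plan names are purely illustrative):

```python
# Entirely made-up numbers, just to show the shape of the argument.
P_FTL = 0.05  # how likely it is that the derelict really has a working drive

plans = {
    # plan name: (chance of surviving if the FTL is real, chance if it isn't)
    "act as if the FTL works": (0.90, 0.00),
    "act as if it doesn't":    (0.00, 0.00),  # no feasible escape in that world anyway
}

for name, (p_if_ftl, p_if_not) in plans.items():
    expected = P_FTL * p_if_ftl + (1 - P_FTL) * p_if_not
    print(f"{name}: expected survival ~ {expected:.3f}")

# Acting on the unlikely-but-useful hypothesis wins, 0.045 to 0.000.
```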

This is an extremely powerful reasoning strategy.

It holds true in cases other than certain death as well: In general, when considering what hypotheses to test and what possibilities to worry about, you should consider not just “How likely is this to be true?” but also “How likely is finding out this is true to be helpful?”

This is one of the reasons for Occam’s razor.

Occam’s razor is less a fundamental truth about the universe (there are some arguments in favour of it as such, but I’ve not found them terribly convincing) and more a pragmatic tool for problem solving.

Occam’s razor states that given two theories explaining the data equally well, you should prefer the simpler one.

Why? Well, because the simpler one is far more useful if it’s true. A simple theory is easier to make predictions with. A complex theory might be true, but if it is, our life is quite difficult and that’s not very useful, so we should first rule out the more helpful possibilities.

There needs to be a balancing act here of course. If I have two options, one fairly unhelpful but likely and one extremely helpful but pretty unlikely, I should probably spend more time worrying about the former rather than the latter.

If I had to boil this down into a general maxim, I think it would be the following: Take actions which maximize your chance of success, not ones which maximize your chance of finding out the truth.

Sometimes these two paths coincide. Perhaps even often. Sometimes though, you can’t handle the truth, and you’re probably better off not worrying about those cases.


A crude voting simulation

I decided to perform a fairly crude voting simulation to compare different ranked voting methods.

The simulation was performed as follows: each individual is a point on the unit sphere in N-dimensional Euclidean space (I chose N = 3), and the agreement of two individuals is their dot product. A number of candidates (I chose 6) were drawn uniformly at random on this sphere. 500 more individuals were then drawn, again uniformly at random, each voting for the candidates in order of how much they agree with them.
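For concreteness, here is a minimal sketch of that setup, assuming numpy (this is my reconstruction with a hypothetical simulate_ballots helper, not the actual code linked further down):

```python
# A reconstruction of the setup described above; not the original script.
import numpy as np

def random_points(n, dimensions, rng):
    """Draw n points uniformly at random on the unit sphere."""
    points = rng.normal(size=(n, dimensions))
    return points / np.linalg.norm(points, axis=1, keepdims=True)

def simulate_ballots(n_voters=500, n_candidates=6, dimensions=3, seed=0):
    """Return an (n_voters, n_candidates) array of rankings, most-agreed-with first."""
    rng = np.random.default_rng(seed)
    voters = random_points(n_voters, dimensions, rng)
    candidates = random_points(n_candidates, dimensions, rng)
    agreement = voters @ candidates.T      # dot product = agreement between voter and candidate
    return np.argsort(-agreement, axis=1)  # each row: candidate indices in preference order
```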

My question was this: With what frequency would various voting systems produce the same answer?

Systems I implemented were First Past The Post, Alternative Vote, Random Ballot (pick a random voter, use their top choice), Random Ranked Pair (pick two random candidates, use the one the majority of voters preferred), and Random Condorcet (pick a random generalized Condorcet winner).
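In case those one-line descriptions are too terse, here are sketches of three of the systems, operating on the ballots array from the snippet above (again a reconstruction rather than the linked code, which may differ in details such as tie-breaking):

```python
# Sketches of three of the systems, using the rankings from simulate_ballots above;
# illustrative only, not the original implementation.
import numpy as np

def first_past_the_post(ballots):
    """The candidate with the most first preferences wins."""
    return np.bincount(ballots[:, 0]).argmax()

def random_ballot(ballots, rng):
    """Pick a random voter and use their top choice."""
    return ballots[rng.integers(len(ballots)), 0]

def alternative_vote(ballots):
    """Repeatedly eliminate the candidate with the fewest top preferences (ties broken arbitrarily)."""
    remaining = set(range(ballots.shape[1]))
    while len(remaining) > 1:
        tops = [next(c for c in row if c in remaining) for row in ballots]
        counts = {c: sum(1 for t in tops if t == c) for c in remaining}
        remaining.remove(min(counts, key=counts.get))
    return remaining.pop()
```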

I ran 2000 simulations of this. Here are the agreement stats:

AV FPTP RandomBallot RandomCondorcet RandomRankedPair
AV 100.00% 44.80% 22.50% 41.30% 24.60%
FPTP 44.80% 100.00% 25.50% 29.60% 18.35%
RandomBallot 22.50% 25.50% 100.00% 19.05% 17.05%
RandomCondorcet 41.30% 29.60% 19.05% 100.00% 27.70%
RandomRankedPair 24.60% 18.35% 17.05% 27.70% 100.00%

Here is the same simulation run with the number of dimensions increased to 6:

AV FPTP RandomBallot RandomCondorcet RandomRankedPair
AV 100.00% 43.75% 20.95% 42.10% 24.00%
FPTP 43.75% 100.00% 23.75% 29.60% 19.90%
RandomBallot 20.95% 23.75% 100.00% 19.45% 17.05%
RandomCondorcet 42.10% 29.60% 19.45% 100.00% 25.50%
RandomRankedPair 24.00% 19.90% 17.05% 25.50% 100.00%

And here it is decreased to 2:

AV FPTP RandomBallot RandomCondorcet RandomRankedPair
AV 100.00% 46.80% 25.60% 47.10% 20.90%
FPTP 46.80% 100.00% 29.50% 33.25% 17.55%
RandomBallot 25.60% 29.50% 100.00% 20.45% 16.10%
RandomCondorcet 47.10% 33.25% 20.45% 100.00% 26.00%
RandomRankedPair 20.90% 17.55% 16.10% 26.00% 100.00%

I don’t really have anything interesting to say about these results other than “Huh. Different voting systems sure do produce different results, don’t they?”. It’s perhaps mildly interesting that AV is about as close to Random Condorcet as it is to FPTP, but I’m not sure that’s really significant. I’m mostly just posting this because I believe in publishing null results.

Here’s the code if you want to have a play with it. Advance warning: A work of finely crafted engineering it ain’t.

Update: One slightly interesting result from this. I modified it to add my Condorcet variant on AV into the mix to see how it would do. It agrees with stock AV about 48% of the time and with “random Condorcet winner” about 71% of the time. This suggests that most of the time we’ve got a unique Condorcet winner here and AV isn’t picking it. To test this I decided to look at the distribution of the number of Condorcet winners:

Number of Condorcet winners Number of runs
1 1299
2 50
3 79
4 101
5 149
6 322

Which is about right. About 65% of the time there’s a unique Condorcet winner, and Condorcet-AV agrees with a random Condorcet winner some fraction of the rest of the time.
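For reference, here is roughly how one might check for that unique Condorcet winner from a pairwise tally (a sketch with hypothetical pairwise_wins and strict_condorcet_winner helpers; the post doesn’t define “generalized Condorcet winner”, so I haven’t tried to reproduce that part):

```python
# Pairwise tally plus a strict Condorcet winner check; a sketch, not the original code.
import numpy as np

def pairwise_wins(ballots):
    """wins[a, b] = number of voters ranking candidate a above candidate b."""
    n_candidates = ballots.shape[1]
    wins = np.zeros((n_candidates, n_candidates), dtype=int)
    for row in ballots:
        for i, a in enumerate(row):
            for b in row[i + 1:]:
                wins[a, b] += 1
    return wins

def strict_condorcet_winner(ballots):
    """The candidate who beats every other head to head, or None if nobody does."""
    wins = pairwise_wins(ballots)
    candidates = range(wins.shape[0])
    for a in candidates:
        if all(wins[a, b] > wins[b, a] for b in candidates if b != a):
            return a
    return None
```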


Come join the cult of the giant global brain

Advance warning: I’ve had to rescue this post from the eternal draft pile of doom not once, not twice, but three times now. As such I’m publishing it in a slightly less polished state than I’d like it to be in to stop me from dropping it yet again. Corrections and requests for clarification welcome.

After my post about how I think I can fly convinced all of you that I should be taken entirely seriously, I figure it’s time to air another of my weird beliefs and persuade you all to join my bizarre, cultish religion.

Err. Wait, the other thing.

This is basically a post about how everything is, like, connected man. It’s all one giant, global, brain, woah.

No, the other other thing.

This is a post about how intelligence is perhaps a much more fluid concept than we give it credit for, and about how, if you believe in AI, you should probably believe in all sorts of other non-human or not-entirely-human intelligences too.

Actually this is something I started writing a while ago, and it got a bit out of control and ended up in the eternal drafts folder. I was reminded of it and figured it might be worth dusting off and extracting an interesting chunk of it (by rewriting it from scratch).

This is a post about something which I genuinely believe, not a thought experiment. It’s perhaps as close as I get to religion, although that’s not very close. I mean, sure, I am about to attempt to prove to you logically the existence of a supreme being, but not in a way that I think is incompatible with atheism. It’s not completely unrelated to my points about the fluidity of identity, but I don’t plan to make any explicit connection between the two in this post.

Consider the Turing test…

The idea is simple. You’re talking to a computer. It has to convince you that it’s a human being. If it succeeds, you must concede that you are at least as confident that it is a sentient being as you are that other humans are.

The purpose of the Turing test is to explore the boundaries of what we regard as intelligence. It is not intended to be a working definition of intelligence (it is entirely possible that there are entities which we should regard as intelligent which nevertheless think entirely differently from human beings. Many of them may even live on earth). It is a sufficient condition for intelligence, not a necessary one.

There’s an interesting test case for whether we should believe the results of the Turing test called the Chinese room experiment.

The idea is this: We have the source code for an AI who speaks Chinese. I do not speak Chinese. I am locked in a room with this source code and enough pencil and paper to function as working memory, and someone passes in letters written in Chinese. I must manually run the program for the AI to respond to these letters.

The experiment is intended to demonstrate that the AI does not “truly” understand Chinese because I am the person doing all the actions and I do not understand Chinese.

I disagree with this conclusion.

The conclusion I draw from this thought experiment is that (conceptually at least; I assume actually doing it would be impractical) you can build an AI out of a human, paper and pencil.

I’d like you now to consider another variation on the Chinese room experiment. Rather than locking me in the room with the instructions for a Chinese AI, you lock me in the room with a skilled interpreter of Chinese to English.

We can certainly imitate a person who speaks Chinese – the interpreter is a person who can speak Chinese! All they need to do to achieve that is to not talk to me.

But we can also do more than that. Unless this is one really exceptional interpreter (let’s assume that they are not) there are things I know more about than they do – mathematics, software development, the works of Hugh Cook, whatever. They can ask me questions about that.

So we can probably also do a fairly convincing imitation of someone who knows both a reasonable amount of mathematics and speaks Chinese. Depending on how the interpreter and I interact, this is unlikely to be literally “Me + I can speak Chinese” or “Them + they know maths”. It’s very much somewhere in between.

So you can build an AI out of two humans. This may seem like an uninteresting thing to do – you started off with two intelligent beings and you’re using them to simulate one.

But what’s interesting is that this AI we’ve built has a set of skills that neither of us can manage on our own. It’s a bit like what I’ve said about the effects of teamwork before (disclosure: I totally had the theme of this post in the back of my mind when I wrote that one).

This would also work if we were in separate rooms and communicating by letters. It would work differently, and possibly less well, because of the details of how actual people interact and the problems with delays in communication, but it would still work.

What am I getting at here?

My point is this: If you have two entities which the Turing test forces you to regard as intelligent, you can put them into a configuration that the Turing test forces you to regard as intelligent, and that entity will behave interestingly differently from its constituents.

And you can of course chain this. If you have three people, put them in a line of Chinese rooms where each can communicate only with its immediate neighbours and you can only communicate with one of the ends of the line. That person can ask questions of the person next to them, who can in turn pass questions further down the line.

This will of course cause a massive slow down and fidelity loss. If I’m sitting third in the line and between me and our Chinese interpreter there’s someone who only speaks French and, while they are a very good student of 19th century religious poetry, isn’t particularly strong at mathematics, they’re going to be a bit of a barrier to any mathematical questions the interpreter wants to ask me about functional analysis (my French is nicht so bien, even if our interpreter also happens to speak French).

But other configurations are also possible. For example, there could be a fan shape where the interpreter can talk to any of us and consult us with questions. Then we genuinely could present as a Chinese-speaking individual with extensive knowledge about both mathematics and 19th century religious French poetry. This then works perfectly well no matter how many people we add in (although at some point I expect our spokesperson will start to lose track).

Let’s take the language question out of the picture for now. Imagine we all speak Dutch (Why Dutch? No particular reason, except maybe that hexapodia is the key insight, but given how multilingual our supposed group is here, English seemed presumptuous).

Now there are many more configurations open to us. Any one of us can be the spokesperson, and each different spokesperson will likely present in a qualitatively different manner: we will present as roughly the spokesperson’s personality, and areas of knowledge close to their original area of expertise will come through far more clearly than ones which are further away (an engineer will be able to convey abstract mathematical answers from a mathematician far better than a student of literature will).

Moreover, our group can self-organise to pick one of these configurations. If, as a group, we decide we want to pass the Turing test we can do so. The exact manner of our deciding doesn’t matter – we might do it by voting, we might do it by lot, we might do it by trial by combat. The point is that in some way we can arrive at a way of organising ourselves that passes the Turing test.

So our group of N people can, if it decides to (e.g. because we’ve said “Hey, we’ll let you out of this Chinese room we’ve locked you in if you do”), pass the Turing test. And it does so in a way that is potentially recognisably very distinct from the way in which any of its constituents passes the Turing test.

Which finally brings me to the point of this whole post:

If we accept that AI is a possibility, and that intelligence is a function of behaviour rather than the body in which it resides, we must also accept that intelligence may also reside in a group of individuals.

Further, I would argue that this intelligence is recognisably distinct from the intelligence of a single human being. If nothing else, you quickly get to the point where it possesses far more skills than it’s plausible for a single person to possess, but it also shows through in things like voting paradoxes: If people are voting on their answers then you can expose apparent inconsistencies in their opinions much more easily than you can for a single individual.

Does it make sense to say that an intelligence composed entirely out of human beings is non-human?

Simply, yes.

If you accept that the concept of nonhuman intelligence is not intrinsically incoherent, and that you can write a program to simulate one, then we’re back to our Chinese room building a nonhuman intelligence out of a human, paper and pencil. While the paper and pencil are important, fundamentally the ability to execute the program here comes from the human. We’ve built a nonhuman intelligence out of a single human, so why can’t we do it out of many?

So we stuck a bunch of people in a room and hey, presto, a nonhuman intelligence resulted. Magic!

Now what happens if we simply… take the room away.

It seems clear to me that the room is not magic. It did not create intelligence. It may have prescribed the ways in which the intelligence could operate, but in the same way that a program is not made intelligent by taking the Turing test, the group is not made into an intelligence by the room. The group is an intelligence in its own right; the room and the test are merely the means by which we observed it, when it spontaneously decided to impersonate a human being and you judged its impersonation passable.

So we should regard the group as a non-human intelligence.

An amusing corollary of that is that we should regard the group of all humans as intelligent.

Like, a giant global brain, man. Woah.

Come join its cult. We have cookies.
