Come join the cult of the giant global brain

Advance warning: I’ve had to rescue this post from the eternal draft pile of doom not once, not twice, but three times now. As such I’m publishing it in a slightly less polished state than I’d like it to be in to stop me from dropping it yet again. Corrections and requests for clarification welcome.

After my post about how I think I can fly convinced all of you that I should be taken entirely seriously, I figure it’s time to air another of my weird beliefs and persuade you all to join my bizarre, cultish religion.

Err. Wait, the other thing.

This is basically a post about how everything is, like, connected, man. It’s all one giant, global brain, woah.

No, the other other thing.

This is a post about how intelligence is perhaps a much more fluid concept than we give it credit for, and that if you believe in AI then you should probably believe in all sorts of other non-human or not-entirely-human intelligences too.

Actually this is something I started writing a while ago, and it got a bit out of control and ended up in the eternal drafts folder. I was reminded of it and figured it might be worth dusting off and extracting an interesting chunk of it (by rewriting it from scratch).

This is a post about something which I genuinely believe, not a thought experiment. It’s perhaps as close as I get to religion, although that’s not very close. I mean, sure, I am about to attempt to prove to you logically the existence of a supreme being, but not in a way that I think is incompatible with atheism. It’s not completely unrelated to my points about the fluidity of identity, but I don’t plan to make any explicit connection between the two in this post.

Consider the Turing test…

The idea is simple. You’re talking to a computer. It has to convince you that it’s a human being. If it succeeds, you must concede that you are at least as confident that it is a sentient being as you are of other humans.

The purpose of the Turing test is to explore the boundaries of what we regard as intelligence. It is not intended to be a working definition of intelligence (it is entirely possible that there are entities which we should regard as intelligent which nevertheless think entirely differently from human beings; many of them may even live on Earth). It is a sufficient condition for intelligence, not a necessary one.

There’s an interesting test case for whether we should believe the results of the Turing test called the Chinese room experiment.

The idea is this: We have the source code for an AI who speaks Chinese. I do not speak Chinese. I am locked in a room with this source code and enough pencil and paper to function as working memory, and someone passes in letters written in Chinese. I must manually run the program for the AI to respond to these letters.
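To make the mechanical nature of this concrete, here’s a minimal, purely hypothetical sketch in Python. The rule table and replies are invented placeholders (a real program good enough to pass for a fluent speaker would be vastly more complicated); the point is only that whoever executes the steps needs no understanding of what the symbols mean.

```python
# A toy "Chinese room" rule table. These rules are invented
# placeholders, nothing like a real conversational AI; the point is
# that executing them is pure symbol manipulation.
RULES = {
    "你好": "你好！",  # a greeting maps to a greeting
    "你会说中文吗？": "会。",  # "Can you speak Chinese?" -> "I can."
}

def respond(letter: str) -> str:
    """Match the incoming symbols against the rule table and copy out
    the corresponding reply. A person with pencil and paper can follow
    this procedure exactly, without knowing any Chinese."""
    return RULES.get(letter, "请再说一遍。")  # fallback: "Please say that again."

print(respond("你好"))  # prints: 你好！
```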

The experiment is intended to demonstrate that the AI does not “truly” understand Chinese because I am the person doing all the actions and I do not understand Chinese.

I disagree with this conclusion.

The conclusion I draw from this thought experiment is that (conceptually at least; in practice I assume this would be impractical) you can build an AI out of a human, paper and pencil.

I’d like you now to consider another variation on the Chinese room experiment. Rather than locking me in the room with the instructions for a Chinese AI, you lock me in the room with a skilled Chinese-to-English interpreter.

We can certainly imitate a person who speaks Chinese – the interpreter is a person who can speak Chinese! All they need to do to achieve that is to not talk to me.

But we can also do more than that. Unless this is one really exceptional interpreter (let’s assume that they are not) there are things I know more about than they do – mathematics, software development, the works of Hugh Cook, whatever. They can ask me questions about that.

So we can probably also do a fairly convincing imitation of someone who knows both a reasonable amount of mathematics and speaks Chinese. Depending on how the interpreter and I interact, this is unlikely to be literally “Me + I can speak Chinese” or “Them + they know maths”. It’s very much somewhere in between.

So you can build an AI out of two humans. This may seem like an uninteresting thing to do – you started off with two intelligent beings and you’re using them to simulate one.

But what’s interesting is that this AI we’ve built has a set of skills that neither of us can manage on our own. It’s a bit like what I’ve said about the effects of teamwork before (disclosure: I totally had the theme of this post in the back of my mind when I wrote that one).

This would also work if we were in separate rooms and communicating by letters. It would work differently, and possibly less well, because of the details of how actual people interact and the problems with delays in communication, but it would still work.

What am I getting at here?

My point is this: if you have two entities which the Turing test forces you to regard as intelligent, you can put them into a configuration that the Turing test forces you to regard as intelligent, and that entity will behave interestingly differently from its constituents.

And you can of course chain this. If you have three people, put them in a line of Chinese rooms where each can communicate only with its immediate neighbours and you can only communicate with one of the ends of the line. That person can ask questions of the person next to them, who can in turn pass questions further down the line.

This will of course cause a massive slowdown and fidelity loss. If I’m sitting third in the line and between me and our Chinese interpreter there’s someone who only speaks French and, while they are a very good student of 19th century religious poetry, isn’t particularly strong at mathematics, they’re going to be a bit of a barrier to any questions about functional analysis that the interpreter wants to ask me (my French is nicht so bien, even if our interpreter also happens to speak French).

But other configurations are also possible. For example, there could be a fan shape where the interpreter can talk to either of us directly and consult us with questions. Then we genuinely could present as a Chinese-speaking individual with extensive knowledge of both mathematics and 19th century French religious poetry. This then works perfectly well no matter how many people we add in (although at some point I expect our spokesperson will start to lose track).
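Here’s an illustrative Python sketch of the two configurations, the line of rooms and the fan. The “people” are trivial stand-in functions of my own invention, and the fidelity loss is crudely simulated by tagging each relayed message; don’t read it as anything more than a cartoon of the setup.

```python
from typing import Callable, List

# Model each person as a function from a message to a reply.
# These are toy stand-ins, not simulations of real expertise.
Person = Callable[[str], str]

def chain(people: List[Person], question: str) -> str:
    """Line-of-rooms configuration: the question is relayed neighbour
    to neighbour until it reaches the last person, and the answer is
    relayed all the way back. Every hop can garble the message."""
    if len(people) == 1:
        return people[0](question)
    relayed = people[0](question)        # first person restates the question...
    answer = chain(people[1:], relayed)  # ...and passes it down the line
    return people[0](answer)             # then relays the answer back out

def fan(spokesperson: Person, specialists: List[Person], question: str) -> str:
    """Fan configuration: the spokesperson consults every specialist
    directly and synthesises their answers into a single reply."""
    answers = [specialist(question) for specialist in specialists]
    return spokesperson("; ".join(answers))

# Toy usage: each relay tags the message, making the degradation visible.
relay = lambda msg: f"(as I understood it) {msg}"
expert = lambda msg: f"the answer to '{msg}' is 42"
print(chain([relay, relay, expert], "a question about functional analysis"))
print(fan(relay, [expert, expert], "a question for everyone"))
```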

Let’s take the language question out of the picture for now. Imagine we all speak Dutch (why Dutch? No particular reason, except maybe that hexapodia is the key insight, but given how multilingual our supposed group is here, English seemed presumptuous).

Now there are many more configurations open to us. Any one of us can be the spokesperson, and each different spokesperson will likely present in a qualitatively different manner: we will present with roughly the spokesperson’s personality, and areas of knowledge close to their own expertise will come through far more clearly than more distant ones (an engineer will be able to convey abstract mathematical answers from a mathematician far better than a student of literature will).

Moreover our group can self-organise to pick one of these configurations. If, as a group, we decide we want to pass the Turing test, we can do so. The exact manner of our deciding doesn’t matter – we might do it by voting, we might do it by lot, we might do it by trial by combat. The point is that in some way we can arrive at a way of organising ourselves that passes the Turing test.

So our group of N people can, if it decides to (e.g. because we’ve said "Hey, we’ll let you out of this Chinese room we’ve locked you in if you do"), pass the Turing test. And it does so in a way that is potentially recognisably very distinct from the way in which any of its constituents passes the Turing test.

Which finally brings me to the point of this whole post:

If we accept that AI is a possibility, and that intelligence is a function of behaviour rather than the body in which it resides, we must also accept that intelligence may also reside in a group of individuals.

Further, I would argue that this intelligence is recognisably distinct from the intelligence of a single human being. If nothing else, you quickly get to the point where it possesses far more skills than it’s plausible for a single person to possess, but it also shows through in things like voting paradoxes: if people are voting on their answers then you can expose apparent inconsistencies in the group’s opinions much more easily than you can for a single individual.
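To see the kind of inconsistency I mean, here is Condorcet’s classic voting paradox, sketched in Python: three voters, each with a perfectly consistent individual ranking, produce a group whose majority preferences form a cycle. (The voters and options are the standard textbook example, not anything specific to our Chinese rooms.)

```python
# Condorcet's voting paradox: three voters, each with an internally
# consistent ranking (best option first).
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def prefers(ranking, x, y):
    """True if this voter ranks x above y."""
    return ranking.index(x) < ranking.index(y)

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    return sum(prefers(v, x, y) for v in voters) > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True: the group prefers A to B, B to C and
# C to A. No individual with a coherent ranking could answer that way.
```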

Does it make sense to say that an intelligence composed entirely out of human beings is non-human?

Simply, yes.

If you accept that the concept of nonhuman intelligence is not intrinsically incoherent, and that you can write a program to simulate one, then we’re back to our Chinese room building a nonhuman intelligence out of a human, paper and pencil. While the paper and pencil are important, fundamentally the ability to execute the program here comes from the human. We’ve built a nonhuman intelligence out of a single human, so why can’t we do it out of many?

So we stuck a bunch of people in a room and hey, presto, a nonhuman intelligence resulted. Magic!

Now what happens if we simply… take the room away.

It seems clear to me that the room is not magic. It did not create intelligence. It may have prescribed the ways in which the intelligence may operate, but in the same way that a program is not made intelligent by taking the Turing test, the group is not made into an intelligence by the room. The group is an intelligence in its own right; the room and the test are merely the means by which we observed this, when the group spontaneously decided to impersonate a human being and you judged its impersonation passable.

So we should regard the group as a non-human intelligence.

An amusing corollary of that is that we should regard the group of all humans as intelligent.

Like, a giant global brain, man. Woah.

Come join its cult. We have cookies.


7 thoughts on “Come join the cult of the giant global brain”

  1. James Heaver

    We’ve had this conversation in the past I think. This is essentially what I believe.

    The groupings that you look at are also fluid and overlapping. The people who live or work in London are an entity with a personality. The people who are mathematicians are an entity. Some mathematicians live or work in London and contribute to both entities.

    This is also the closest I get to patriotism. I am part of the intelligence known as England (also Britain, the EU, cosmopolitan cities, Western Civilisation, the World). England are the parents who memetically bore me, and the child I have a memetic influence on. I can be both proud of (BBC, NHS, Tolpuddle Martyrs, Bovril) and condemn (Imperialism, Tories, Sexual repression, Top Gear) my parents and children for various actions. But ultimately they are of me and I of them. The difference from traditional patriotism is that Englishness is just one of the intelligences to which I belong, and by no means the most important.

    Entities can also interact in interesting ways. Britain’s communication with America happens in various ways – both official and unofficial diplomatic and political routes, and human interactions and cultural communication. Holidays, marriages, migration, British celebrities in America, American celebrities in Britain. All of these interactions present a personality of the other country.

    I also think you can extend it the other way and consider evolution to be an intelligent, problem-solving process (not that I’m ascribing intentionality to evolution, but then I don’t necessarily ascribe intentionality to all human intelligence).

    Ultimately I believe that intelligence starts far below us and extends up to an intelligence of the whole universe. I call this whole range of intelligence “God”. A God which knows of us no more than a tea leaf knows the history of the East India Company, nor the East India Company the life and dreams of a tea leaf.

    Ken Macleod mentions in passing some beliefs like this in a couple of his books. I think he had a name for it, but I can’t remember.

  2. Thomas Themel

    Thanks, I’ll try those cookies, but stick to my own Kool Aid, thanks…

    Casual science fiction recommendation that you’ll probably have read: A Fire Upon the Deep (has, among other fun stuff, group minds). Revelation literature recommendation that I’m not sure about – The Beginning of Infinity.

    1. david (Post author)

      I have indeed read, and liked, Fire Upon the Deep. It’s not really the same sort of thing, as I’m not positing that our “group minds” are quite that directly relatable as sentient entities in their own right, so much as that they are useful to think of as intelligent.

      The Beginning of Infinity looks… interesting. I’m not sure if in a good way or not. I have a longish stack of non-fiction to read at the moment, but I’ll stick it on the backlog. :-)

  3. Pingback: Customers who read this blog might also like… | David R. MacIver

  4. Pingback: Thesis: Job hopping is good for companies | David R. MacIver

  5. Pingback: Warning: This blog has secret mind control powers | David R. MacIver

  6. Pingback: Best of drmaciver.com | David R. MacIver

Comments are closed.