Category Archives: Performing philosophy without a license

Determinism is topologically impossible

I’ve been doing some work on topological models of decision making recently (I do weird things for fun) and a result popped out of it that I was very surprised by, despite it being essentially just a restatement of some elementary definitions in topology.

The result is this: Under many common models of reality, there are no non-trivial deterministic experiments we can perform even if the underlying reality is deterministic.

The conclusion follows from two very simple assumptions (either of which may be wrong but both of which are very physically plausible).

  1. We are interested in some model of reality as a connected topological space \(X\) (e.g. \(\mathbb{R}^n\), some Hilbert space of operators, whatever).
  2. No experimental outcome can give us infinite precision about that model. i.e. any experimental outcome only tells us where we are up to membership of some open subset of \(X\).

Under this model, regardless of the underlying physical laws, any fully deterministic experiment tells us nothing about the underlying reality.

What does this mean?

Let \(X\) be our model of reality and let \(\mathcal{O}\) be some set of experimental outcomes. A deterministic experiment is some function \(f: X \to \mathcal{O}\).

By our finite precision assumption, each of the sets \(U_o = \{x \in X: f(x) = o\}\) is open. And if \(f(x) = o\) and \(o \neq o’\) then \(f(x) \neq o’\), so \(x \not\in U_{o’}\). Therefore the sets are pairwise disjoint.

But certainly \(x \in U_{f(x)}\), so they also cover \(X\).

But we assumed that \(X\) is connected, so it cannot be covered by two or more disjoint non-empty open sets. Therefore exactly one of these sets is non-empty, and thus \(X = U_o\) for some \(o\). i.e. \(f\) constantly takes the value \(o\), and as a result tells us nothing about where we are in \(X\).
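To see the whole argument at a glance: the three observations above say precisely that

\[ X = \bigcup_{o \in \mathcal{O}} U_o, \qquad U_o \cap U_{o'} = \emptyset \ \text{ for } o \neq o', \qquad \text{each } U_o \text{ open}, \]

i.e. the \(U_o\) partition \(X\) into open sets, and connectedness is exactly the statement that no such partition has more than one non-empty piece.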

Obviously this is a slightly toy model, and the conclusion is practically baked into the premise, so it might not map to reality that closely.

But how could it fail to do so?

One way it can’t fail is if the underlying reality “really” is disconnected. That doesn’t matter, because this is not a result about the underlying reality: it’s a result about models of reality, and most of our models are connected regardless of whether the underlying reality is. But if our model itself is somehow disconnected (e.g. we live in some simulation by a cellular automaton) then this result doesn’t apply.

It could also fail because we have access to experiments that grant us infinite precision. That would be weird, and certainly doesn’t correspond to any sort of experiment I know about – mostly the thing we measure reality with is other reality, which tends to put a bound on how precise we can be.

It could also fail to be interesting in some cases. For example, if our purpose is to measure a mathematical constant that we’re not sure how to calculate, then we actively want the result of our experiment to be a constant function (but note that this only holds for mathematical constants: physical constants that vary depending on where we are in the space of our model don’t get this get-out clause).

There are also classes of experiments that don’t fall into this construction: for example, it might be that \(\mathcal{O}\) itself has some topology on it, our experiments are actually continuous functions into \(\mathcal{O}\), and we can’t actually observe which point of \(\mathcal{O}\) we get, only its value up to some open set. Indeed, the experiments we’ve already considered are the special case where \(\mathcal{O}\) is discrete. The problem is that then \(f(X)\) is a connected subset of \(\mathcal{O}\), so we’ve just recursed to the problem of determining where we are in \(\mathcal{O}\)!

You can also have experiments that are deterministic whenever they work but tell you nothing when they fail. For example, you could have an experiment that returns \(1\) or \(0\): whenever it returns \(1\) you know you’re in some open set \(U\), but when it returns \(0\) you might or might not be in \(U\) – you have no idea. This corresponds to the above case of \(\mathcal{O}\) having a topology, where we let \(\mathcal{O}\) be the Sierpinski space. It works by giving up on the idea that \(0\) and \(1\) are “distinguishable” elements of the output space: under this topology the set \(\{0\}\) is not open, so the set \(U_0\) need not be open either, and the connectivity argument falls apart.
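Concretely (this is just the standard definition, nothing specific to the argument here): the Sierpinski space is \(S = \{0, 1\}\) with the topology

\[ \tau = \{\, \emptyset, \ \{1\}, \ \{0, 1\} \,\}. \]

A continuous \(f: X \to S\) must have \(f^{-1}(\{1\}) = U\) open, but since \(\{0\}\) is not open, nothing forces \(f^{-1}(\{0\})\) to be open, and the disjoint-open-cover argument never gets started.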

And finally, and most interestingly, our experiment might just not be defined everywhere.

Consider a two parameter model of reality. e.g. our parameters are the mass of a neutron and the mass of a proton (I know these vary because binding energy or something, but let’s pretend they don’t for simplicity of example). So our model space is \((0, \infty)^2\) – a model which is certainly connected, and it’s extremely plausible that we cannot determine each value to more than finite precision. Call these parameters \(u\) and \(v\).

We want an experiment to determine whether protons are more massive than neutrons.

This is “easy”. We perform the following sequence of experiments: we measure each of \(u\) and \(v\) to within \(\frac{1}{n}\). If the measured values differ by more than \(\frac{2}{n}\) then we know the masses precisely enough to answer the question, so we can stop and return the answer. If not, we increase \(n\) and try again.

Or, more abstractly, we know that the sets \(u > v\) and \(u < v\) are open subsets of our model, so we just return whichever one we’re in.

These work fine, except for the pesky case where \(u = v\) – protons and neutrons are equally massive. In that case our first series of experiments never terminates and our second one has no answer to return.
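Here is a minimal sketch of that first procedure in Python, purely for illustration. The function measure is an invented stand-in for a finite-precision measurement – it takes the true value as an argument, which of course no real experiment gets to do – and the point is just that the loop terminates exactly when \(u \neq v\):

```python
import random


def measure(true_value, eps):
    # Invented stand-in for a finite-precision measurement:
    # it returns the true value perturbed by at most eps.
    return true_value + random.uniform(-eps, eps)


def protons_heavier(true_u, true_v):
    """Semi-decision procedure: returns True or False when the
    masses differ, but loops forever when true_u == true_v."""
    n = 1
    while True:
        eps = 1.0 / n
        u = measure(true_u, eps)
        v = measure(true_v, eps)
        # Each reading is off by at most eps, so a measured gap
        # larger than 2 * eps cannot be noise alone: the true
        # values must differ, and in the same direction.
        if abs(u - v) > 2 * eps:
            return u > v
        n += 1  # not precise enough yet: tighten and retry
```

Run it with distinct masses and it terminates with the right answer; run it with equal masses and it spins forever – the disconnection of the parameter space showing up as non-termination.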

So we have deterministic experiments (assuming we can actually deterministically measure things to that precision, which is probably false, but I’m prepared to pretend we can for the sake of the example) that give us the answer we want, but they only work on a subset of our model: the quarter plane with the diagonal removed, which is no longer a connected set!

Fundamentally, this is a result about boundaries in our models of reality – any attempt to create a deterministic experiment will run into a set like the diagonal above. Suppose we had a deterministic experiment which was defined only on some subset of \(X\). Then we could find some \(o\) with \(U_o\) a non-empty proper subset of \(X\). Then the set \(\overline{U_o} \cap U_o^c\), where the closure of \(U_o\) meets its complement (and which is non-empty because \(X\) is connected), is a boundary like the diagonal above – on one side of it we know that \(f\) returns \(o\), on the other side we know that it doesn’t return \(o\), but in the middle, at the boundary, it is impossible for us to tell.
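In the two-parameter example this boundary is easy to write down explicitly: for the outcome \(o\) = “protons are more massive”, we have \(U_o = \{(u, v) : u > v\}\), its closure is \(\{(u, v) : u \geq v\}\), and so

\[ \overline{U_o} \cap U_o^c = \{(u, v) : u = v\} \]

is exactly the diagonal we had to remove above.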

What are the implications?

Well, practically, not much. Nobody believes that any of the experiments we’re currently performing is fully deterministic anyway.

But philosophically this is interesting to me for a couple of reasons:

  1. I for one was very surprised that such a trivial topological result had such a non-trivial physical interpretation.
  2. The idea that non-determinism might be an intrinsic property of measurement rather than a consequence of underlying physical non-determinism is not one that had ever previously occurred to me.
  3. We need to be very careful about boundaries in our models of reality, because we often can’t really tell if we’re on them or not.
  4. It may in fact be desirable to assume that interesting quantities are never equal unless we have a deep theoretical reason to believe them to be equal, which largely lets us avoid this problem except when our theory is wrong.

(As per usual, if you like this sort of thing, vote with your wallet and support my writing on Patreon! I mean, you’ll get weird maths posts either way, but you’ll get more weird maths posts, and access to upcoming drafts, if you do).

What makes someone real?

I’ve been listening to (and enjoying) the band Savlonic recently, and it’s prompted me to revisit some questions that have been at the back of my mind for a while.

See, the interesting thing about Savlonic (other than that they do fun music) is that it’s a band consisting entirely of fictional characters. The band members, Roscoe, Evangeline and Kandi, are not actually real people. They’re animated characters.

But the question, to me, is what does this actually mean? Why aren’t they real?

The band is unquestionably real in that it performs the function of a band: There are songs by Savlonic. AlphaGo may not be a person, but it plays Go and is thus a Go player. Savlonic may not be made of people, but it creates and performs songs and is thus a band.

Two of the individual band members have always mapped pretty closely onto real life people: Roscoe is voiced by Weebl and Evangeline by Sarah Darling. The third, Kandi, started out “life” entirely unvoiced, and when singing was just a pitch-shifted version of Sarah. These days Kandi is voiced by Katt Wade.

So what makes the members of the band fictional characters as opposed to just stage names? It’s hardly unusual to go by a stage name, and many (possibly almost all) people’s public personas are very different from their private personas. Do we consider stage personas to be fictional characters? What’s the difference between a full blown stage persona and someone just acting a bit different in public?

(Note: This is different from e.g. an actor in a movie. Luke Skywalker does not fulfill the role of an actual Jedi in the same way that Roscoe, Evangeline and Kandi fulfill the role of real band members).

The big thing that makes them fictional rather than stage personas is that, as far as I know, that’s what the actors and characters say they are.

I’m not sure how I feel about self-declaration as the sole determiner of reality though. It’s probably necessary, but is it sufficient? If we stuck a voice box on AlphaGo that shouted “AlphaGo is alive! No disassemble!” would AlphaGo now be a real person? It would certainly simplify the Turing test if so.

The Turing test won’t work as a determiner either: If we wanted to give Evangeline the Turing test it would be pretty easy – we’d just be talking to Sarah, who is probably quite good at convincing you that she’s a real person.

Is the fact that Sarah would be lying when she tells you her name is Evangeline the determiner of whether Evangeline is real? What’s a real name anyway? What if the character were called Evangeline because that’s a nickname Sarah often goes by? (To the best of my knowledge it is not). Is it the fictional biography? If your long interview never really touched on any biographical details would that somehow not make it a valid test of reality? What distinguishes a fictional person from a real person who is just lying?

I don’t really have any conclusions, except to note that “real person” seems to be a fuzzier concept than might naturally be assumed. I do think Savlonic fall on the “not actually real people” side, but I’m not entirely sure why I think that.


A genie brokers a deal

Content note: This piece contains some fairly non-graphic discussion of death and serious injury.

This is effectively a trolley problem inspired by my last post. It’s not directly in support of the same point and is perhaps slightly orthogonal to it.

You wake up, slightly disoriented. You’re not really sure where you are or how you got there, but you have this weird metal band firmly fastened around the elbow of your non-dominant arm.

A genie appears. You can tell that it’s a genie because it’s bright blue and has smoke instead of legs and that’s what genies look like. Your situation now makes more sense. Or possibly less.

“Hello”, says the genie. “I’m here to offer you a deal. You see that band on your arm? At a thought, I can make that band constrict and cut off your arm. Don’t worry, it comes with anaesthetic and a thing for sealing the wound, so it won’t hurt at all and there’s no risk of complications or injury. You’ll just lose the arm.”

This is not entirely reassuring.

“But it’s OK! Once I’ve finished explaining the situation to you you’ll have the option to just leave. I’ll take the band off, no harm will come to you, and I’ll return you home just as you were.”

This is a little more reassuring.

“But I’d like to know: What if I were to offer you a deal? You can have $5 million in exchange for letting me contract the band and cut off your arm. What do you say?”

You have a think about it. $5 million could buy you a lot – you could invest it and live relatively comfortably off the interest alone. It would let you do all sorts of things with your life that you’ve always wanted to and couldn’t. Sure, your arm is useful, but it’s not that useful, and a small fraction of $5 million will buy you some pretty good prosthetics. You say you would take the deal.

“Good to know. And what if I were to offer you $1 million?”

This seems like less of a good deal but you acknowledge that you would probably still take it.

“500k?”

This is small enough that you think on balance it’s probably not worth the loss of the arm.

“How about 250k?”

You indicate that your preferences over quantities of money are monotonically increasing in the numerical value, and that as a good rational being you have transitive preferences over events, so decreasing the amount of money is not going to make this more favourable. You do your best not to sound patronising while explaining this to the all-powerful being who is offering to cut off your arm.

“Fine, fine. Just checking. Humans are so irrational, I wanted to be sure. Now, let’s talk about something else.”

You’re still quite interested in the subject of the $5 million and whether you get to keep your arm, but figure you can go along with the change of subject for now.

The genie waves his hand and an image appears in the air. It is of the genie and another person. The person in question has a band much like yours, but this one is around their neck.

The genie explains. “When I found this person they were about to be hit by a train. Would have squashed them flat. So I rescued them! Wasn’t that nice of me?”

You decide to reserve judgement until you hear more about the band.

“Now, watch this conversation I had with them!”

The image starts to move and sound comes from it. You’d be very impressed by the genie’s powers if you hadn’t previously watched television with much better resolution.

“Hello! I’d like to offer you a deal.”

“Holy shit a genie”

“Yes, yes. Well observed. Now, about this deal. You notice that band you’ve got around your neck? Unless I take it off you, it will constrict and cut your head off. Now, I’m not that familiar with human physiology, but I believe this will be fatal.”

“Aaaaaaaaaaaaah. Get it off! Get it off!”

“But I want something in exchange for removing it. I’m going to take all your money. Does that seem fair?”

“Yes, anything! I just don’t want to die!”

“Are you sure? You’ve got that will and all…”

“I don’t care about the will! Just take off the band! Please!”

“Hmm. I’ll get back to you on that.”

The genie in the image disappears and the person remains behind, sobbing pitifully. Back with you, the genie waves his hand and the image freezes again.

“Now,” the genie explains, “by a complete coincidence, this person’s money comes to almost exactly $5 million. So here’s the deal I’m offering you: one of these bands is going to close. It’s up to you which one. It is also up to you how much of their money you wish to take. You can either choose to walk away from this and I’ll kill them, or you can choose to lose your arm and take any amount of their money up to $5 million, and I’ll arrange that too. Your call. What will it be?”


Now, questions for the reader:

  1. Are you morally obligated to give up your arm?
  2. If you do give up your arm, how much money is it ethically permissible for you to take?
  3. Regardless of what you should answer, would your answer change if you knew that the details of the deal would be publicly broadcast?
  4. Has the genie acted ethically? Note that every possible outcome, regardless of what you choose, is no worse than the status quo before the genie’s intervention, and may be better.

A satisfying resolution to trolley problems

Note: This post is based on a conversation I had with Dave about a month ago. I was recently reminded of it by a discussion on moral philosophy and thought it might be worth writing up.

Certain types of moral philosophers are very keen on trolley problems. You may have encountered these. They’re endless variants on approximately the following problem:

A runaway trolley is hurtling down a track. If it continues on its original route it will hit and kill 5 people. You have the opportunity to pull a lever which will divert it onto another route where it will only kill one person. No, you can’t be a smartarse and come up with a clever solution which avoids these options, because reasons.

I’m not a fan of trolley problems. I think they oversimplify the landscape in which you make moral decisions. Most decisions are not this clear cut: not only do you have uncertainty about the outcomes (which you can at least reason about), you also shouldn’t entirely trust your own reasoning process in these circumstances, because you’re in a heightened emotional state which will introduce weird biases. So although you, the person being asked this question, might have perfect knowledge about the situation and know that you are detached from it and acting without emotional involvement, the hypothetical you standing there with a lever and a difficult decision to make has no such reassurances. Have they missed something? Are they sure their judgement isn’t horribly compromised by the fact that they’re in a panic? Do they even have time to think things through, or must they act or not act before the decision is taken out of their hands?

Still, thinking through what you should do in these sorts of scenarios before you are in them is exactly what you need in order to sharpen your moral sense and ensure that you act correctly without having to think things through, right?

So here you are. You have two options. You can choose not to act and let five people die, or you can act and kill one person, saving five people’s lives.

I think there are legitimate and consistent moral systems in which you choose not to act. I think they’re not the sort you want to apply when designing larger scale systems of action, but I think as a personal choice it’s completely valid and it might well be the option I’d take if you put me at the lever of an actual trolley.

What I want to talk about is the case where you choose to act. I think there is a strictly superior option, one which cannot be finagled away by redesigning the question, that anyone who thinks they should pull the lever ought to follow instead.

Which is that you pull the lever and then you turn yourself in to the relevant authorities for having committed murder.

Oh, you will (and probably should) get a reduced sentence because of all those lives you saved, but you’ve taken a life, and you should be treated accordingly.

Why should you be punished by the system when you’ve done nothing wrong and in fact made the correct moral choice?

Well, why should the person whose life you took be punished by your actions when they’ve done nothing wrong and were just an unfortunate innocent bystander?

That’s not a judgement, it’s just an analogy. I am pointing out that you have already committed yourself to a system in which actions are taken not because of some intrinsic justice or fairness, but because they produce the greater good.

Let’s adjust the problem slightly. Suppose you’re tied to the track along with the other person who is going to get killed when you pull the lever. Now two people will get killed when you do, and one of them is you. Five versus two – still a great deal from a moral calculus perspective, right? But I bet you’re going to think a lot harder about it.

And this is why I think it works out: when you find yourself in a situation where your reasoning is suspect, it’s very easy to think that your actions are justified if they help, and that that gets you off the hook for their consequences. The fact that you will always be held accountable for the consequences of your actions creates the right sort of barrier to that: by requiring an element of self-sacrifice in order to cause harm, it forces you to think about that justification much harder, and maybe decide that on balance you don’t want to have to make this call.

This doesn’t necessarily produce a better outcome in every case. Indeed, in every case where acting was the right thing to do, it produces a worse outcome than pulling the lever and getting off without punishment. What it does is produce a better outcome in aggregate, because it makes it harder for people to decide that the ends they want justify any means, and it helps to put the brakes on the excesses people commit because they decide that it was the right thing to do.
