Category Archives: Performing philosophy without a license

Reality is a countably infinite Sierpiński cube

I was thinking out loud on Twitter about what weird beliefs I hold, after realising (more or less as I was writing it) that my philosophical positions are practically banal (at least to anyone who has thought about these issues a bit, whether or not they agree with me).

I came up with a couple, but probably the most interesting (if very, very niche) one I thought of is that the one true and accurate mathematical model of reality is a closed, connected subset of the countably infinite Sierpiński cube.

I consider this opinion not that weird and, more importantly, obviously correct. I’m aware it’s a niche position, though, so hear me out.

Before we start, a quick note on the nature of reality. I am being deliberately imprecise about what I mean by “reality” here, and basically mean “any physical system”. This could be “life the universe and everything” and we are attempting to solve physics, or it could be some toy restricted physical system of interest and we are trying to nail down its behaviour. This post applies equally well to any physical system we want to be able to model.

Consider an experiment. For convenience, let’s pretend experiments can be deterministic – you can easily work around the impossibility by making countably infinitely many copies of the experiment and treating the nth copy as the answer you got the nth time you ran it.

Also for simplicity we’ll assume that experiments can only have one of two outcomes (this is no loss of generality as long as experiments can only have finitely many outcomes – you just consider the finitely many experiments of the form “Was the outcome X?” – and if they have infinitely many outcomes you still need to ultimately make a finite classification of the result and so can consider the experiment composed with that classification).

There are three sensible possible outcomes you could have here:

  • Yes
  • No
  • I don’t know, maybe?

Physical experiments are inherently imprecise – things go wrong in your experiment, in your setup, in just about every bloody thing – so an experiment whose outcome gives you total certainty is implausible, and we can ignore that possibility.

Which leaves us with experiments where one of the answers is maybe. It doesn’t matter which answer the other one is (we can always just invert the question).

So we’ve run an experiment and got an answer. What does that tell us about the true state of reality?

Well, whatever reality is, we must have some notion of “an approximate region” – all of our observations of reality are imprecise, so there must be some notion of precision to make sense of that.

So reality is a topological space.

What does the result of a physical experiment tell us about the state of reality?

Well if the answer is “maybe” it doesn’t tell us anything. Literally any point in reality could be mapped to “maybe”.

But if the answer is yes then this should tell us only imprecisely where we are in reality. i.e. the set of points that map to yes must be an open set.

So an experiment is a function from reality to {yes, maybe}. The set of points mapping to yes must be an open set.

And what this means is that experiments are continuous functions to the set {yes, maybe} endowed with the Sierpiński topology. The set {yes} is open, and the whole set and the empty set are open, but nothing else is.

Now let’s postulate that if two states of reality give exactly the same answer on every single experiment, they’re the same state of reality. This is true in the same sense that existing is the thing that reality does – a difference that makes no difference might as well be treated as if it is no difference.

So what we have is the following:

  1. Any state of reality is a point in the cube \(S^E\) where \(E\) is the set of available experiments and \(S = \{\mathrm{yes}, \mathrm{maybe}\}\).
  2. All of the coordinate functions are continuous functions when \(S\) is endowed with the Sierpinski topology.

This is almost enough to show that reality can be modelled as a subset of the Sierpiński cube, but not quite: there are many topologies compatible with this – reality could, for instance, have the discrete topology.

But we are finite beings. And what that means is that at any given point in time we can have observed the outcomes of at most finitely many experiments.

Each of these experiments determines where we are only up to an open set in some coordinate of our cube, so the set the experiments have pinned us down to is an intersection of finitely many open sets, and thus is open in the product topology on that cube.

Therefore the set of states of reality that we know we are in is always an open set in the product topology. So this is the “natural” topology on reality.
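As a concrete toy illustration (my own sketch, with an invented family of states – nothing below is part of the original argument beyond the definitions), finitely many “yes” observations carve out exactly a basic open set of the product topology:

```python
# Toy states of reality, indexed by n: state n answers "yes" to
# experiments 0..n-1 and "maybe" thereafter (an invented family,
# purely for illustration).
def answer(n, e):
    return "yes" if e < n else "maybe"

# Observing finitely many "yes" outcomes pins us down to a basic open
# set of the product topology: the set of all states consistent with
# those observations.
def compatible_states(observed_yes, state_indices):
    return [n for n in state_indices
            if all(answer(n, e) == "yes" for e in observed_yes)]

# After seeing "yes" on experiments 0 and 2, states 3..9 remain possible.
print(compatible_states({0, 2}, range(10)))
```

Each new “yes” observation intersects one more coordinate’s open set with what we already knew, which is exactly why the knowledge set stays open in the product topology.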

So reality is a subset of a Sierpiński cube. We now have to answer two questions to get the rest of the way:

  • How many dimensions does the cube have?
  • What sort of subset is it?

The first one is easy: The set of experiments we can perform is definitely infinite (we can repeat a single experiment arbitrarily many times). It’s also definitely countable, because any experiment we can perform is one we can describe (and two experiments are distinct only up to our ability to describe that distinction), and there are only countably many sentences.

So reality is a subset of the countably infinite dimensional Sierpiński cube.

What sort of subset?

Well that’s harder, and my arguments for it are less convincing.

It’s probably not usually the whole set. It’s unlikely that reality contains a state that is just constantly maybe.

It might as well be a closed set, because if it’s not we can’t tell – there is no physical experiment we can perform that will determine that a point in the closure of reality is not in reality, and it would be aesthetically and philosophically displeasing to have aphysical states of reality that are approximated arbitrarily well.

It’s usually going to be a connected set. Why? Well, because you’re “in” some state of reality, and you might as well restrict yourself to the path component of that state – if you can’t continuously deform from where you are to another state, that state probably isn’t interesting to you even if it in some sense exists.

Is it an uncountable subset of the Sierpinski cube? I don’t know, maybe. Depends on what you’re modelling.

Anyway, so there you have it. Reality is a closed, connected subset of the countably infinite dimensional Sierpiński cube.

What are the philosophical implications?

Well, one obvious philosophical implication is that reality is compact, path connected, and second countable, but may not be Hausdorff.

(You should imagine a very very straight face as I delivered that line)

More seriously, the big implication for me is on how we model physical systems. We don’t have to model physical systems as the Sierpiński cube. Indeed we usually won’t want to – it’s not very friendly to work with – but whatever model we choose for our physical systems should have a continuous function (or, really, a family of continuous functions to take into account the fact that we fudged the non-determinism of our experiments) from it to the relevant Sierpiński cube for the physical system under question.

Another thing worth noting is that the argument is more interesting than the conclusion, and in particular the specific embedding is more important than the mere fact that an embedding exists. In fact every second countable T0 topological space embeds in the Sierpiński cube, so the conclusion boils down to the fact that reality is a T0, second countable, compact, and connected (path connected, really) topological space (which is more or less the set of assumptions we used!).

But I think the specific choice of embedding matters more than that, and so does the fact that the coordinates correspond to natural experiments we can run.

And, importantly, any decision we make based on that model needs to factor through that function. Decisions are based on a finite set of experiments, and anything that requires us to be able to know our model to more precision than the topology of reality allows us to is aphysical, and should be avoided.

A statement of philosophical principles

Epistemic status: This is my philosophy. There are many like it, but this one is mine. It is not anything especially unusual or particularly sophisticated.

I’ve noticed in a couple of discussions recently that my philosophical positions are both somewhat opposed to what might be considered normal (at least among non-philosophers) and may seem internally contradictory.

It also occurred to me that I’ve never really properly explained this to people, and that it might be worth writing up a short position statement.

So, here is the philosophical basis on which I live my life. You can think of it as a rather extreme combination of moral relativism and mathematical formalism.

  1. Words, and the concepts behind them, are made up and have no objective meaning or basis.
  2. Thus statements cannot really be “true” or “false” (though they may be deductively true in the context of a certain set of premises and logic).
  3. Reality exists only because if it doesn’t then exist is not a useful word, so existing is defined mostly by reference as the thing that reality does.
  4. Perception of reality is intrinsically flawed and what we perceive may be arbitrarily far from what “really” exists (if we accept our senses, then empiricism shows us that it’s quite far. If we don’t accept our senses, we’re already there).
  5. We are probably not ever going to be capable of accurately modelling or predicting the universe. Even if it’s in principle possible we’re probably not smart enough.
  6. All value systems are subjective and culturally determined.
  7. Morality is some complex mix of value system and prediction, so it’s certainly subjective and culturally determined, but also probably beyond our ability to actually formalise in any useful way even once we’ve already pinned down a value system.

…but if we take any of that too seriously we’ll never get anything useful done, and even if there’s no objective value to getting useful things done, subjectively I’m quite fond of it, so…

  1. We should use words in a way that achieves a broad consensus and seems to be useful for talking about what we observe.
  2. Accept a reasonably large body of knowledge as a useful working premise, but occasionally backtrack and note that you’re explicitly doing that when it leads you astray or causes disagreements.
  3. Treat reality as if it exists in a naive objective sense, because it hurts when you don’t.
  4. Don’t worry about it too much. If there’s an all-powerful demon faking your perception of reality, there’s probably not much you can do about it. Also see previous item – that reality which you perceive exists (even if any given perception may not be valid), because otherwise exists isn’t a very useful word.
  5. We can do a surprisingly good job at our current level, and just because we can never achieve perfection doesn’t mean we shouldn’t try to improve what we’ve got.
  6. But I like mine, and it includes a term for a certain amount of forcing itself on other people (“hurting people is bad and I don’t really care if you think you have a culturally valid reason for doing it”).
  7. Doing the right thing is hard. Do the best you can. Don’t sweat the grand theory of morality too much, but pay attention when it comes up.

So as a result I temper the extreme relativist stance with a solid dose of pragmatic instrumental reasoning and pretend that I believe in philosophical naive realism because it’s much better at getting the job done than refusing to even acknowledge that such a thing as a job might exist and that it could be done if it did.

A lot of these theses are for me much like the way in which I am an atheist: I consider them to be obviously correct as a sort of “ground state” truth. It’s not that they’re necessarily right, it’s just that in the absence of evidence to the contrary they seem like a good default position, and nobody has provided evidence that I find convincing (and in some cases I’m not sure such evidence could exist even in principle). Maybe there’s a platonic realm of ideals after all, but formalism works perfectly well without it, and if such a thing existed how could we possibly know?

I probably got very excited and/or angsty about all this at one point as a teenager, but eventually I realised that maybe I just don’t care that much. Does it matter if the table I stubbed my toe on really exists? Does it matter if there is actually such a thing as a table? Either way it still hurts, and if I want something to eat my meals on I’m going to struggle to buy one from ikea without acknowledging the concept of a table. For most things I actually care about, life is just easier if I go along with naive realism.

But it’s important to me to understand that I’m just pretending. Particularly because it makes it much easier to acknowledge when I’m wrong (which I’m not always good at, but that’s not surprising. Just because I have a philosophy doesn’t mean I’m good at following it), and to understand where other people are coming from – politics is much easier to understand if you understand that value systems are subjective and arbitrary. No reason that I have to accept those values, mind you (my culturally determined subjective values frequently strongly prefer that I don’t), but it’s helpful to know where they could be coming from.

And in general I find there’s a certain useful humility to affirming that I have no access to any objective source of truth nor ever will, and that that’s OK.


Truth and/or Justice

Disclaimer: This post is obscure in places. I apologise for that. Reasons.

Everyone likes to think they’re the protagonist of their own story. And as a hero, we need a cause. On the left and those morally aligned with the left, that can often roughly be summed up as “Truth and Justice” (we generally leave The American Way to people who wear their underpants on the outside).

(Some people are more honest, or less moral, and instead are fighting for survival, or for self-interest. This post is not about those people. Some people are less morally aligned with the left and fight for things like purity and respect for authority too. As a devout moral relativist, I acknowledge the validity of this position, but I still struggle to take it seriously. If this is you, you may get less out of this post but the broad strokes should still apply).

Unfortunately, you can’t optimise for more than one thing. Truth and Justice is not a thing you can fight for. You can fight for truth, you can fight for justice, you can fight for a weighted sum of truth and justice, but you cannot reliably improve both at once.

Often we can ignore this problem. Truth and Justice frequently work very well together. But at some point (hopefully metaphorically) the murderer is going to turn up at your door and you’re going to have to decide whether or not to lie to them. Unless you’re prepared to go full Kant and argue that lying to protect another is unjust as well as untrue, you’ll have to make a trade off.

So what’s it going to be? Truth or Justice?

It’s not an absolute choice of course – depending on the circumstances and the results of a local cost/benefit analysis, almost all of us will sometimes choose truth, sometimes justice, and sometimes we’ll make a half-arsed compromise between the two which leaves both truth and justice grumbling but mostly unbloodied.

But I think most of us have a strong bias one way or the other. This may not be inherent – it’s probably in large part driven by factionalisation of the discourse space – but certainly among my main intellectual and political influences there’s at least one group who heavily prioritises truth and another who heavily prioritises justice.

That’s not to say we don’t care about the other. Caring about justice doesn’t make you a liar, caring about truth doesn’t make you heartless. It’s just that we care about both, but we care about one more.

Personally, I tend to flip flop on it. I find myself naturally aligned with truth (I hate lying, both being lied to and lying myself), but I think I’ve historically chosen to align myself with people who prefer justice, usually by denying that the trade-off exists.

But recently I’ve been feeling the pendulum swing the other way a bit. If you’ve wondered why I’ve gone quiet on a bunch of subjects, that’s part of what’s been going on.

One of the reasons I think about this a bunch is in the context of labelling.

A long time ago now I wrote “You Are Not Your Labels”, about the problem of fuzzy boundaries and how we tend to pick a particular region in the space of possibility that includes us, use a label for that region, and then defend the boundaries of that label zealously.

I still broadly stand by this. You are not your labels. I’m certainly not my labels.

But we might be.

One of the places where truth and justice play off against each other is when you’re being attacked. If you’re under fire, now is not really the time to go “Well the reality is super complicated and I don’t really understand it but I’m pretty sure that what you’re saying is not true”. Instead, we pick an approximation we can live with for now and double down on it with a high degree of confidence.

There probably isn’t “really” such a thing as a bisexual (I’m not even wholly convinced there’s such a thing as a monosexual) – there’s a continuous multi-dimensional space in which everyone lies, and we find it operationally useful to have words that describe where we are relative to some of the boundary points in that space that almost nobody experiences perfectly.

There are as many distinct experiences of being bisexual as there are bisexuals (though, as I keep finding out, being extremely confused and annoyed by this fact seems to be a pretty common experience for us), but it sure is difficult to have an “It’s Complicated” visibility day, and it seems surprisingly easy for people to forget we exist without regular reminders.

The approximation isn’t just useful for communicating infinite complexity in a finite amount of time, it’s useful because we build solidarity around those approximations.

(This is literally why I use the label bisexual incidentally. I’m much happier with just saying “It’s complicated and unlikely to impact your life either way and when it does I would be happy to give you a detailed explanation of my preferences” but that is less useful to both me and everyone else, so I no longer do)

Another truth/justice trade off in the LGBT space is “Born this way”. I am at this point confident of precisely two things about gender and sexuality:

  • They are probably the byproduct of some extremely complicated set of nature/nurture interactions like literally everything else in the human experience.
  • Anyone who currently expresses confidence that they know how those play out in practice might be right for the n=1 sample of themself (I am generally very skeptical of people’s claims that they understand what features are natural things they were born with and what are part of their upbringing. I present the entire feminist literature on privilege as evidence in defence of my skepticism, but I also don’t find it useful or polite to have arguments with people about their personal lived experiences), but are almost certainly wrong, or at least unsupported in their claim that this holds in generality.

I would be very surprised to learn that nobody was born this way, and I have an n=1 personal data point that there are bisexuals who would probably have been perfectly able to go through life considering themselves to be straight if they hadn’t noticed that other options were available. I think it likely that there’s a spectrum of variability in between, I just don’t know.

I think among ourselves most LGBT people are more than happy to admit that this stuff is complicated and we don’t understand it, but when confronted with people who just want us to be straight and cis and consider us deviants if we dare to differ on this point, born this way is very useful – it makes homophobia less a demand to conform to societal expectations (which would still be wrong, but is harder to convince people of) and more a call for genocide. The only way to stop LGBT being LGBT is to stop us existing, and that’s not what you mean, right?

(Historically there have been many cases where that was exactly what they meant, but these days it’s harder to get away with saying so even if you think it).

Even before the latest round of fake news we’ve had in the last couple of years, demanding perfect truth in politics seems like a great way to ensure that political change belongs to those less scrupulous than you. At the absolute minimum we need this sort of lies-to-normies to take complex issues and make them politically useful if we want the world to get better.

So: Truth or Justice?

To be honest, I still don’t know. My heart says truth, but my head says justice, which I’m almost certain is hilariously backwards and not how it’s supposed to work at all, but there you go. This is complicated, and maybe “Truth or Justice” is another of those labelling things that don’t really work for me. Hoisted by my own petard.

My suspicion though is that the world is a better place if not everyone is picking the exact same trade off – different people are differently placed for improving each, and it’s not all that useful to insist that someone inclined towards one should be strongly prioritising the other. It is useful to have both people for whom justice is their top priority, and people for whom truth is their top priority, and a world where we acknowledge only one point on the spectrum as valid is probably one that ends up with less truth and less justice than one where a wider variety is pursued. Monocultures just generally work less well in the long run, even by the standards of the monoculture.

Given that, it seems like a shame that right now most of the justice prioritising people seem to think the truth prioritising people are literally Hitler and vice versa.

(To be fair, the truth prioritising people probably think the justice prioritising people are figuratively Hitler).

Calls for “Why can’t we get along?” never go well, so I won’t make one here – though you could obviously read into this article that that’s what I want, even without this sentence as a disclaimer. Instead I’ll end with a different call to action.

I wish we would all be better about acknowledging that this trade-off exists, and notice when we are making it, regardless of what we end up deciding about the people who have chosen a different trade-off.

If you’re justice-prioritising you might not feel able to do that in public because it would detract from your goals. That’s fine. Do it in private – with a couple close friends in the same sphere to start with. I’ve found people are generally much more receptive to it than you might think.

If you’re truth-prioritising, you have no excuse. Start talking about this in public more (some of you already are, I know). If what can be destroyed by the truth should be, there is no cost to acknowledging that the truth is sometimes harmful to others and that this is a trade-off you’re deliberately making.

Regardless of what we think the optimal trade-off between truth and justice is, I’m pretty sure a world that is better on both axes than the current one is possible. I’m significantly less sure that we’re on anything resembling a path to it, and I don’t know how to fix that, but I’d like to at least make sure we’re framing the problem correctly.

An epistemic vicious circle

Let’s start with an apology: This blog post will not contain any concrete examples of what I want to talk about. Please don’t ask me to give examples. I will also moderate out any concrete examples in the comments. Sorry.

Hopefully the reasons for this will become clear and you can fill in the blanks with examples from your own experience.

There’s a pattern I’ve been noticing for a while, but it happens that three separate examples of it came up recently (only one of which involved me directly).

Suppose there are two groups. Let’s call them the Eagles and the Rattlers. Suppose further that the two groups are roughly evenly split.

Now suppose there is some action, or fact, on which people disagree. Let’s call them blue and orange.

One thing is clear: If you are a Rattler, you prefer orange.

If you are an Eagle however, opinions are somewhat divided. Maybe due to differing values, or different experiences, or differing levels of having thought about the problem. It doesn’t matter. All that matters is that there is a split of opinions, and it doesn’t skew too heavily orange. Let’s say it’s 50/50 to start off with.

Now, suppose you encounter someone you don’t know and they are advocating for orange. What do you assume?

Well, it’s pretty likely that they’re a Rattler, right? 100% of Rattlers like orange, and 50% of Eagles do, so there’s a two thirds chance that a randomly picked orange advocate will be Rattler. Bayes’ theorem in action, but most people are capable of doing this one intuitively.
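The arithmetic is just Bayes’ theorem, and is easy to check mechanically using the numbers assumed above (an even group split, with 100% of Rattlers and 50% of Eagles advocating orange):

```python
# Prior: the two groups are evenly split.
p_rattler = 0.5
p_eagle = 0.5

# Likelihood of advocating orange, given group membership.
p_orange_given_rattler = 1.0
p_orange_given_eagle = 0.5

# Bayes' theorem: P(Rattler | advocates orange).
p_orange = p_orange_given_rattler * p_rattler + p_orange_given_eagle * p_eagle
posterior = p_orange_given_rattler * p_rattler / p_orange
print(posterior)  # 0.666...: two thirds of orange advocates are Rattlers
```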

And thus if you happen to be an Eagle who likes orange, you have to put in extra effort every time the subject comes up to demonstrate that. It’ll usually work – the evidence against you isn’t that strong – but sometimes you’ll run into someone who feels really strongly about the blue/orange divide and be unable to convince them that you want orange for purely virtuous reasons. Even when it’s not that bad it adds extra friction to the interaction.

And that means that if you don’t care that much about the blue/orange split you’ll just… stop talking about it. It’s not worth the extra effort, so when the subject comes up you’ll just smile and nod or change it.

Which, of course, brings down the percentage of Eagles you hear advocating for orange.

So now if you encounter an orange advocate they’re more likely to be Rattler. Say 70% chance.

Which in turn raises the amount of effort required to demonstrate that you, the virtuous orange advocate, are not in fact Rattler. Which raises the threshold of how much you have to care about the issue, which reduces the fraction of Eagles who talk in favour of orange, which raises the chance that an orange advocate is actually Rattler, etc.
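To see the spiral quantitatively, here is a deliberately crude toy model (my own invention, not something claimed in the argument above): give each Eagle a “caring” level uniform on [0, 1], and have them advocate orange only when caring exceeds the posterior probability that an orange advocate is a Rattler. The advocating fraction then shrinks every round:

```python
# Posterior that an orange advocate is a Rattler, given that a fraction
# p of Eagles currently advocate orange (even group split, and all
# Rattlers advocate orange).
def posterior_rattler(p):
    return 1.0 / (1.0 + p)

# Eagles whose caring (uniform on [0, 1]) exceeds the posterior speak
# up, so the next round's advocating fraction is 1 - 1/(1+p) = p/(1+p).
def step(p):
    return 1.0 - posterior_rattler(p)

p = 0.5
trajectory = [p]
for _ in range(6):
    p = step(p)
    trajectory.append(p)
print(trajectory)  # 1/2, 1/3, 1/4, 1/5, ...: orange is slowly ceded
```

Under these (entirely made-up) assumptions the fraction decays like 1/n – the posterior against each remaining advocate keeps strengthening, which is the vicious circle.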

The result is that when the other side is united on an issue and your side is divided, you effectively mostly cede an option to the other side: Eventually the evidence that someone advocating for that option is a Rattler is so overwhelming that only weird niche people who have some particularly strong reason for advocating for orange despite being an Eagle will continue to argue the cause.

And they’re weird and niche, so we don’t mind ostracising them and considering them honorary Rattlers (the real Rattlers hate them too of course, because they still look like Eagles by some other criteria).

As you can probably infer from the fact that I’m writing this post, I think this scenario is bad.

It’s bad for a number of reasons, but one very simple reason dominates for me: Sometimes Rattlers are right (usually, but not always, for the wrong reasons).

I think this most often happens when the groups are divided on some value where Eagles care strongly about it, but Rattlers don’t care about that value either way, and vice versa. Thus the disagreement between Rattler and Eagles is of a fundamentally different character: Blue is obviously detrimental to the Rattlers’ values, so they’re in favour of orange. Meanwhile the Eagles have a legitimate disagreement not over whether those values are good, but over the empirical claim of whether blue or orange will be better according to those values.

Reality is complicated, and complex systems behave in non-obvious ways. Often the obviously virtuous action has unfortunate perverse side effects that you didn’t anticipate. If you have ceded the ground to your opponent before you discover those side effects, you have now bound your hands and are unable to take what now turns out to be the correct path because only a Rattler would suggest that.

I do not have a good suggestion for how to solve this problem, except maybe to spend less time having conversations about controversial subjects with people whose virtue you are unsure of and to treat those you do have with more charity. A secondary implication of this suggestion is to spend less time on Twitter.

But I do think a good start is to be aware of this problem, notice when it is starting to happen, and explicitly call it out and point out that this is an issue that Eagles can disagree on. It won’t solve the problem of ground already ceded, but it will at least help to stop things getting worse.


Like my writing? Why not support me on Patreon! Especially if you want more posts like this one, because I mostly don’t write this sort of thing any more if I can possibly help it, but might start doing so again given a nudge from patrons.

Determinism is topologically impossible

I’ve been doing some work on topological models of decision making recently (I do weird things for fun) and a result popped out of it that I was very surprised by, despite it being essentially just a restatement of some elementary definitions in topology.

The result is this: Under many common models of reality, there are no non-trivial deterministic experiments we can perform even if the underlying reality is deterministic.

The conclusion follows from two very simple assumptions (either of which may be wrong but both of which are very physically plausible).

  1. We are interested in some model of reality as a connected topological space \(X\) (e.g. \(\mathbb{R}^n\), some Hilbert space of operators, whatever).
  2. No experimental outcome can give us infinite precision about that model. i.e. any experimental outcome only tells us where we are up to membership of some open subset of \(X\).

Under this model, regardless of the underlying physical laws, any fully deterministic experiment tells us nothing about the underlying reality.

What does this mean?

Let \(X\) be our model of reality and let \(\mathcal{O}\) be some set of experimental outcomes. A deterministic experiment is some function \(f: X \to \mathcal{O}\).

By our finite precision assumption, each of the sets \(U_o = \{x \in X: f(x) = o\}\) is open. But if \(f(x) = o\) and \(o \neq o'\) then \(f(x) \neq o'\), so \(x \not\in U_{o'}\). Therefore the sets are disjoint.

But certainly \(x \in U_{f(x)}\), so they also cover \(X\).

But we assumed that \(X\) is connected. So we can’t cover it by disjoint non-empty open sets. Therefore at most one of these sets is non-empty, and thus \(X = U_o\) for some \(o\). i.e. \(f\) constantly takes the value \(o\) and as a result tells us nothing about where we are in \(X\).
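The argument is simple enough to verify mechanically. This sketch (my own construction, using an arbitrary three-point space) enumerates every map from a small connected topological space to a discrete two-outcome space and confirms that only the constant maps have all preimages open:

```python
from itertools import product

# A tiny connected model of reality: the open sets form a chain, so X
# cannot be split into two disjoint non-empty open sets.
X = [0, 1, 2]
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset(X)]

outcomes = ["a", "b"]  # a discrete space of experimental outcomes

def finite_precision(f):
    """f satisfies the finite precision assumption iff every preimage
    f^{-1}(o) is an open set."""
    return all(frozenset(x for x in X if f[x] == o) in opens
               for o in outcomes)

# Enumerate all 2^3 deterministic experiments X -> outcomes.
experiments = [dict(zip(X, vals)) for vals in product(outcomes, repeat=len(X))]
valid = [f for f in experiments if finite_precision(f)]

# Only the two constant experiments survive: they tell us nothing.
assert all(len(set(f.values())) == 1 for f in valid)
print(len(valid))  # 2
```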

Obviously this is a slightly toy model, and the conclusion is practically baked into the premise, so it might not map to reality that closely.

But how could it fail to do so?

One way it might seem to fail, but doesn’t, is if the underlying reality is “really” disconnected. That doesn’t matter, because this is not a result about the underlying reality – it’s a result about models of reality, and most of our models are connected regardless of whether the underlying reality is. But if our model itself is somehow disconnected (e.g. we live in some simulation by a cellular automaton) then this result doesn’t apply.

It could also fail because we have access to experiments that grant us infinite precision. That would be weird, and certainly doesn’t correspond to any sort of experiment I know about – mostly the thing we measure reality with is other reality, which tends to put a bound on how precise we can be.

It could also fail to be interesting in some cases. For example, if our purpose is to measure a mathematical constant that we’re not sure how to calculate, then we actively want the result of our experiment to be a constant function. (Note that this only applies to mathematical constants: physical constants, which vary depending on where in the space of our model we are, don’t get this get-out clause.)

There are also classes of experiments that don’t fall into this construction. For example, it might be that \(\mathcal{O}\) itself has some topology on it, our experiments are actually continuous functions into \(\mathcal{O}\), and we can’t actually observe which point we get in \(\mathcal{O}\), only its value up to some open set. Indeed, the experiments we’ve already considered are the special case where \(\mathcal{O}\) is discrete. The problem with this is that \(f(X)\) is then a connected subset of \(\mathcal{O}\), so we’ve just recursed to the problem of determining where we are in \(\mathcal{O}\)!

You can also have experiments that are deterministic whenever they work but tell you nothing when they fail. For example, you could have an experiment that returns \(1\) or \(0\), and whenever it returns \(1\) you know you’re in some open set \(U\), but when it returns \(0\) you might or might not be in \(U\) – you have no idea. This corresponds to the above case of \(\mathcal{O}\) having a topology, where we let \(\mathcal{O}\) be the Sierpiński space. This works by giving up on the idea that \(0\) and \(1\) are “distinguishable” elements of the output space: under this topology the set \(\{0\}\) is not open, so the set \(U_0\) need not be open either, and the connectivity argument falls apart.
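The Sierpiński topology on \(\{0, 1\}\) is small enough to write down and check directly. A minimal sketch (the representation as Python frozensets is mine):

```python
from itertools import combinations

# The Sierpinski space on outcomes {0, 1}: the only open sets are
# {}, {1}, and {0, 1}.  "Returned 1" is an observable event;
# "returned 0" is not.
points = frozenset({0, 1})
opens = {frozenset(), frozenset({1}), points}

# Topology axioms: the empty set and the whole space are open, and
# the opens are closed under unions and intersections.
assert frozenset() in opens and points in opens
for a, b in combinations(opens, 2):
    assert a | b in opens and a & b in opens

# The singleton {0} is not open, so U_0 need not be open either, and
# the connectivity argument no longer applies.
assert frozenset({0}) not in opens
print("Sierpinski topology checks out")
```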

And finally, and most interestingly, our experiment might just not be defined everywhere.

Consider a two parameter model of reality: e.g. our parameters are the mass of a neutron and the mass of a proton (I know these vary because of binding energy or something, but let’s pretend they don’t for simplicity of example). So our model space is \((0, \infty)^2\) – a model which is certainly connected, and it’s extremely plausible that we cannot determine each value to more than finite precision. Call these parameters \(u\) and \(v\).

We want an experiment to determine whether protons are more massive than neutrons.

This is “easy”. We perform the following sequence of experiments: we measure each of \(u\) and \(v\) to within \(\frac{1}{n}\). If the measured values satisfy \(|u - v| > \frac{2}{n}\), then the true values must differ in the same direction, so we know the masses precisely enough to answer the question and can stop and return the answer. If not, we increase \(n\) and try again.

Or, more abstractly, we know that the sets \(\{u > v\}\) and \(\{u < v\}\) are open subsets of our model, so we just return whichever one we’re in.

These work fine, except for the pesky case where \(u = v\) – protons and neutrons are equally massive. In that case our first series of experiments never terminates and our second one has no answer to return.

So we have deterministic experiments (assuming we can actually deterministically measure things to that precision, which is probably false but I’m prepared to pretend we can for the sake of the example) that give us the answer we want, but it only works in a subset of our model: The quarter plane with the diagonal removed, which is no longer a connected set!
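The whole example can be sketched in a few lines of code. Here `measure` is a stand-in for a hypothetical instrument that returns the true value to within \(\frac{1}{n}\), and the cap on \(n\) is only there so the demonstration halts when \(u = v\):

```python
import random

def measure(true_value, n):
    """Simulate measuring a quantity to precision 1/n."""
    return true_value + random.uniform(-1 / n, 1 / n)

def heavier(true_u, true_v, max_n=10**6):
    """The experiment from the text: refine the precision until the
    measured gap exceeds 2/n, which pins down the sign of u - v."""
    n = 1
    while n <= max_n:
        u, v = measure(true_u, n), measure(true_v, n)
        if abs(u - v) > 2 / n:
            return "u" if u > v else "v"
        n *= 2
    return None  # the experiment is undefined at (and near) u = v

print(heavier(1.008, 1.007))  # -> u: a genuine gap is eventually resolved
print(heavier(1.0, 1.0))      # -> None: on the diagonal it never terminates
```

Note that whenever `heavier` returns an answer it is guaranteed correct, since two errors of at most \(\frac{1}{n}\) cannot manufacture a measured gap larger than \(\frac{2}{n}\); the only failure mode is non-termination.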

Fundamentally, this is a result about boundaries in our models of reality: any attempt to create a deterministic experiment will run into a set like the diagonal above. Suppose we had a deterministic experiment defined only on some subset of \(X\). Then we could find some \(o\) with \(U_o\) a non-empty proper subset of \(X\). The set \(\overline{U_o} \cap U_o^c\), where the closure of \(U_o\) meets its complement, is non-empty (because \(X\) is connected) and forms a boundary like the diagonal above: on one side of it we know that \(f\) returns \(o\), on the other side we know that it doesn’t return \(o\), but on the boundary itself it is impossible for us to tell.
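This boundary also has a practical cost signature in the proton/neutron example: the measurement procedure can only answer once roughly \(n > \frac{4}{|u - v|}\), so the precision required diverges as the model approaches the diagonal. A quick back-of-the-envelope calculation:

```python
# Precision needed to resolve u > v as the gap |u - v| shrinks: the
# measured gap must exceed 2/n despite two errors of up to 1/n each,
# which requires roughly n > 4 / |u - v|.
for gap in [1e-1, 1e-3, 1e-6, 1e-9]:
    n_required = 4 / gap
    print(f"gap {gap:g}: need measurements to precision 1/n with n > {n_required:g}")
```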

What are the implications?

Well, practically, not much. Nobody believes that any of the experiments we’re currently performing are fully deterministic anyway.

But philosophically this is interesting to me for a couple of reasons:

  1. I for one was very surprised that such a trivial topological result had such a non-trivial physical interpretation.
  2. The idea that non-determinism was some intrinsic property of measurement and not a consequence of underlying physical non-determinism is not one that had ever previously occurred to me.
  3. We need to be very careful about boundaries in our models of reality, because we often can’t really tell if we’re on them or not.
  4. It may in fact be desirable to assume that all interesting quantities are never equal unless we have a deep theoretical reason to believe them to be equal, which largely lets us avoid this problem except when our theory is wrong.

(As per usual, if you like this sort of thing, vote with your wallet and support my writing on Patreon! I mean, you’ll get weird maths posts either way, but you’ll get more weird maths posts, and access to upcoming drafts, if you do).