Quieting your Dell XPS 13’s Loud Fan

I’ve just replaced my old Dell XPS 13 with a new Dell XPS 13. I’ve never been 100% happy with the Dell XPS 13, but it’s always been a case of “mostly very happy indeed, but…”, and I didn’t want to research a new type of computer and get used to a new set of quirks, so I just went for the four-year-newer version of the same machine.

I spent the first two days very, very unhappy with this decision, because my computer was constantly running hot and sounding like it was trying to achieve liftoff. This was particularly pronounced when I was playing games, but it also happened under such high-stress activities as “opening Chrome”. I tried using Dell’s power manager (yes, I’m running Windows. Most actual work happens in the Linux subsystem, but I have reluctantly concluded that I actually like Windows) to put it in quiet mode, and the result was, well, less loud but also much hotter.

Anyway it turns out there’s an easy solution.

The source of the problem is that the new Dell XPS 13 has a very high-resolution but small screen, so Windows’s default configuration runs at native resolution scaled up by 300%. Unfortunately the Dell XPS 13 also has a fairly low-end graphics card, and I suspect (though I’m not certain) that this combination is the source of the problem: everything does much more work than it needs to, which is where the heat and noise come from.

In order to fix this, go to the “Change the resolution of your display” system settings (if you press the Windows key and type “resolution”, this should come up as the top hit). There you will see “Scale and Layout” set to something like 300% and the resolution probably set to “3840 x 2160 (Recommended)”. Change the scale to 100% and the resolution to one you feel comfortable reading the screen at (mine is 1920 x 1080). Windows will moan at you about this not being recommended. Do it anyway.

Once you have done this, you should almost immediately hear your fan spin down to nothing. I’ve had no fan noise or heat problems since doing it.

PS. Thanks to David Stark for previously having shared his woes with Windows’s scaling with me, which is what allowed me to diagnose this problem so quickly. Also he has a game out today, Airships: Conquer the Skies, so if thanks to his indirect advice you are now able to play games again without your computer becoming hotter than the sun, why not celebrate with some steampunk aerial violence?

This entry was posted in Uncategorized.

D&D Characters I Have Known And Loved

This tweet made me nostalgic for some of my old D&D characters:

I still have fond memories of many of them, and of the reactions they got from both the GM and many of the other characters.

I’ve been meaning to do this for a while, so I decided that I might as well let “at some point” be now. Sadly many of the details are now lost to time and memory, but here are the highlights that I remember.

I think my absolute favourite was Nicky. Nicky was a sorcerer. You could tell he was a sorcerer because he had a big floppy pointed hat and a magic wand with a big star on the end. He was 10 years old, but had come into rather a lot of power rather earlier than expected as the result of some sort of ancestral power inheritance ritual and a premature death in the family. He had been fobbed off on a long-suffering cleric family friend who basically acted as companion/baby sitter. He really wanted to go adventuring, and the pair of them were in turn fobbed off on the party by some local high ranking NPCs.

We were working on a spell points system in that campaign, and it was quite high level, so Nicky was able to be quite free with his use of magic. Mostly he only used two spells though: Fly and Polymorph Other. Sometimes when he was feeling versatile he used telekinesis.

If you are currently imagining me playing a hyperactive ten year old with more or less unlimited use of those spells, that look of absolute horror on your face is exactly the correct response. This was so much fun to play.

Nicky’s signature combat move was to stay well above the fighting, look for likely looking enemies, and shout “FROGGY. You’re a FROG!” at them. Often this would then turn out to be the case.

My absolute favourite Nicky moment was the following exchange. The party had been sent to help out some wizard. We met him at his front door. Specifically, Nicky insisted on being the one to knock.

The exchange went as follows:

Nicky: Hi! I’m Nicky! I’m a sorcerer!

GM in condescending adult-talking-to-children voice: Why hello there. I’m <wizard name>. I’m not a sorcerer. I’m a wizard. Do you know the difference?

Nicky: Yes, you need to read books! That’s boring and silly! I just get to cast spells. It’s much more fun.

GM: Oh yes, and what spells do you know?

Nicky, smiling mischievously: Well… there is one spell I know.

At this point, a dawning look of horror crosses the face of the entire party, and the GM is trying hard not to crack up.

GM: Go on then, show me.

Nicky: *waves magic wand* FROGGY! YOU’RE A FROG!

GM: *critical fails saving throw*. The wizard turns into a frog. He hops around in a very annoyed manner.

Eventually the party persuaded Nicky to turn the Wizard back into a human. This proved to be a mistake as he turned out to be the villain of the story and was much harder to turn into a frog the second time around.

Nicky disappeared off into parts unknown when I had to leave the group due to disappearing off into parts London. Unfortunately they were at the time in an entirely different plane, and Nicky didn’t know planeshift. I hope he’s OK.

Sokor was another fun character. He had three major features of note:

  1. He was a wizard with a rather unconventional approach. He used a lot of force bubbles, teleport spells, and high explosives kept in a bag of holding.
  2. He was actually a pretty decent person, but coded supervillain in a massive way. Among other things he had a fairly diabolical laugh and a companion named Lacky.
  3. Sokor was a small green squirrel, due to a prior unfortunate run-in with an overpowered NPC who cursed him (i.e. because I thought it would be funny).

Sokor was from the same campaign as Nicky, but they didn’t get along due to Nicky “helpfully” trying to turn him back into a human and it not going so well. This allowed me to swap the two characters in and out as I felt like.

Eventually, after some adventuring led to us finding a really nice dungeon, Sokor decided to turn it into his new lair. Also at some point he started a burger emporium – it had been a running joke that the only meat available in the local area was chicken, and Sokor spotted a business opportunity that could easily be solved with high level magic…

Sokor was separated from the party when they went dimension hopping.

One of the first characters I played as with this group was Gherran. Gherran was a wandering elven mage who the party happened across and in the usual D&D way went “OK I guess we’re BFFs now”.

NB this was a lie. Gherran was neither elvish nor a mage. He was a fallen (he preferred “tripped”) celestial with a weird and wonderful assortment of powers. His spell set was actually divine in nature, but between a suitable choice of spells and his native abilities he kept the party in the dark about this for a good six months. Eventually he was outed by an NPC, at which point an awful lot of discrepancies in my character build and options suddenly became clear to the other characters.

Gherran was overpowered as hell. The most significant aspect of his overpowered nature was his alternate form, which could pass through walls and throw bolts of energy at will (he could only assume this form a limited number of times per day). I also really enjoyed his ability to alter his appearance at will.

Unfortunately despite being overpowered he was also quite fragile. He decided to solo a demon he 100% should not have, and ended up being lost to the party. Due to his celestial nature, no rez was available. He then ended up in the wrong afterlife and things went rather downhill from there.

Marek was the character I played after Gherran. He found the party when looking for Gherran (who he knew as “the coffee maker”. I don’t really remember the story behind that).

Marek was a psion with a level dip in monk. Personality wise he was actually pretty boring, but his speciality in creation was super game breaking. He could basically create arbitrary mundane objects. This frequently led to him doing things like building bridges over complex traps and puzzles the GM had put in our way.

He also had a bag of holding and a decanter of endless water with a “hot water” mode, so among his more mundane utilities he was able to provide hot baths for everyone who wanted them while on the campaign trail.

All of this meant that his actual combat abilities were merely OK. He wasn’t awful, but he was almost as fragile as a wizard and without the level of heavy hitting required to make up for that. Towards the end of that campaign the party were caught in a full scale battle.

Marek looked around, rated his survival odds and

Marek cast planeshift. He was never heard from again.

Those are all the most notable examples, but there were a number of characters who didn’t last as long but of whom I still have fond memories.

  • There was a paladin character who was a surprising amount of fun. I decided that just because he cared about truth and justice didn’t mean he couldn’t be extremely scathing. He was constantly sarcastic and very good at dismissive putdowns of people who didn’t live up to his standards, which was most people. He also rode a griffin and wielded javelins of lightning, which was a delightful combination in combat.
  • There was the plot-critical time travelling heir to the throne. He was a fairly generic warrior build, with a specialisation in duelling and some weird trait that gave him advantages if he had something in his offhand and wasn’t too specific about what that something had to be. It was often a drink. He was irresponsible, constantly drunk, and frequently got bored and wandered off in search of a drink and/or fight. During a Very Important Meeting in which plot that was directly relevant to him was happening he decided to go try and teach the bartender how to make cocktails from his era. “Unfortunately” because he was actually very important the party ended up constantly scrabbling after him trying to keep him alive. Fortunately despite all this he was actually very effective in combat, so that wasn’t as difficult as it sounded.
  • I played a bard based on the observation that intimidate is a Charisma based skill. His other speciality was illusion, which due to a feat he could combine with his bardic music. Basically his only role in combat was to control the battlefield with illusions that would freak the enemy out while bolstering the party’s morale. One particularly memorable fight involved him playing his fiddle to “summon” an army of dancing skeletons. Another notable scene was 100% my best ever in-character full-on rage at another PC (out of character I wasn’t too annoyed. It was pretty much in his normal range and it was funny, but there was no way my character would have let it slide). Sadly I flubbed the intimidate roll on that, even after spending an action point to reroll it, so it didn’t actually work out that well in game.

I also played a few other less notable characters. There was the tiefling warlock who never really got around to betraying the party, the other bard whose only notable achievement was a rather inappropriate deployment of a “Feather Token: Oak” on a floating city (I don’t really remember why or how it went, but that was the very first character I played with that group and it probably gave them a taste of what they were in for), and I think a few others from one-offs that were even less memorable.

Sadly I moved away from that D&D group. I’ve made a few attempts to start up again since, but it’s never really gone anywhere. It’s a fairly large time commitment, especially for the GM.

This entry was posted in Uncategorized.

Test-Case Selection and Choice Theory

Attention conservation notice: Honestly you probably don’t care about this post unless the title sounds really intriguing to you. I’m not sure this matters at all.

At its most abstract, the test case selection problem is a function \(T : \mathcal{P}(A) \setminus \{\emptyset\} \to A\) such that \(T(U) \in U\). i.e. we have some set \(A\) of possible test cases, a bug that occurs in some non-empty subset of the test cases, and we want to select a test case to represent the bug. Call such a \(T\) a choice function.

(The axiom of choice states that choice functions always exist, but our sets of test cases are actually always finite, so we don’t need that here).
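For concreteness, here is a minimal sketch of this framing in Python. The names (`smallest_failing`, `is_buggy`) are purely illustrative, not from any particular tool: the bug is given as a membership query, and the choice function picks a representative of the non-empty failing subset.

```python
def smallest_failing(test_cases, is_buggy):
    """A choice function T: pick the minimal element of the failing subset.

    `test_cases` is a finite collection of candidate inputs, and
    `is_buggy` is the membership query for the set of test cases
    exhibiting the bug. Returns an element of that set, as required.
    """
    failing = [t for t in test_cases if is_buggy(t)]
    assert failing, "T is only defined on non-empty sets"
    return min(failing)

# Example: the "bug" occurs on every positive even number up to 100.
print(smallest_failing(range(101), lambda n: n > 0 and n % 2 == 0))  # -> 2
```

This particular `T` already hints at the cost issue discussed later: computing it requires querying every candidate.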

This elides some complexity in that:

  1. Our test case selection may be non-deterministic. In this case we can (by fixing the set of random choices made) treat this as a choice-function valued random variable, so it still reduces to this.
  2. The test-case selection might fail. We can ignore this problem by assuming that it will succeed eventually (assuming it doesn’t do something silly) and just running it until it does. For some methods this may take rather a long time and we might have to worry about implementation details like the heat death of the universe, but ¯\_(ツ)_/¯

This is actually a coincidence of naming for the most part, but choice functions are interestingly related to choice theory: The problem of expressing a preference between alternatives, and describing axioms of “rationality” for doing so.

One interesting axiom of rationality is that of contraction consistency. Contraction consistency is the requirement that if \(T(A) \in B \subseteq A\) then \(T(B) = T(A)\). i.e. if you picked \(T(A)\) as the best element of \(A\), removing elements from \(A\) that aren’t \(T(A)\) shouldn’t change your opinion!

On the face of it this seems reasonable, but it actually imposes a very strong restriction on what \(T\) can look like.

Theorem: If \(T\) is contraction consistent then \(T(A) = \min\limits_\prec A\) for some total order \(\prec\).


Proof: Define \(a \prec b\) if \(a = T(\{a, b\})\). This is antisymmetric because the set \(\{a, b\}\) doesn’t depend on the order of \(a\) and \(b\), reflexive because \(T(\{a\}) = a \in \{a\}\), and total because \(T(\{a, b\}) \in \{a, b\}\), so either \(a \prec b\) or \(b \prec a\).

So to show that it’s a total order we now need to show that it’s transitive. Suppose \(a \prec b\) and \(b \prec c\). Then \(T(\{a, b, c\}) = a\): if it were \(c\) then this would violate contraction consistency when considering \(\{b, c\}\), and if it were \(b\) then it would violate it when considering \(\{a, b\}\). Now, by contraction consistency, \(T(\{a, c\}) = T(\{a, b, c\}) = a\), so \(a \prec c\).

Now suppose \(b \in A\). By contraction consistency, \(T(\{T(A), b\}) = T(A)\). Therefore \(T(A) \prec b\), and so \(T(A) = \min\limits_\prec A\) as desired.
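The theorem is easy to sanity-check by brute force on a small example. Here is a sketch in Python (names and the particular order are mine): take \(T\) to be min under a scrambled total order, which is contraction consistent, then recover \(\prec\) from pairs as in the proof and confirm \(T\) always picks the \(\prec\)-minimum.

```python
from itertools import chain, combinations

universe = [0, 1, 2, 3]
rank = {0: 2, 1: 0, 2: 3, 3: 1}  # an arbitrary total order on the universe

def T(A):
    """A contraction consistent choice function: min under `rank`."""
    return min(A, key=lambda x: rank[x])

def nonempty_subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(1, len(xs) + 1))

# Check contraction consistency directly: removing elements other than
# the chosen one never changes the choice.
for A in nonempty_subsets(universe):
    for B in nonempty_subsets(A):
        if T(A) in B:
            assert T(B) == T(A)

# Recover the order as in the proof: a precedes b iff a = T({a, b}) ...
prec = {(a, b) for a in universe for b in universe if T({a, b}) == a}
# ... and check that T(A) is its minimum on every non-empty subset.
for A in nonempty_subsets(universe):
    assert all((T(A), b) in prec for b in A)
```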


What this means in practice is that the only contraction consistent test-case selection methods must be equivalent to bounded exhaustive enumeration: You have some total ordering over your test cases, and in order to return a test case \(a \in A\) you must have verified that \(b \not\in A\) for every \(b \prec a\). This is potentially very expensive if all you have is membership queries for \(A\)! If you have more structure on \(A\) (e.g. because of symbolic execution) then you could potentially rule out this happening without actually performing those membership queries.

If you adopt the classic approach of generating a random test case then running test-case reduction on it, you will typically require substantially fewer membership queries for the set, but the \(\prec\) relationship in the above proof may not even be transitive!

Say, for example, that we have \(A = \mathbb{N}\), and our test case reduction algorithm consists of iterating the operations \(n \to n / 2\) and \(n \to n - 1\) to a fixed point as long as they remain in the set. So for example if we have \(A\) as the set of even numbers and start from \(8\) then we will go \(8 \to 4 \to 2\), having tried \(7, 3, 1\) and concluded they weren’t in the set. If we’d started from \(10\) though we’d have become stuck, because both \(10 - 1\) and \(10 / 2\) are odd. This is then an example of intransitivity, because we have \(5 \prec 10\), and \(4 \prec 5\), but \(4 \not\prec 10\), and this intransitivity is in large part responsible for our failure to find a global minimum.
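As a sketch, this reducer (reading \(n / 2\) as integer division, since we are in \(\mathbb{N}\)) might look like the following in Python:

```python
def reduce_to_fixed_point(n, in_set):
    """Iterate n -> n - 1 and n -> n // 2 while the result stays in the set."""
    while True:
        for smaller in (n - 1, n // 2):
            if smaller > 0 and in_set(smaller):
                n = smaller
                break
        else:
            # Neither shrinking operation stayed in the set: fixed point.
            return n

is_even = lambda n: n % 2 == 0

print(reduce_to_fixed_point(8, is_even))   # -> 2 (via 8 -> 4 -> 2)
print(reduce_to_fixed_point(10, is_even))  # -> 10 (stuck: 9 and 5 are odd)
```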

(Note that it would be perfectly possible to have \(\prec\) be transitive and have \(T\) not be contraction-consistent – just make \(T\) do something different whenever \(|A| > 3\). Contraction consistency implies the transitivity of \(\prec\), but we still needed the full consistency to show that \(T(A)\) was the \(\prec\)-minimum).

This is roughly equivalent to the observation that classic choice theory only really works for logically omniscient agents: Test-case reduction is actually a boundedly rational agent that is unable (or unwilling) to run exponential time algorithms, while a logically omniscient agent is perfectly happy to do that and considers it a free action.

There is also an interesting topological perspective that allows us to firm up what we mean by \(T\) being “black box”, but I think I’ll leave that for another time.

This entry was posted in Numbers are hard, programming.

Notes On Eating More Vegetables

Epistemic status: Seems to be working so far, but I’m in the honeymoon period.

Content note: Food, diet, weight loss, etc.

I’m currently attempting to change my diet a bit. My goal is some mix of fat loss, energy levels, general health, and an irrational indoctrinated belief that eating healthily is a moral good that it is easier to succumb to than feel guilty about.

I pretty much treat most nutrition research and reporting on it as pure noise – it’s not that I disbelieve its conclusions per se, it’s just that the problem is so hard, the incentives so poor, and my level of expertise in the area so weak, that I’m not sure of my ability to extract signal from it. I mostly trust the information on examine.com, but am still pretty default sceptical even there.

Nevertheless, I believe the following two radical nutritional principles:

  1. Eating more vegetables is probably good for me.
  2. Eating less refined sugar is unlikely to be bad for me and maybe good for me.

(Doubtless there are people for whom either or both of these are false. e.g. if you have some major digestive problems then increasing the amount of fibre in your diet may be something you desperately need to avoid. Nevertheless I think these are true for me, and probably for most people)

I currently don’t feel like I get enough vegetables in my diet and, particularly in this hot weather, have a bit of an ice cream habit, so those are the two obvious things to address.

The vegetables issue is complicated – when I cook “properly” it tends to have a lot of vegetables in it. The issue is not that I don’t know how to cook with vegetables, it’s that my default cooking for myself is not especially proper and tends to involve a relatively low investment of effort. Thus the question is not “What can I cook with vegetables?” but “How can I make sure as many of my meals as possible have a large vegetable content?”.

The answer is not “try harder”. Trying to change behaviour on the strength of willpower alone is a mug’s game. Life inevitably settles into a local optimum, where you use the resources available to you (time, energy, cognitive capacity, emotional cope, etc.) together with your capabilities (skills, knowledge, possessions, people you can rely on, etc.) to achieve the best outcome you can (in terms of happiness, satisfaction, ethics, etc.) – over time habits gradually erode and your life basically performs a hill climbing algorithm to put you in something close to the optimal configuration available to you. Trying to change your behaviour on the strength of willpower alone means constantly fighting against this process, and as a result eroding your capacity and generally making your life worse.

The thing to do instead is to change where the local optimum is. There are roughly three ways to do this, in order of increasing difficulty:

  1. Add new capabilities which change the cost of achieving various improvements.
  2. Change your value function so that what you are optimising for becomes different.
  3. Find a new local optimum that is globally better.

My current diet changing strategy is mostly option 1 with a little bit of option 2.

The little bit of option 2 is roughly that I’m attempting to change my conception of what the correct proportion of vegetables in a meal is. I wrote about my standard meta-recipe a while back, and its proportions are roughly 3 : 2 : 1 carbs, vegetables, protein. I’m attempting to convince myself that the correct proportions are something closer to 2 : 1 : 1 vegetables, carbs, protein – i.e. vegetables should make up the bulk of the dish.

The easiest way to make this change is to just do it until it becomes natural. This is something where running on “just try harder” does work, because it’s not much harder (the difficulty is remembering to do it, not forcing myself to) and it’s for a bounded amount of time – after a while either this will become natural and I’ll stop having to remind myself, or I will turn out to have been wrong about there being a local optimum in this area and have to try something else.

The bigger part of the diet change though is to find ways to make it easier to include vegetables in my diet.

The easiest way of doing this is to have vegetables that I will basically always have to hand and can easily and happily add to everything:

  1. Frozen vegetables are great. In particular frozen peas, green beans, and corn are all easy to find, cheap, and tasty. Frozen vegetables seem to be about as good as fresh (or rather “it’s complicated”), but even if they’re not you can solve that by just eating more of them and if they’re substituting for something like rice, pasta, potatoes etc then they’re going to be strictly an improvement.
  2. Pickled things. I have big jars of sauerkraut and pickled beetroot. Both are tasty and go well with most things. I am unclear on the interaction between pickling and nutrition (the answer seems to be “mostly worse than fresh veg but does some things better”). Also, again, this is not substituting for fresh veg but instead for other parts of the meal.
  3. “Snacking” cucumbers – the small ones – are tastier than regular sized cucumbers, keep better, and you can just chop one up and put it on the top of a dish and that’s some extra veg right there (this isn’t much different from making a side salad, but the problem is that I don’t like most of the vegetables that people put in salads – I think leaves are a pointless vegetable, and tomatoes that have ever seen the inside of a fridge are a thing of sadness).

The nice thing about all of the above as a solution is that they work even for meals where I have zero energy. If I’m tired at the end of the day and have absolutely no motivation to cook then I might be inclined to do something lazy – fresh supermarket pasta, rice with eggs, mac and cheese, etc. I can still do that, but I can also just cut the portions in half and add a pile of veg to provide the other half with not substantially more effort.

The second category of thing is to have a repertoire of vegetable dishes that make for good leftovers. The ones I have right now are:

  1. Cauliflower cheese (which also fits into a side goal of mine of getting my body better able to deal well with brassicas again. Also frozen cauliflower florets make this a much easier dish).
  2. Cooked lentils with onions (I tend to do this in a home made chicken stock, but this could easily be made vegetarian)
  3. Coleslaw.
  4. Roast carrots.

The other thing to do is to identify specific habits that I would like to change and figure out replacement habits for them.

  1. I’ve discovered that frozen cherries are tasty and cheap and work entirely well as a substitute for ice cream cravings – rather than having ice cream I just pour out half a mug of frozen cherries and eat them directly. Not a 100% healthy habit, but a hell of a lot better than ice cream. Also yes I am aware that cherries are a fruit and not a vegetable, thank you.
  2. Often when I am feeling very very lazy my fallback meal is some sort of filled fresh supermarket pasta. I can just cut these packets in half (they’re already slightly too much food for one meal) and top it up with a large quantity of frozen peas and corn. As well as being healthier, the result is actually more interesting.

It’s also good to have easy dishes I can just lazily throw together. I’ve been using sweet potatoes with cheese on top for this (sweet potatoes totally count).

So far all of these seem to be pretty good – they’re mostly habits that I don’t think I’ll have any trouble sustaining, as long as I don’t forget they’re an option, so after a month or two of deliberately sticking with this I think my diet should slip into the new equilibrium rapidly.

Like I said, these are early days – I’ve only really been keeping this up for a week or so, so I’m not really sure how well I will be able to sustain it, but it seems promising. Certainly parts of it seem like things I’ll have no trouble keeping up – preparing dishes in advance is the only one that becomes energy constrained during bad periods. The rest are largely a matter of changing my defaults to something that I really like just as much.

Is it helping? Well, hard to say. Changes from improving your diet don’t really show up that quickly, so I can’t say that this new vegetable heavy diet feels amazing and wonderful, but at the least it doesn’t feel bad – I don’t feel like I’m missing out on anything and it still feels like I’m getting enough food – and it’s helping in the sense that I feel much happier about what I’m eating.

This entry was posted in Food.

DFA Minimization as a Fixed Point

Attention conservation notice: Unless you fit into a very similar academic niche to me this post likely uses techniques you don’t know to solve problems you don’t care about. Even if you are in that niche this is more of a theoretical curiosity than a useful technique at present, but it was useful for me to think through this to see how some stuff connects up.

I’ve been reading a bit about probabilistic bisimilarity distances recently, and I wanted to explore how it relates to things I’ve previously studied about minimizing deterministic finite automata.

The basic trick they use is to realise the object we want (an equivalence relationship or a pseudometric in this case) as an element of some complete lattice, define a monotonic function on that lattice such that a fixed point of it will have the desired properties, and then use the Knaster-Tarski fixed point theorem to show that the set of such fixed points is non-empty. The desired object is then either the smallest or largest fixed point (both of which the theorem guarantees to exist, and indeed shows us how to construct).

This turns out to work with deterministic finite automata too!

Suppose we have some deterministic finite automaton \(M\) with states \(S = \{s_1, \ldots, s_n\}\), alphabet \(A\), transition function \(\tau : A \times S \to S\) and acceptance function \(\alpha: S \to \{0, 1\}\). We want to construct the minimal deterministic finite automaton which matches the same language by merging equivalent states. This requires us to define an equivalence relationship \(\sim\) on \(S\) such that \(s_i \sim s_j\) if and only if the languages starting from \(s_i\) and \(s_j\) are the same. This is called the Myhill-Nerode relationship.

We can realise this equivalence relationship as follows:

We use the observation that the set of equivalence relationships over any set forms a complete lattice (to get the upper bound of a set of equivalence relationships, just take their unions as sets and then take the transitive closure to get back to an equivalence relationship).

What we are looking for is an equivalence relationship with the following two properties:

  1. If \(s_i \sim s_j\) then \(\alpha(s_i) = \alpha(s_j)\).
  2. If \(s_i \sim s_j\) and \(a \in A\) then \(\tau(a, s_i) \sim \tau(a, s_j)\).

i.e. no accepting state is equivalent to a non-accepting state, and the equivalence relationship works with the transition function so that if two states are equivalent they transition to equivalent states given the same character.

Any such equivalence relationship is called a bisimilarity.

We can find such an equivalence relationship as follows:

Let \(E\) be the set of equivalence relations over \(S\) and define \(F: E \to E\) as \(F(R) = \{(s, t) \in S^2: \alpha(s) = \alpha(t) \wedge \forall a \in A, (\tau(a, s), \tau(a, t)) \in R\}\).

(The fact that if \(R\) is an equivalence relationship then so is \(F(R)\) is not immediately obvious, but follows from some fairly straightforward mechanical checking that I will leave as an exercise for the interested reader)
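As a sketch, \(F\) transcribes almost directly into Python if we represent a relation as a set of state pairs. The encoding of the DFA via `tau` and `accepting` is my own illustrative choice, not from any particular library:

```python
def F(R, states, alphabet, tau, accepting):
    """One refinement step: keep only the pairs that agree on acceptance
    and whose transitions land in R-related states for every character."""
    return {
        (s, t)
        for s in states
        for t in states
        if accepting(s) == accepting(t)
        and all((tau(a, s), tau(a, t)) in R for a in alphabet)
    }

# Tiny example: two states with self-loops, only state 1 accepting.
states = [0, 1]
R0 = {(s, t) for s in states for t in states}
R1 = F(R0, states, "a", lambda a, s: s, lambda s: s == 1)
print(R1 == {(0, 0), (1, 1)})  # True: the accepting state is split off
```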

This definition may seem somewhat non-obvious, so lets unpack it:

  1. We require that the resulting equivalence relationship always respects \(\alpha\). i.e. we throw away anything we would otherwise get where it disagrees on whether the state should be accepting.
  2. We take the new relationship to be the set of pairs that the old relationship cannot distinguish any transitions from. So for example if we had states \(s_i, s_j\) such that every transition from them went to an \(R\)-equivalent state, \(F(R)\) would treat them as equivalent. Equally, if we had two \(R\)-equivalent states with \((\tau(a, s_i), \tau(a, s_j)) \not\in R\) for some \(a\) (i.e. \(R\) distinguishes some transition from them), then \(F(R)\) would not treat them as equivalent.

So you can think of \(F\) as taking an equivalence relationship and returning something that is in some sense “a bit closer” to being a bisimilarity. If that intuition is correct, then it should be the case that if \(R\) is a fixed point of \(F\) then it is a bisimilarity.

Suppose \(R = F(R)\). Then by construction of \(F\) we have the first property satisfied (that if \(s \sim t\) then \(\alpha(s) = \alpha(t)\)).

For the second, suppose that we have \(s, t, a\) with \((\tau(a, s), \tau(a, t)) \not\in R\). Then by definition of \(F\) we must have that \((s, t) \not\in F(R) = R\). i.e. if \((s, t) \in R\) then \((\tau(a, s), \tau(a, t)) \in R\).

So in order to find a bisimilarity we just need to find fixed points of \(F\), but this is exactly what the Knaster-Tarski fixed point theorem lets us do. \(F\) is monotonic, i.e. if \(R \subseteq R'\) then \(F(R) \subseteq F(R')\) (another “not totally obvious but a mechanical exercise for the interested reader to check” claim), and the set of equivalence relationships forms a complete lattice, so the set of fixed points of \(F\) also forms a complete lattice.

In particular there is a greatest fixed point of \(F\), called the bisimulation relationship. This is exactly the \(\sim\) relationship we were looking for (because if some word is accepted starting from one of two states but not from the other, those states could not have been bisimilar, so the relationship relating all language-equivalent states is necessarily the maximal one).

Additionally, the proof of the Knaster-Tarski fixed point theorem shows us how to construct this relationship: We just start with the largest equivalence relationship on \(S\) and repeatedly apply \(F\) to it until we get a fixed point (in general “repeatedly” might require transfinite induction, but because \(S\) is finite so is the set of equivalence relationships over \(S\) and as a result we only have to iterate finitely many times).
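The whole construction fits in a few lines of Python. This is a sketch under my own encoding assumptions (`tau` as a dict keyed by character and state, `accepting` as a set), not a reference implementation:

```python
def bisimulation(states, alphabet, tau, accepting):
    """Iterate F from the largest equivalence relation until it stops
    changing, returning the greatest fixed point as a set of pairs."""
    def F(R):
        return {
            (s, t)
            for s in states
            for t in states
            if (s in accepting) == (t in accepting)
            and all((tau[a, s], tau[a, t]) in R for a in alphabet)
        }

    R = {(s, t) for s in states for t in states}  # conflate everything
    while F(R) != R:
        R = F(R)
    return R

# A 3-state DFA over {"a"}: states 1 and 2 both accept every string,
# while state 0 rejects the empty string, so 1 and 2 should be merged.
states = [0, 1, 2]
tau = {("a", 0): 1, ("a", 1): 2, ("a", 2): 1}
accepting = {1, 2}
R = bisimulation(states, ["a"], tau, accepting)
print((1, 2) in R, (0, 1) in R)  # True False
```

Merging each state with its equivalence class under the returned relation then gives the minimal DFA.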

But if you squint at it right, this is actually just Hopcroft’s algorithm for DFA minimization! We start with an equivalence relationship which conflates everything, and then we split states apart as we find that they have the wrong label or lead to states that are now known to be inequivalent, iterating this until we reach a point where there are no more states to split (i.e. a fixed point of \(F\)). The steps are slightly different in that we actually just split the states by labels once at the beginning and then don’t have to do so again, but because of the way \(F\) behaves in these circumstances the difference is pretty inessential (in particular we always have \(F(R) \subseteq R\) for \(R\) obtained by iterating \(F\) from the largest equivalence relationship, even though this does not hold for arbitrary \(R\)).

Is this observation useful?

Well, not really. It’s useful in the sense that it helped me understand why fixed points on lattices are such a useful technique, and it was nice to relate this back to some things I already understood. So it’s a useful exercise in pedagogy.

Are there any applications of this? Well, maybe. In the context of the probabilistic work the next step here is to look at other interesting lattices which can use similar structure, and that’s pretty interesting, but I’m not sure how useful it is in the language case. Also most of the ways in which it’s useful are likely better served by transforming it into the probabilistic case.

This entry was posted in Automata Theory.