Specifically, that there’s this thing called Za’atar and it’s amazing and oh god you should be eating so much of it and why aren’t you?

I first encountered it as a kid growing up in Saudi Arabia. Then we moved to England and I forgot about it for about 10 years. At some point in my early 20s I said to my mother “So… I remember eating this thing with yoghurt and pita bread as a kid. What *was* that?”, rediscovering it as a result and finding out that it was even more amazing than I remembered.

Confusingly, Za’atar refers to two different things. It is both a species of herb (related to thyme; one of its English names is “wild thyme”. This confused me so much that it’s only in writing this blog post that I’ve understood that they aren’t the same thing, and why my past experiments with making it have been so disappointing) and the name of a spice mix made from said herb.

As a result the (non-recursive) basic ingredient list for Za’atar is:

- Za’atar
- Sumac
- Toasted sesame seeds
- Salt

It frequently contains other herbs and spices as well. I’ve no real idea what the “correct” ones are and it seems to vary a lot from brand to brand and person to person.

The result is this wonderful slightly sharp and spicy savoury mix.

The correct way to eat za’atar is to basically put it in large quantities on everything.

That being said, 90% of the time I eat it in a much simpler way: take pita bread, dip pita bread in Greek yoghurt, dip yoghurt-coated pita bread in za’atar. You can also substitute olive oil or hummus for the yoghurt in this.

Other ways of eating it include making some sort of flatbread and topping it with olive oil and za’atar before baking, or heaping it on top of a lightly dressed salad (just using a bit of olive oil for the dressing works well; you can also add lemon to give it a little more of a kick). There are a lot of za’atar based recipes, but I’m such a fan of the pita bread mode of eating that I’m always slightly hesitant to experiment in case it would be disappointing.

Which brings us to… the dark side of the za’atar.

It’s available in London, certainly. Also online. The problem is that it’s available in the same way that spices are available: You pay £2-5 for a little 50g sachet of it.

A 50g sachet of za’atar is approximately 1.5 servings. Maybe 2 if you stretch it.

This can be worked around. It takes a bit of hunting, but there are stores, both physical and virtual, which will sell you za’atar for a sensible price (generally speaking my rule of thumb for spices is that you should be paying under 10% of the per-gram price you’d get in a normal supermarket – so in the above case £2-5 should buy you about 500g). There are online shops for za’atar which will do this, and also some Turkish supermarkets (though I’ve not been super impressed with the quality of some of the za’atar I’ve bought in London. It’s been *ok*, certainly better than not having za’atar, but it’s not been amazing. I think this may be because it’s made with a mix of actual thyme and other green herbs in an attempt to approximate real wild thyme).

I can’t offer great advice on provisioning it unfortunately because my main source is friends and family in the middle east. I just want to make it clear that these are the quantities you should be buying it in.

And that if you haven’t tried it you should definitely be buying it.

In June I will be joining Google (specifically the Zurich branch) to work on Knowledge Graph.

This move has not been universally popular. There are some things that Google does that have failed to endear themselves to a number of people I know (some of these I agree with. e.g. I’m definitely not a fan of the real names policy).

But… you know, they also make really good software. I don’t really acknowledge the concept of “more good than harm”, but Google do a *lot* of good, and I can’t help but see improving the quality of access to information for billions of people as both unambiguously good and more useful than any software I’ve worked on to date. So I’m pretty excited about that.

There is, however, one thing that I am legitimately quite concerned about in joining Google: my primary experience of people joining Google is that blogs I read get a post saying “I’m joining Google, but don’t worry: I won’t fall into a black hole like everyone else who joins Google. I’ll definitely keep blogging”. Then maybe they write one or two blog posts shortly after that, and the next one is the one several years later where they announce that they’re leaving Google to move on to other things.

Well, I’m joining Google, but don’t worry: I won’t fall into a black hole like everyone else who joins Google. I’ll definitely keep blogging.

A colleague (I forget which one) said the other day that he wasn’t worried because he was pretty sure no power on earth could stop me from blogging. I’m not quite so confident. There have been some pretty long periods (I think the longest was 6 months?) in the past where I’ve not blogged at all, and it wouldn’t be surprising if I had another one.

I’d quite like that not to happen, but I’m not under any impression that I’m in some way special. Lots of other people who wanted to keep blogging also stopped.

One way in which I’m a *bit* special is that most of those blogs were purely technical, and I know that part of what stops Googlers from blogging is that it’s difficult to blog about technical things when you’re immersed in the Google ecosystem and can’t share the details without extensive clearing from the legal department. I on the other hand blog about plenty of other things – maths, feminism, fiction, voting, etc. As far as I know it should still be fine to keep blogging about all of those.

But I don’t really feel confident that that’s enough. I still haven’t entirely convinced myself that beeminder is useful (I’ve been using it to keep me reading books, but I’m not sure how much it’s helping vs just intention), but I figure I might as well give it a try. Starting at the beginning of May I’m going to set up a beeminder requiring me to write at least one blog post every two weeks (my normal blogging rate is more like one a week, but I figure I should give myself some slack. If I end up vastly exceeding this I may raise the rate. If it turns out to be intractable, I may lower the rate to one a month, but I don’t think I’ll have to do that). Worst case scenario you’ll get a whole bunch more book reviews, half-baked fiction and a few “So, Switzerland. What’s up with that?” posts.

(Note: Asking questions like this is the mathematics equivalent of my asking small questions approach to learning)

Eventually I realised where it was hiding. It’s not actually in the existence part of the proof, it’s in the uniqueness: If the space is *not* locally compact then you can’t cover enough points with functions of compact support and thus there will be a large chunk of the space that your functions just ignore and you can use whatever measure you like there.

More detailed proof: Let \(x\) be a point with no compact neighbourhood. Then every function \(f\) of compact support has \(f(x) = 0\) as otherwise the support of \(f\) would be a compact neighbourhood of \(x\). Therefore the measure which assigns a mass of 1 to \(x\) is indistinguishable from the \(0\) measure by integrating against functions of compact support. QED
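In symbols (just restating the last step above, writing \(\delta_x\) for the unit point mass at \(x\)): for every \(f\) of compact support,

\[ \int f \, d\delta_x = f(x) = 0 = \int f \, d\mu_0, \]

where \(\mu_0\) is the zero measure, so \(\delta_x\) and \(\mu_0\) agree against all of \(C_c(X)\) despite being different measures.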

This led me to think about the structure of locally compact subsets of topological spaces. In particular I noticed the following:

Theorem: Let \(X\) be a regular topological space. Then there is a maximal open set \(A \subseteq X\) such that \(A\) is locally compact in the subspace topology.

Proof:

Let \(A\) be the set of points with a compact neighbourhood (that is, points \(x\) for which there is an open \(U \ni x\) with \(\overline{U}\) compact).

Then certainly every locally compact open subset of \(X\) is contained in \(A\): Let \(B\) be such a subset and let \(x \in B\). Then there exists \(x \in U \subseteq B\) with \(\overline{U} \subseteq B\) compact (because the closure is compact it doesn’t matter whether we mean closure in \(B\) or in \(X\)). Thus by definition of \(A\), \(x \in A\).

So we need only show that \(A\) is locally compact.

Suppose \(x \in A\). Note that \(A\) is open: any \(U\) witnessing \(x \in A\) also witnesses \(u \in A\) for every \(u \in U\), so \(U \subseteq A\). Hence \(A^c\) is closed, and because \(X\) is regular we have open sets \(T, V\) with \(x \in T\), \(A^c \subseteq V\) and \(T \cap V = \emptyset\).

Now suppose \(x \in U\) with \(\overline{U}\) compact (such a \(U\) exists by definition of \(A\)). Then \(x \in U \cap T\). But \(\overline{U \cap T} \subseteq \overline{U}\), so it is a closed subset of a compact space and thus compact. Further, \(U \cap T \subseteq T \subseteq V^c\), which is closed, so \(\overline{U \cap T} \subseteq V^c \subseteq A\). Hence \(x\) has an open neighbourhood whose compact closure is contained in \(A\). Thus \(A\) is locally compact in the subspace topology.

QED

So essentially \(A\) is the set of points you can distinguish with functions of compact support, right?

Well. Almost.

It turns out to be relatively easy to find an example where there is a function of compact support whose support is not contained in \(A\). In order to do this we just need to construct an example where \(\overline{A}\) is compact.

To do this we’ll glue together my two favourite examples of a locally compact space and a non locally compact space. Let \(X = \mathbb{N} \cup l^\infty\). In order to distinguish the zeros, let \(\tau\) be the 0 of \(l^\infty\).

We will give this the topology generated by the following basic open sets:

- \(\{n\}\) for \(n \in \mathbb{N}\)
- \(B(x, \epsilon)\) for \(x \in l^\infty\) with \(x \neq \tau\) and \(\epsilon > 0\)
- \([n, \infty) \cup B(\tau, \epsilon)\) for \(n \in \mathbb{N}\) and \(\epsilon > 0\)

where \(B(x, \epsilon)\) is the \(l^\infty\) ball.

So essentially we’re gluing together these two spaces by treating the \(0\) of \(l^\infty\) as the “point at infinity” in the one point compactification of \(\mathbb{N}\).

Then in this case \(A = \mathbb{N}\): \(\mathbb{N}\) is a locally compact open subset of \(X\) and any point \(x \not\in \mathbb{N}\) has no compact neighbourhoods (because no open subset of \(l^\infty\) has compact closure). But \(\overline{A} = \mathbb{N} \cup \{\tau\}\) which is homeomorphic to the one point compactification of \(\mathbb{N}\) and thus compact.

This then leads us to our definition of a function whose compact support is not contained in \(A\): Let \(f(n) = \frac{1}{n}\) for \(n \in \mathbb{N}\) and \(f(x) = 0\) for \(x \in l^\infty\). Then \(f\)’s support is \(\overline{A}\) which is compact, and so \(f\) has compact support.

(Note that we could have arranged for \(\overline{A}\) to be an arbitrary compactification of \(A\) using a similar construction: Take the compactification and glue a distinct copy of \(l^\infty\) to each point at infinity)

In general the functions of compact support on \(X\) restrict to functions on \(A\) which vanish at infinity (that is, for every \(\epsilon > 0\), \(\{x : |f(x)| \geq \epsilon\}\) is compact).

Proof: Let \(\epsilon > 0\). Then \(\{x \in X : |f(x)| \geq \epsilon\} \subseteq \mathrm{supp}(f)\) so is a closed subset of a compact space and thus compact. We thus only need to show that it is a subset of \(A\) to prove the result.

But it is a subset of \(\{x \in X : |f(x)| > \frac{1}{2}\epsilon \}\) which is an open set whose closure is contained in \(\{x \in X : |f(x)| \geq \frac{1}{2}\epsilon \}\), which is compact. Every open set with compact closure is a subset of \(A\), so \(\{x \in X : |f(x)| \geq \epsilon\} \subseteq A\) as desired. Thus \(f|_A\) vanishes at infinity.

QED

Is the converse true? It turns out not. The following is true however:

Given \(f : A \to \mathbb{R}\) vanishing at infinity we can extend it to a continuous function \(f: X \to \mathbb{R}\) with support contained in \(\overline{A}\).

The obvious (and only possible) definition is to extend it with \(f(x) = 0\) for \(x \not\in A\). Does this work?

Let \(\epsilon > 0\) and \(y \not\in A\). The set \(U = \{x : |f(x)| \geq \epsilon\}^c\) is an open set containing \(y\) such that for \(u \in U\), \(f(u) \in B(0, \epsilon)\), so \(f\) is continuous at \(y\) as desired.

QED

The problem is that in general there’s no reason to expect \(\overline{A}\) to be compact. Consider for example pasting \(\mathbb{N}\) and \(l^\infty\) together and *not* joining them together, just treating this as a disjoint union. Then \(A = \overline{A} = \mathbb{N}\) and the extension of the function does not have compact support.

So in general we have \(C_c(A) \subseteq C_c(X) \subseteq C_0(A)\), and it’s possible to have either or both of these inclusions be equalities (to get both you just choose \(X\) to be any locally compact space so that \(A = X\)). I’m not sure it’s possible to say more about it than that.

I’d be lying if I said I fully understood all the material: It’s quite dense, and my ability to read mathematics has atrophied a lot (I’m now doing a reread of Rudin to refresh my memory). But there’s one very basic point that stuck out as genuinely interesting to me.

When introducing measure theory, it’s common to treat sigma-algebras as this annoying detail you have to suffer through in order to get to the good stuff. They’re that family of sets that it’s really annoying that it isn’t the whole power set. And we would have gotten away with it, if it weren’t for that pesky axiom of choice.

In Probability with Martingales this is not the treatment they are given. The sigma algebras are a first class part of the theory: You’re not just interested in the largest sigma algebra you can get, you care quite a lot about the structure of different families of sigma algebras. In particular you are very interested in sub sigma algebras.

Why?

Well. If I may briefly read too much into the fact that elements of a sigma algebra are called measurable sets… what are we measuring them with?

It turns out that there’s a pretty natural interpretation of sub-sigma algebras in terms of measurable functions: If you have a sigma-algebra \(\mathcal{G}\) on \(X\) and a family of measurable functions \(\{f_\alpha : X \to Y_\alpha : \alpha \in A \}\) then you can look at the smallest sigma-algebra \(\sigma(f_\alpha) \subseteq \mathcal{G}\) for which all these functions are still measurable. This is essentially the collection of measurable sets which we can observe by only asking questions about these functions.

It turns out that every sub-sigma algebra can be realised this way, but the proof is disappointing: Given \(\mathcal{F} \subseteq \mathcal{G}\) you just consider the identity function \(\iota: (X, \mathcal{G}) \to (X, \mathcal{F})\), and \(\mathcal{F}\) is the sigma-algebra generated by this function.
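On a finite sample space this is completely concrete: \(\sigma(f)\) is just the collection of all unions of the preimage blocks \(f^{-1}(y)\). A quick Python sketch (the function and variable names are mine, purely for illustration):

```python
from itertools import combinations

def generated_sigma_algebra(X, f):
    # Group the points of X by the value f takes on them: the preimage partition.
    blocks = {}
    for x in X:
        blocks.setdefault(f(x), set()).add(x)
    blocks = list(blocks.values())
    # On a finite space, sigma(f) is exactly the set of all unions of
    # preimage blocks (the empty union contributes the empty set).
    algebra = set()
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            algebra.add(frozenset().union(*combo))
    return algebra

# Two coin flips; the first coordinate map only "sees" the first flip.
X = [(a, b) for a in (0, 1) for b in (0, 1)]
F1 = generated_sigma_algebra(X, lambda w: w[0])
# F1 has 4 events: {}, {first=0}, {first=1}, and all of X.

# Observing both flips generates a strictly finer sigma-algebra.
F12 = generated_sigma_algebra(X, lambda w: w)
assert F1 <= F12
```

The `assert` at the end is the finite shadow of the nesting \(\sigma(X_1) \subseteq \sigma(X_1, X_2)\) discussed below: asking questions about more functions can only enlarge the family of observable events.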

One interesting special case of this is sequential random processes. Suppose we have a sequence of random variables \(X_1, \ldots, X_n, \ldots\) (not necessarily independent, identically distributed, or even taking values in the same set). Our underlying space then captures an entire infinite chain of random variables stretching into the future. But we are finite beings and can only actually look at what has happened so far. This gives us a nested sequence of sigma algebras \(\mathcal{F}_1 \subseteq \ldots \subseteq \mathcal{F}_n \subseteq \ldots \) where \(\mathcal{F}_n = \sigma(X_1, \ldots, X_n)\) is the collection of things we can measure at time \(n\).

One of the reasons this is interesting is that a lot of things we would naturally pose in terms of random variables can instead be posed in terms of sigma-algebras. This tends to very naturally erase any difference between single random variables and families of random variables. e.g. you can talk about independence of sigma algebras (\(\mathcal{G}\) and \(\mathcal{H}\) are independent iff \(\mu(G \cap H) = \mu(G) \mu(H)\) for all \(G \in \mathcal{G}, H \in \mathcal{H}\)), and two families of random variables are independent if and only if the sigma algebras they generate are independent.
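As a sanity check (my own sketch, not from the book): on a finite space with the uniform measure, the product-rule definition of independence of sigma algebras can be verified by brute force over every pair of events. All names here are hypothetical illustration, not any library’s API:

```python
from fractions import Fraction
from itertools import combinations

def sigma(X, f):
    # Sigma-algebra generated by f on finite X: all unions of preimage blocks.
    blocks = {}
    for x in X:
        blocks.setdefault(f(x), set()).add(x)
    blocks = list(blocks.values())
    algebra = set()
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            algebra.add(frozenset().union(*combo))
    return algebra

def independent(X, G, H):
    # Check mu(G & H) == mu(G) * mu(H) for every pair of events,
    # under the uniform probability measure on X.
    mu = lambda S: Fraction(len(S), len(X))
    return all(mu(g & h) == mu(g) * mu(h) for g in G for h in H)

# Two fair coin flips: the two coordinate maps are independent...
X = [(a, b) for a in (0, 1) for b in (0, 1)]
assert independent(X, sigma(X, lambda w: w[0]), sigma(X, lambda w: w[1]))
# ...but the first flip is not independent of the total number of heads.
assert not independent(X, sigma(X, lambda w: w[0]), sigma(X, lambda w: w[0] + w[1]))
```

Note that the check never mentions the random variables themselves once the sigma algebras are built, which is exactly the point of the abstraction.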

A more abstract reason it’s interesting is that it’s quite nice to see the sigma-algebras play a front and center role as opposed to this annoyance we want to forget about. I think it makes the theory richer and more coherent to do it this way.

Where I spotted an entire book on statistical inference that I have literally no recollection of buying.

It looks pretty good, too. I haven’t actually read much of it just now (I probably have in the past), but it seems decent based on a skim through.

But it drove home a thing I’m realising recently: how little of the knowledge that is contained on my bookshelves I actually know.

Looking around the shelves there are a *lot* of books there I haven’t really read. They fall into a bunch of categories:

For some of them, this is legit – there are a lot which I’ve partially read before abandoning that path of my life (I have a lot of books on functional analysis, set theory, etc which I’m still interested in in the abstract and can’t bring myself to get rid of but honestly I will never study again).

Some of them I’ve picked up, read as much of them as I could, thought “Oh god this is too hard, I need something simpler”, and either bought a simpler book to supplement them or abandoned the subject.

Some of them *are* that simpler book, and I’ve instead ended up acquiring the information a different way and they are now too basic for me.

Some of them I’ve bought only to discover that they’re *terrible* and not really worth reading.

Some of them are reference books where the concept of having “read” them doesn’t really apply.

A lot of them though? I think what has happened is a chain of thought that goes “I wish to learn about X” *buys book about X* “Yay. I have learned about X” (In one case this is literally true. I have an unread book about Xlib up there. Fortunately I also no longer care to learn about Xlib).

There’s an idea that’s common in some fitness communities: You shouldn’t pre-announce your fitness goals, because it gives you much the same psychological rewards as actually achieving those goals, and thus makes you less likely to put in the effort to really achieve them. I don’t know if this is true – I don’t really know enough about the psychology of reward mechanisms to say (Ooh. I should buy a book on that. Wait. No. Bad David) – but it has plausibility, and I think something like that might be happening here: Surrounding myself with books about a subject in some way satisfies my desire to learn about that subject even though no learning actually takes place.

A little of this is probably healthy and normal. But there’s a *lot* of this going on with my shelves. I suspect that there’s a year of reading (spare time reading rather than solid) even if I only count the books that I actually care about still learning.

So I’m going to do two things to try and change this.

OK, three things, because I’m aware that I’m pre-announcing my goals right after I said that pre-announcing your goals is a great way to not achieve them. But the third thing is “achieve my goals despite having pre-announced them through a careful application of regularly reminding myself I haven’t achieved this goal”.

The first thing: Always have a physical non-fiction book on me when leaving the house. I can’t guarantee that this will cause me to read it, but I can guarantee that times when I *don’t* have a book with me are times when I’m not going to be reading a book.

The second thing: I was saying the other day that I didn’t really have any mid-range goals suitable for using beeminder. All the habits I’m trying to form seem to be either not worth the additional monetary stress or ones I’m already able to achieve on my own. Well, now I’m wrong, so I no longer have an excuse not to try it, so I’m trying it. I’ve committed to reading a non-fiction book every four weeks (I’ve counted my recent read of Mathematical methods in the theory of queuing to get me started). A book every four weeks should be easy. I normally read 3-4 books *per* week. Granted those are fiction, where my reading rate is absurd compared to my reading rate for non-fiction, but even so. If anything I’m hoping that it will be a pessimistically low rate and I’ll be able to raise it later.

Both of these will bias me towards shorter books (the former because longer books are *heavy*. The latter because I can’t necessarily finish a longer book in a month). If this proves to be a problem I’ll redefine my beeminder goal in terms of a “typical” book size and count book carrying as an exercise goal. Really though, I don’t have a problem with being biased towards shorter books for now: I *like* short books, and I have plenty of interesting ones to keep me going for now.

I’ve no idea if these will be sufficient to sort out this problem, but hopefully they’ll be a good start.
