Category Archives: Uncategorized

Block First, Ask Questions Later

I’ve been doing something for the last six months or so that has greatly improved my experience of Twitter.

The short version is that I’ve been blocking much more often than I historically have, and it’s great.

The slightly longer version is that I ask myself “Would continuing to interact with this person be a net positive or a net negative for me?”, and if the answer is that it’s a negative, I block them.

This still feels quite emotionally difficult. Blocking feels like a judgement on them as a human being, like I’m declaring them to be a terrible person. It’s not. All blocking means is “I do not want to interact with this person any more”, and that is 100% a right everyone has to assert for any reason. Interactions should be consensual (wherever possible – some interactions are e.g. legally necessary. Thankfully that doesn’t apply on Twitter), and you have the right to withdraw consent at any time.

It’s still hard. Sometimes I spend a couple of seconds agonising over the block button, but then I press it anyway and feel instant relief.

I think part of why this is such a big improvement on Twitter in particular is that Twitter is public, which means that there is an almost unbounded supply of people who might end up interacting with you. If a tweet gets popular, you might get hundreds (or more!) replies to it, and there’s only one of you available to deal with those replies. This makes it important to manage your time on those correctly, and blocking people is about the most efficient way of doing that (other than “Mute this conversation”, which I can also highly recommend).

It’s hard to tell whether someone is going to be a net positive or negative of course, but there are a couple of rules of thumb I apply:

  1. People I actually know almost never get blocked this way. I think I have applied this rule to exactly one mutual, and they were someone who I kinda-sorta knew and occasionally interacted with rather than a friend. I occasionally mute friends, but I try to remember to turn it off later (it would be nice if there were a timed mute button, but I think you can only do that for keywords).
  2. The easiest way to trigger this is on a first interaction. If someone’s first interaction with me is rude and wrong, I will just block them and I won’t even feel bad about it. This is a little bit of a judgement on them as a person – this is the sort of behaviour that makes Twitter a tire fire, and participating in it is a jerk move.
  3. If someone’s first interaction with me is rude but right (in an interesting rather than trivial way) I might block them, or I might give them a chance. Depends how important I think the point they’re making is. If someone’s first interaction with me is polite but wrong then I’m unlikely to block them for it.
  4. The other thing that I occasionally block people for is when they repeatedly interact with me in some way that I find tedious. I do think I’m being rude by doing this, but the block button is way less socially painful for me to deploy than an actual conversation where I have to say “Sorry I just don’t find you very interesting”. I’d say apologies to anyone who has been on the receiving end of this, but honestly I plan to keep doing it so apologising would be a bit hypocritical.

That’s not an exhaustive list, but I think it covers most of the recurring patterns I see when doing this, and the important thing is the general principle: It’s OK to block people, and if interacting with them is going to make life worse, then blocking them will probably make your life better.


This entry was posted in Uncategorized on by .

An algorithmic puzzle

I posted the following problem on Twitter yesterday: I have intervals \(L_1, \ldots, L_m\) which are represented as half-open intervals \(L_k = [u_k, v_k)\). I know that the intervals are all distinct and that for any two intervals \(L_i, L_j\) with \(i \neq j\) either one is contained in the other or the two are disjoint.

I want to sample uniformly from all pairs \(\{(i, j) : L_i \cap L_j = \emptyset \}\), or determine that there are no such pairs. How?

A solution now follows. It might not be the optimal solution, but it’s pretty close.

The two observations that make the solution work are:

  1. If every interval is disjoint from at least one other interval, then rejection sampling runs in expected time at most \(O(m)\): for any \(i\), at least one of the \(m - 1\) possible choices of \(j\) is disjoint from \(L_i\), so the probability of accepting a uniformly random pair is at least \(\frac{1}{m - 1}\).
  2. A root interval (one containing everything else) is contained in no disjoint pair and thus can be removed from the list without changing the distribution. If there is no root interval then every interval is disjoint from at least one other.
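To make the rejection-sampling step concrete, here is a minimal Python sketch of the first observation. The function name and the representation of intervals as `(u, v)` pairs are my own choices, not anything fixed by the problem statement:

```python
import random

def rejection_sample_disjoint(intervals):
    """Sample a uniformly random ordered pair (i, j) with
    intervals[i] and intervals[j] disjoint, assuming every
    interval is disjoint from at least one other interval."""
    m = len(intervals)
    while True:
        i = random.randrange(m)
        j = random.randrange(m)
        if i == j:
            continue
        (u1, v1), (u2, v2) = intervals[i], intervals[j]
        # Half-open intervals [u1, v1) and [u2, v2) are disjoint
        # exactly when one ends at or before the other starts.
        if v1 <= u2 or v2 <= u1:
            return (i, j)
```

Under the stated assumption each iteration accepts with probability at least \(1/(m-1)\), so the expected number of iterations is \(O(m)\).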

To unpack the second observation: if there is no root interval, take some \(L_i\) and let \(M\) be a maximal-length interval containing it (possibly \(L_i\) itself). Since \(M\) is not a root interval, there is some \(L_j\) not contained in \(M\); by maximality \(L_j\) cannot contain \(M\) either, so by the nesting property the two are disjoint. Such an \(L_j\) is necessarily disjoint from \(L_i\).

It’s easy to find a root interval in \(O(m)\), but removing it might introduce a new root interval (in the degenerate case the whole list could be a chain of the form \(L_i = [i, m)\)).

Here’s how to simultaneously remove all root intervals in \(O(m \log(m))\):

First note that a root interval is necessarily of maximum length. Thus if we sort the intervals in order of decreasing length (that’s the \(O(m \log(m))\) bit – you can do it in \(O(m)\) if the interval boundaries are not-too-large integers, using bucket sort), the root intervals must form some initial prefix of the list. By renumbering where necessary, let’s assume that the \(L_i\) are sorted in that order.

Now calculate the intervals \(B_i = [\min\limits_{k \geq i} u_k, \max\limits_{k \geq i} v_k)\). These are the bounding intervals of the tail sets and can easily be worked out in \(O(m)\) by just iterating backwards over the list.
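The backward pass can be sketched as follows (a Python sketch with names of my own choosing; intervals are `(u, v)` pairs assumed already sorted in order of decreasing length):

```python
def tail_bounds(intervals):
    """Compute B_i, the bounding interval of intervals[i:], for
    every i, in O(m) with a single backward pass."""
    m = len(intervals)
    bounds = [None] * m
    lo, hi = intervals[m - 1]
    for i in range(m - 1, -1, -1):
        u, v = intervals[i]
        # Extend the running bound to cover interval i as well.
        lo, hi = min(lo, u), max(hi, v)
        bounds[i] = (lo, hi)
    return bounds
```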

\(L_i\) contains all \(L_k\) with \(k \geq i\) if and only if it contains \(B_i\). So we find the first index that is not a root index just by iterating forward through the \(L_i\) until \(B_i\) is not contained in \(L_i\).

We now take all intervals from that point and discard everything earlier. If the set is now empty, we raise an error because there are no disjoint pairs. If it’s not empty, we can now run the rejection sampling algorithm on it as desired and have it terminate in \(O(m)\).
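Putting the preprocessing steps together, a self-contained sketch might look like this (Python again, with names of my own invention; the rejection-sampling loop would then run over the surviving indices):

```python
def strip_roots(intervals):
    """Return the indices (into the original list) that survive
    after removing the prefix of root intervals, or raise
    ValueError if no disjoint pairs exist.

    `intervals` is a list of (u, v) half-open interval pairs."""
    m = len(intervals)
    # Sort indices by decreasing interval length: the O(m log m) bit.
    order = sorted(range(m),
                   key=lambda k: intervals[k][1] - intervals[k][0],
                   reverse=True)
    # Backward pass: bounding interval of each tail of the sorted list.
    bounds = [None] * m
    lo, hi = intervals[order[-1]]
    for pos in range(m - 1, -1, -1):
        u, v = intervals[order[pos]]
        lo, hi = min(lo, u), max(hi, v)
        bounds[pos] = (lo, hi)
    # Forward pass: a root at position pos contains the bounding
    # interval of the tail starting at pos. Skip past all roots.
    start = 0
    while start < m:
        u, v = intervals[order[start]]
        blo, bhi = bounds[start]
        if not (u <= blo and bhi <= v):
            break
        start += 1
    if start == m:
        # Everything was a chain of roots: no disjoint pairs.
        raise ValueError("no disjoint pairs exist")
    return order[start:]
```

Note that a singleton set always looks like a root (it contains its own bounding interval), so the surviving set is either empty or has at least two members, matching the argument above.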

I think there’s another solution that may sometimes be better, involving explicitly tracking the tree structure and counting the number of disjoint pairs that live in each sub-interval, but it fell apart the more I looked at it and I wasn’t convinced it was ever actually better.


The Clarke Gap

I coined a term on Twitter earlier and almost immediately found it useful, so I thought I’d register it somewhere a bit more permanent.

The “Clarke Gap” of a technology is the degree to which it is distinguishable from magic.

(The term of course comes from Clarke’s third law that any sufficiently advanced technology is indistinguishable from magic, or more specifically from its corollary that any piece of technology distinguishable from magic is insufficiently advanced.)

The Clarke gap is characterised mostly by how likely something is to go wrong in a way that requires you to understand what the system is actually doing in order to fix it. The more likely it is, the larger the Clarke gap is. If something always works and you never need to understand it, it might as well be magic. If you have to open the hood every five minutes and poke the bits that have got misaligned, it’s obviously technology. Most things are somewhere in between.

It’s not that things with a low Clarke gap don’t go wrong, but when they go wrong they tend to do it in a way that is either easy or extremely hard to fix. A computer, for example, has a fairly low Clarke gap: when it goes seriously wrong you need a lot of technical expertise to figure out what’s happened and why, but computers go wrong all the time and mostly you can just turn it off and on again (either the whole computer or the misbehaving program) and it will work.

A low Clarke gap is neither wholly good nor wholly bad. It’s mostly about which cases you’re optimising for: systems with a low Clarke gap tend to be much easier to learn at a casual usage level, but much harder to acquire expertise in.

For any given class of technology, the Clarke gap tends to go down over time. If you look at search engines now versus search engines twenty years ago, the Clarke gap has gone way down – a modern search engine is very much trying to guess your intent and thinks it knows better than you, while one from twenty years ago is basically a boolean query engine with maybe some spelling correction if you’re lucky. I suspect this has a correspondingly negative impact on people’s ability to pick up search skills – we’re so used to search being magically good that we don’t pick up the necessary knowledge about what to do when it’s not.

There are a number of factors that tend to push it this way. If users commonly experience a problem, the natural thing to do is to try to make that problem non-existent or easy to solve. This is good development practice! You’re making people’s lives easier. But by doing this you’re also removing a lot of intermediate ways in which the system can go wrong, and making the system more complex and thus increasing the number of more subtle ways it has to fail. This lowers the Clarke gap.

I think this is a useful phenomenon to pay attention to, and consequently to have a term for, so I’m probably going to keep using the term until a better one comes around.


The Great Asshole Filter

(Content note: I try to keep my blog significantly more profanity light than my normal vocabulary, but “asshole” is being used as a term of art in one of the links this post is based on, and I will follow that usage).

As you’ve doubtless heard by now, Patreon are currently tilling their users into the soil with a major change that largely screws over people who rely on the $1-2/month donations (which is largely how I donate on Patreon).

Christie Koehler offers an extremely lucid explanation for why they’re doing this: It’s to avoid financial regulations that would cause them to be classed as a “money transmitter”, which would be significantly more onerous. They are not explaining this because if they admit to knowing it then it potentially increases their liability.

How did they get away without being classed as a money services business so far? Well, basically by ignoring the problem and hoping the problem would ignore them. It’s the standard startup thing: ignore the regulations until you can’t, then either comply with or attempt to change them.

I’ve been cooling a bit on regulation over the last couple of years. I still fundamentally believe that we need something to fetter the runaway optimisation process that is the free market (though I’m definitely also not on team full communism now), but I think it’s increasingly clear – at least in the UK and the USA; I’m less clued in on politics elsewhere, though even where it seems better I’m not sure it seems better enough – that whatever we’re doing right now isn’t working, and that the current generation of governments cannot be trusted to create regulation (see: attempts to regulate crypto by mandating back doors).

Anyway, all these thoughts were bouncing around in my head and they latched on to something seemingly unrelated that I read a while ago (and very strongly agree with): The idea of an asshole filter (Ironically, I support the author on Patreon at a $2 level, to publish on Live Journal, another platform with a history of tilling their users into the soil).

The idea of the asshole filter is this: If you have strongly defined boundaries which you erratically enforce, then you will mostly be surrounded by people who are willing to violate those boundaries.

And it occurred to me that regulation and startup culture are currently interacting in exactly this way, especially in anything remotely fintech or employment related: There are large regulations which it is profitable to ignore, and we are erratically enforcing them. Thus many of the startups who succeed are the ones who are prepared to violate boundaries for as long as they can get away with it.

This isn’t a Fully General Theory Of Why Startups Are Mostly Assholes – e.g. I’m not currently aware of any regulations Twitter is skirting the boundary of (though I suspect that a lot of news agencies are probably using it to do an end run around various social norms of reporting if not regulations – it’s basically an environment where they can lie as much as they do in the headlines and get away with it because character limit. Maybe 280 will help with that? Note: 280 won’t help with that) – but I suspect it accounts for a lot of the worst behaviour. If we create an environment where only assholes can win, we probably shouldn’t be surprised when assholes take advantage of that to win.

I’m not sure what to do about this. In many ways the fact that current regulations are not enforced on small businesses is a feature – requiring full regulatory compliance from day one basically means you’re never going to successfully start a business due to the level of overhead required to comply with it. Sometimes this is even formalised in terms of size requirements on businesses, which would at least help make regulation less of an asshole filter though wouldn’t do anything to stop users getting tilled into the soil as soon as the regulation kicks in.

In theory this could be fixed with better regulation (maybe with strictly less regulation, but it’s not like I trust the banks to behave themselves if unregulated either), but right now I don’t trust our governments to do anything that is obviously in people’s best interests, let alone things that require a complex argument based on social dynamics to even notice might be an improvement. I also have literally no control over what our government does despite ostensibly living in a democracy, so even if this were a solution, there wouldn’t be anything I could do about it.

So the best I can offer is a word of warning: that hot new startup that is doing something nobody else is offering despite there being many players in a similar space? It’s probably because they’re assholes, and even if they’re not right now, success is probably going to force them to till you into the soil.

That doesn’t mean you can’t use them, but maybe you shouldn’t unless you want to look forward to becoming fertiliser.


Non-fiction

I’m going to be experimenting with a thing this December: I will be (mostly) giving up consuming fiction.

What this means:

  • I’m not going to read any fiction books
  • I’m not going to watch any TV or movies
  • I’m not going to play any video games

I’m going to allow myself the following exceptions:

  • I have a couple of updating serials I read (webcomics, a few web novels) that I will continue to keep up with as they come out (but won’t e.g. go back and read archives of)
  • I will probably watch the Dr Who Christmas Special.

Also board games are fine, as they’re fundamentally a social activity rather than novelty on tap like the above.

This isn’t a firm commitment. I might give up if I find it too hellish. It’s just an experiment. I’m writing about it partly for general interest and partly to make myself slightly more likely to stick to it.

I’m not doing this because I think there’s anything wrong with these things. I’m certainly not doing it with an intent to make this a permanent change, yikes.

The reason I’m doing this is that these are things that I can spend an unbounded amount of time on: If I’m not quite sure what to do, I’ll read some fiction. Sometimes I’ll watch TV. This means that it’s easy to put things off that I don’t want to do because there’s always something I can procrastinate with.

For example, I have been intending for literally six months to start meditating. I have books on it and everything. Have I started? Ha, no. I’ve dug my heels in so hard.

And one of the reasons for that is that it’s always something I can do later, that I don’t want to do now. Maybe not having something to fill my time with will bore me into submission and make me more likely to try it.

I’d also like to get back to writing more. Despite this being my second blog post today, I’ve let the blog lapse a bit since starting my PhD. Some of that is because I now have other writing commitments, but I’d still like to fix that.

In general I would like to see what I fill my time with when I do not have an automatic source of things to reach for. I don’t expect the answer will be especially high value – I suspect it’s mostly going to be Twitter and annoying the cats – but it will be interesting to see.

Update: This turned out to be very poorly timed. I’ve given up on this for now, but will attempt to resume at some point in the new year. It seemed to work very well for a few days before disruption started, but I think needs better planning.
