
Novelty Requires Explanation

Epistemic status: Reasonably confident, but I should probably try to back this up with numbers about how often elementary results actually do get missed.

Attention conservation notice: More than a little rambling.

Fairly regularly you see news articles about how some long-standing problem that has stumped experts for years has been solved, usually with some nice simple solution.

This might be a proof of some mathematical result, a translation of the Voynich manuscript, a theory of everything. Those are the main ones I see, but I’m sure there are many others that I don’t see.

These are almost always wrong, and I don’t even bother reading them any more.

The reason is this: If something is both novel and interesting, it requires an explanation: Why has nobody thought of this before?

Typically, these crackpot solutions (where they’re not entirely nonsensical) are so elementary that someone would surely have discovered them before now.

Even for non-crackpot ideas, I think this question is worth asking when you discover something new. As well as being a useful validity check for finding errors and problems, if there is a good answer then it can often be enlightening about the problem space.

Potentially, it could also be used as a heuristic in the other direction: If you want to discover something new, look in places where you would have a good answer to this question.

There are a couple of ways this can play out, but most of them boil down to numbers: If a lot of people have been working on a problem for a long time during which they could have discovered your solution, they probably would have. As nice as it would be to believe that we were uniquely clever compared to everyone else, that is rarely the case.

So an explanation basically needs to show some combination of:

  1. Why not many people were working on the problem
  2. Why the time period during which they could have discovered your technique is small

The first is often a bad sign! If not many people work on the problem, it might not be very interesting.

This could also be a case of bad incentives. For example, I’ve discovered a bunch of new things about test case reduction, and I’m pretty sure most of that is because not many people work on test case reduction: It’s a useful tool (and I think the problem is interesting!), but it’s a very niche problem at a weird intersection of practical needs and academic research where neither side has much of a good incentive to work on it.

As a result, I wouldn’t be surprised if an appreciable percentage of the person-hours ever spent on test-case reduction were mine! Probably not 10%, but maybe somewhere in the region of 1-5%. This makes it not very surprising for me to have discovered new things about it, even though the end result is useful.

More often I find that I’m just interested in weird things that nobody else cares about, which can be quite frustrating and can make it difficult to get other people excited about your novel thing. If that’s the case, you’re probably going to have a harder time marketing your novel idea than discovering it.

The more interesting category of problem is the second: Why have the people who are already working on this area not previously thought of this?

The easiest way out of this is simply incremental progress: If you’re building on some recent discovery, then there just hasn’t been that much time for anyone else to follow it up, so you’ve got a reasonable chance of being the first!

Another way is by using knowledge that they were unlikely to have – for example, by applying techniques from another discipline that has little overlap in practice with the one the problem is from. Academia is often surprisingly siloed (but if the problem is big enough and the cross-disciplinary material is elementary enough, this probably isn’t sufficient. It’s not that siloed).

An example of this seems to be Thomas Royen’s recentish proof of the Gaussian Correlation Inequality (disclaimer: I don’t actually understand this work). He applied some fairly hairy technical results that few people working on the problem were likely to be familiar with, and as a result was able to solve something people had been working on for more than 50 years.

A third category of solution is to argue that everyone else had a good chance of giving up before finding your solution: e.g. if the solution is very complicated or involved, it has a much higher chance of being novel (and also a much higher chance of being wrong, of course)! Another way this can happen is if the approach looks discouraging in some way.

Sometimes all of these combine. For example, I think the core design of Hypothesis is a very simple, elegant idea that just doesn’t seem to have been implemented before (I’ve had a few people dismissively tell me they’ve encountered the concept before, but they could never point me to a working implementation).

I think there are a couple reasons for this:

  1. Property-based testing just doesn’t have that many people working on it. The number might top 100, but I’d be surprised if it topped 200 (Other random testing approaches could benefit from this approach, but not nearly as much. Property-based testing implements lots of tiny generators and thus feels many of the problems more acutely).
  2. Depending on how you count, there’s maybe been 20 years during which this design could have been invented.
  3. Simple attempts at this approach work very badly indeed (In a forthcoming paper I have a hilarious experiment in which I show that something only slightly simpler than what we do completely and totally fails to work on the simplest possible benchmark).

So there aren’t that many people working on this, they haven’t had that much time to work on it, and if they’d tried it it probably would have looked extremely discouraging.

In contrast I have spent a surprising amount of time on it (largely because I wanted to, and didn’t care about money or academic publishing incentives), and I came at it the long way around, starting from a system I knew worked. So it’s not that surprising that I was able to find it when nobody else had (and no “I’m so clever” explanation is required).

In general there is of course no reason that there has to be a good explanation of why something hasn’t been discovered before. There’s no hard cut-off line where something goes from “logically must have been discovered” to “it’s completely plausible that you’re the first” (discontinuous functions don’t exist!); it’s just a matter of probabilities. Maybe it’s very likely that nobody has discovered it before, or maybe you just got lucky. There are enough novel things out there that somebody is going to get lucky on a fairly regular basis; it’s probably just best not to count on it being you.

PS. I think it very unlikely this point is novel, and I probably even explicitly got it from somewhere else and forgot where. Not everything has to be novel to be worthwhile.


Truth and/or Justice

Disclaimer: This post is obscure in places. I apologise for that. Reasons.

Everyone likes to think they’re the protagonist of their own story. And as a hero, we need a cause. On the left and those morally aligned with the left, that can often roughly be summed up as “Truth and Justice” (we generally leave The American Way to people who wear their underpants on the outside).

(Some people are more honest, or less moral, and instead are fighting for survival, or for self-interest. This post is not about those people. Some people are less morally aligned with the left and fight for things like purity and respect for authority too. As a devout moral relativist, I acknowledge the validity of this position, but I still struggle to take it seriously. If this is you, you may get less out of this post but the broad strokes should still apply).

Unfortunately, you can’t optimise for more than one thing. Truth and Justice is not a thing you can fight for. You can fight for truth, you can fight for justice, you can fight for a weighted sum of truth and justice, but you cannot reliably improve both at once.

Often we can ignore this problem. Truth and Justice frequently work very well together. But at some point (hopefully metaphorically) the murderer is going to turn up at your door and you’re going to have to decide whether or not to lie to them. Unless you’re prepared to go full Kant and argue that lying to protect another is unjust as well as untrue, you’ll have to make a trade off.

So what’s it going to be? Truth or Justice?

It’s not an absolute choice of course – depending on the circumstances and the results of a local cost/benefit analysis, almost all of us will sometimes choose truth, sometimes justice, and sometimes we’ll make a half-arsed compromise between the two which leaves both truth and justice grumbling but mostly unbloodied.

But I think most of us have a strong bias one way or the other. This may not be inherent – it’s probably in large part driven by factionalisation of the discourse space – but certainly among my main intellectual and political influences there’s at least one group who heavily prioritises truth and another who heavily prioritises justice.

That’s not to say we don’t care about the other. Caring about justice doesn’t make you a liar, caring about truth doesn’t make you heartless. It’s just that we care about both, but we care about one more.

Personally, I tend to flip flop on it. I find myself naturally aligned with truth (I hate lying, both being lied to and lying myself), but I think I’ve historically chosen to align myself with people who prefer justice, usually by denying that the trade-off exists.

But recently I’ve been feeling the pendulum swing the other way a bit. If you’ve wondered why I’ve gone quiet on a bunch of subjects, that’s part of what’s been going on.

One of the reasons I think about this a bunch is in the context of labelling.

A long time ago now I wrote “You Are Not Your Labels”, about the problem of fuzzy boundaries and how we tend to pick a particular region in the space of possibility that includes us, use a label for that region, and then defend the boundaries of that label zealously.

I still broadly stand by this. You are not your labels. I’m certainly not my labels.

But we might be.

One of the places where truth and justice play off against each other is when you’re being attacked. If you’re under fire, now is not really the time to go “Well the reality is super complicated and I don’t really understand it but I’m pretty sure that what you’re saying is not true”. Instead, we pick an approximation we can live with for now and double down on it with a high degree of confidence.

There probably isn’t “really” such a thing as a bisexual (I’m not even wholly convinced there’s such a thing as a monosexual) – there’s a continuous multi-dimensional space in which everyone lies, and we find it operationally useful to have words that describe where we are relative to some of the boundary points in that space that almost nobody experiences perfectly.

There are as many distinct experiences of being bisexual as there are bisexuals (though, as I keep finding out, being extremely confused and annoyed by this fact seems to be a pretty common experience for us), but it sure is difficult to have an “It’s Complicated” visibility day, and it seems surprisingly easy for people to forget we exist without regular reminders.

The approximation isn’t just useful for communicating infinite complexity in a finite amount of time, it’s useful because we build solidarity around those approximations.

(This is literally why I use the label bisexual incidentally. I’m much happier with just saying “It’s complicated and unlikely to impact your life either way and when it does I would be happy to give you a detailed explanation of my preferences” but that is less useful to both me and everyone else, so I no longer do)

Another truth/justice trade off in the LGBT space is “Born this way”. I am at this point confident of precisely two things about gender and sexuality:

  • They are probably the byproduct of some extremely complicated set of nature/nurture interactions like literally everything else in the human experience.
  • Anyone who currently expresses confidence that they know how those play out in practice might be right for the n=1 sample of themself (I am generally very skeptical of people’s claims that they understand what features are natural things they were born with and what are part of their upbringing. I present the entire feminist literature on privilege as evidence in defence of my skepticism, but I also don’t find it useful or polite to have arguments with people about their personal lived experiences), but are almost certainly wrong, or at least unsupported in their claim that this holds in generality.

I would be very surprised to learn that nobody was born this way, and I have an n=1 personal data point that there are bisexuals who would probably have been perfectly able to go through life considering themselves to be straight if they hadn’t noticed that other options were available. I think it likely that there’s a spectrum of variability in between, I just don’t know.

I think among ourselves most LGBT people are more than happy to admit that this stuff is complicated and we don’t understand it, but when confronted with people who just want us to be straight and cis and consider us deviants if we dare to differ on this point, born this way is very useful – it makes homophobia less a demand to conform to societal expectations (which would still be wrong, but is harder to convince people of) and more a call for genocide. The only way to stop LGBT being LGBT is to stop us existing, and that’s not what you mean, right?

(Historically there have been many cases where that was exactly what they meant, but these days it’s harder to get away with saying so even if you think it).

Even before the latest round of fake news we’ve had in the last couple of years, demanding perfect truth in politics seems like a great way to ensure that political change belongs to those less scrupulous than you. At the absolute minimum we need this sort of lies-to-normies to take complex issues and make them politically useful if we want the world to get better.

So: Truth or Justice?

To be honest, I still don’t know. My heart says truth, but my head says justice, which I’m almost certain is hilariously backwards and not how it’s supposed to work at all, but there you go. This is complicated, and maybe “Truth or Justice” is another of those labelling things that don’t really work for me. Hoisted by my own petard.

My suspicion though is that the world is a better place if not everyone is picking the exact same trade off – different people are differently placed for improving each, and it’s not all that useful to insist that someone inclined towards one should be strongly prioritising the other. It is useful to have both people for whom justice is their top priority, and people for whom truth is their top priority, and a world where we acknowledge only one point on the spectrum as valid is probably one that ends up with less truth and less justice than one where a wider variety is pursued. Monocultures just generally work less well in the long run, even by the standards of the monoculture.

Given that, it seems like a shame that right now most of the justice prioritising people seem to think the truth prioritising people are literally Hitler and vice versa.

(To be fair, the truth prioritising people probably think the justice prioritising people are figuratively Hitler).

Calls for “Why can’t we get along?” never go well, so I won’t make one here, even though you could obviously read into this article that that’s what I want even without this sentence as a disclaimer. Instead, I’ll end with a different call to action.

I wish we would all be better about acknowledging that this trade-off exists, and notice when we are making it, regardless of what we end up deciding about the people who have chosen a different trade-off.

If you’re justice-prioritising you might not feel able to do that in public because it would detract from your goals. That’s fine. Do it in private – with a couple close friends in the same sphere to start with. I’ve found people are generally much more receptive to it than you might think.

If you’re truth-prioritising, you have no excuse. Start talking about this in public more (some of you already are, I know). If what can be destroyed by the truth should be, there is no cost to acknowledging that the truth is sometimes harmful to others and that this is a trade-off you’re deliberately making.

Regardless of what we think the optimal trade-off between truth and justice is, I’m pretty sure a world that is better on both axes than the current one is possible. I’m significantly less sure that we’re on anything resembling a path to it, and I don’t know how to fix that, but I’d like to at least make sure we’re framing the problem correctly.

On Efficiency

Epistemic status: I’m reasonably confident this is how the world works, but I doubt I’m saying anything very new.

Suppose you’re writing some software and you need it to run as fast as possible. What happens?

Well, often what happens is that the software becomes much more complex. Code that has been heavily optimised for performance is typically much less readable and comprehensible than the naive, simple version – a myriad of special cases and tricks are deployed in order to squeeze out every drop of performance.

That means it’s also more likely to be buggy. Complex software is harder to test and reason about, so there are many more places for bugs to hide.

Many of these bugs are likely to be security bugs – you omit checks that slow things down, you rewrite it in an unsafe language like C or C++ to squeeze out the maximum performance. In doing so, many of the things that normally guard the security of your software end up getting lost.

It also takes longer and costs more to develop. Optimisation is a hard problem that requires skill and effort and endless experimentation to get right, as well as throwing away many of the conveniences that allow you to develop software quickly.

None of this is to say that optimising software for performance is bad. Sometimes you need performance and there’s nothing else for it. The point is simply that in optimising for performance, other things suffer.

That’s not to say that all of these things have to suffer. You can optimise for some combination of performance and correctness, but you’ll probably still get slightly worse performance and slightly worse correctness than if you optimised for either on its own, and you’ll definitely get much higher cost.

When optimising for anything, some rules apply:

  1. You cannot generically optimise. Optimisation is always towards a particular target.
  2. You cannot optimise for more than one thing.
  3. You can optimise for “one thing” that is a combination of multiple things.
  4. You can also optimise for one thing given some constraints on other things.
  5. Adding more factors to your optimisation makes the problem harder and causes all of the factors to do less well.
  6. Adding more constraints to your optimisation makes the problem harder and makes the optimisation target do less well.
  7. The harder the optimisation problem, the more the factors not included in it will tend to suffer.
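
To make rules 2, 3 and 5 concrete, here’s a minimal numerical sketch. The two objectives are arbitrary stand-ins I’ve made up – think “slowness” and “bugginess”, lower is better for both:

```python
# A purely numerical illustration of rules 2, 3 and 5, with invented
# objectives: optimising a blend of two things does worse on each of
# them than optimising either one alone.

def slowness(x: float) -> float:
    return (x - 0.0) ** 2  # performance is perfect at x = 0

def bugginess(x: float) -> float:
    return (x - 2.0) ** 2  # correctness is perfect at x = 2

def argmin(objective, lo=-1.0, hi=3.0, steps=4001):
    """Crude grid search; good enough for a one-dimensional toy."""
    xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(xs, key=objective)

for name, objective in [
    ("performance only", slowness),
    ("correctness only", bugginess),
    ("50/50 blend", lambda x: 0.5 * slowness(x) + 0.5 * bugginess(x)),
]:
    x = argmin(objective)
    print(f"{name:16s}: x = {x:.2f}, "
          f"slowness = {slowness(x):.2f}, bugginess = {bugginess(x):.2f}")
```

The blend’s optimum (x = 1, scoring 1 on each axis) is the best available compromise, but it is strictly worse on each individual axis than that axis’s dedicated optimum; adding a constraint (say, bugginess below 0.5) would push the achievable slowness up further, which is rules 4 and 6 in miniature.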

Better optimisation processes can decrease the amount of “wasted” effort you’re expending, but even a perfect optimisation process can only move effort around, so if you’re optimising for something by focusing effort on it, that effort has to come from somewhere and that somewhere will tend to suffer – sometimes work is genuinely wasted, but often it’s only wasted in the sense that you decided you didn’t care about what it was doing.

Less optimised processes are often more robust, because they have a lot of slack that gives room for informal structures and individual decision making. As you optimise, and make the process more legible in aid of that optimisation, that slack goes away and many of the things that you didn’t even know your system was doing go away and the system fails disastrously because one of those things turns out to have been vital.

You can see this writ large in society too.

Consider the internet of things. People are optimising for profit, so they focus on things that affect that: shininess, cost, time to market, etc.

Security? Security doesn’t affect profit much, because people don’t really care about it until well after they’ve given you money. Which is why the internet of things has become a great source of botnets.

This is a problem in general with market-based economies. In economics terms, these extra factors are externalities – things that we declare to be somebody else’s problem that we don’t have to pay for.

Much is made of the efficient-market hypothesis – the idea that markets have strong predictive power by aggregating everyone’s information. This is then often used to conclude that markets are the optimal resource allocation mechanism.

Depending on where you are in the political space, people often then conclude either “And that’s why free-market economies rule and politics should stop interfering with the market” or “And that’s why the efficient market hypothesis is false”, or some more complex variant of one of the two.

I think some form of the efficient market hypothesis is probably true, and markets are probably a pretty good optimiser for the resource allocation problem. I doubt very much that they’re perfect, but they’re probably better than most of the alternatives we have.

But remember the rules? You can’t optimise without knowing what you’re optimising for, and you can’t optimise for more than one thing.

What markets optimise for in their default state is, more or less, some money-weighted average of people’s preferences.

A lot of the critique from those of us on the left is really objecting to that “money weighted” part, which is fair enough (or at least, fair enough when we’ve got an income distribution as unequal as we currently do), but I’d like to focus on the second part. Pretend we’ve solved the first with a universal basic income or something if you need to.

The big problem with optimising for people’s preferences is that people’s preferences are often bad.

I don’t mean this in some general objective moral sense. I mean this in the sense that often if we got more information and sat down and thought about it properly we would come to a different conclusion than we would naturally.

But we’re faced with hundreds of small decisions a day, and we can’t possibly take the time to sit down properly and think about all of them. So we mostly rely on heuristics we can apply quickly instead.

And often those are the preferences that the market ends up optimising for.

This IoT lightbulb costs £15 while this one costs £20? No obvious difference? Well, obviously I’ll buy the £15 one. Who doesn’t like saving money?

Turns out the extra cost of the £20 one went to pay for their security budget, to make sure their workers got living wages and good safety conditions, etc.

Unfortunately the market optimised away from that because those terms weren’t in the optimisation target and cost was. Maybe if people had been fully aware of those factors some of them would have switched to the more expensive light bulb, but most people aren’t aware and many of those who are would still decide they didn’t care that much.
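
Here’s that failure mode as toy arithmetic (the £30 botnet-cleanup figure is invented): the buyer minimises the number they can see, while society pays the total.

```python
# Invented numbers: the price the buyer sees at the till, and the
# security cost somebody (everybody) pays later. The market optimises
# the first column because the second isn't in anyone's purchase decision.

bulbs = {
    "£15 bulb": {"price": 15, "later_security_cost": 30},  # botnet fodder
    "£20 bulb": {"price": 20, "later_security_cost": 0},
}

market_choice = min(bulbs, key=lambda b: bulbs[b]["price"])
socially_cheapest = min(
    bulbs, key=lambda b: bulbs[b]["price"] + bulbs[b]["later_security_cost"]
)

print(f"what the market picks:  {market_choice}")       # £15 bulb
print(f"cheapest in total cost: {socially_cheapest}")   # £20 bulb
```

Liability laws and regulation, discussed below, are essentially ways of moving that second column into the first.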

And this is more or less inevitable. It is neither practical nor desirable to expect people to think through the long-term global implications of every decision they make (though it sure would be nice if people were at least a bit better at doing this), because expecting them to be fully aware of the consequences of each decision takes currently inconsequential decisions and makes them high effort. Given the number of decisions we have to make, this adds up very quickly.

We can fix this to some degree by adding more terms to the optimisation problem – e.g. government spending on projects, liability laws for what happens when you have security problems, etc. We can similarly fix this to some degree by adding constraints – adding security regulations, workplace safety regulations, minimum wages, etc.

We can’t fix this by hoping the market will self-correct, because the market isn’t wrong. It’s working as intended.

So, by their nature, all attempts to fix this will make the market less efficient. People will pay more for the same things, because if you add new terms or constraints then your optimisation target suffers.

Free-market enthusiasts tend to use that as an argument against regulation, and to some extent it is – the market is probably better at optimising than you are, and the effects of these changes can be hard to predict. Often it will route around them because it’s easier to do something you didn’t think of than it is to do what you want. So what you will often get is a system that is less efficient and still doesn’t do what you want.

But it’s also an argument for regulation. Yes it makes markets less efficient, but when the problem is that markets are optimising too heavily for the wrong thing, making them less efficient is the point. Regulating market behaviour may be hard, but fixing people’s preferences is for all intents and purposes impossible.

We need to treat the market with caution, and to be prepared to experiment before settling on a solution. I’m not at all sure today’s regulatory environments are even close to up to the task, but as long as we are ruthlessly optimising for a goal that ignores important factors, those factors will suffer and then in the long run we will all suffer the consequences.

We could also seek another way. If efficiency is the problem and not the solution, perhaps we need to step back from the optimiser altogether and find a less market-dominated approach. More and better democracy would perhaps be a good starting point, but if anything we seem to find it harder to fix democratic institutions than economic ones, so I’m not terribly optimistic.

I’d like to end this post with a simple proposal to fix the problem of resource allocation and magically solve economics and democracy, but unfortunately I can’t. These problems are hard – I often refer to the issue of how to coordinate decision making in large groups of people as the fundamental problem of civilisation, and I’m not optimistic about us ever solving it.

But I’d at least like us to start having better arguments about it, and I think the starting point is to better understand the nature of the problem, and to admit that none of our existing solutions solve it perfectly.


An epistemic vicious circle

Let’s start with an apology: This blog post will not contain any concrete examples of what I want to talk about. Please don’t ask me to give examples. I will also moderate out any concrete examples in the comments. Sorry.

Hopefully the reasons for this will become clear and you can fill in the blanks with examples from your own experience.

There’s a pattern I’ve been noticing for a while, but it happens that three separate examples of it came up recently (only one of which involved me directly).

Suppose there are two groups. Let’s call them the Eagles and the Rattlers. Suppose further that the two groups are roughly evenly split.

Now suppose there is some action, or fact, on which people disagree. Let’s call the two positions blue and orange.

One thing is clear: If you are a Rattler, you prefer orange.

If you are an Eagle however, opinions are somewhat divided. Maybe due to differing values, or different experiences, or differing levels of having thought about the problem. It doesn’t matter. All that matters is that there is a split of opinions, and it doesn’t skew too heavily orange. Let’s say it’s 50/50 to start off with.

Now, suppose you encounter someone you don’t know and they are advocating for orange. What do you assume?

Well, it’s pretty likely that they’re a Rattler, right? 100% of Rattlers like orange, and 50% of Eagles do, so there’s a two thirds chance that a randomly picked orange advocate will be Rattler. Bayes’ theorem in action, but most people are capable of doing this one intuitively.
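
For anyone who wants the theorem written out (R for Rattler, E for Eagle, using the 50/50 group split and the 50/50 split of Eagle opinion from above):

```latex
P(R \mid \text{orange})
  = \frac{P(\text{orange} \mid R)\,P(R)}
         {P(\text{orange} \mid R)\,P(R) + P(\text{orange} \mid E)\,P(E)}
  = \frac{1 \cdot 0.5}{1 \cdot 0.5 + 0.5 \cdot 0.5}
  = \frac{2}{3}
```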

And thus if you happen to be an Eagle who likes orange, you have to put in extra effort every time the subject comes up to demonstrate that. It’ll usually work – the evidence against you isn’t that strong – but sometimes you’ll run into someone who feels really strongly about the blue/orange divide and be unable to convince them that you want orange for purely virtuous reasons. Even when it’s not that bad it adds extra friction to the interaction.

And that means that if you don’t care that much about the blue/orange split you’ll just… stop talking about it. It’s not worth the extra effort, so when the subject comes up you’ll just smile and nod or change it.

Which, of course, brings down the percentage of Eagles you hear advocating for orange.

So now if you encounter an orange advocate they’re more likely to be Rattler. Say 70% chance.

Which in turn raises the amount of effort required to demonstrate that you, the virtuous orange advocate, are not in fact Rattler. Which raises the threshold of how much you have to care about the issue, which reduces the fraction of Eagles who talk in favour of orange, which raises the chance that an orange advocate is actually Rattler, etc.
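
Here’s that spiral as a minimal sketch in code. The Bayesian update is the real one from above; the response function – how many Eagles still bother advocating at a given level of suspicion – is entirely invented, and only its direction matters:

```python
# Toy model of the vicious circle. The "willingness to advocate"
# response is invented; only the structure is real: higher suspicion
# -> fewer Eagles advocate orange -> higher suspicion.

def p_rattler_given_orange(eagle_advocacy: float) -> float:
    """Bayes with a 50/50 Eagle/Rattler split; all Rattlers advocate orange."""
    return 1.0 / (1.0 + eagle_advocacy)

eagle_advocacy = 0.5  # initially half of Eagles speak up for orange
for step in range(6):
    p = p_rattler_given_orange(eagle_advocacy)
    # Invented response: advocacy falls as the chance of being taken
    # for a Rattler rises.
    eagle_advocacy = 0.5 * (1.0 - p)
    print(f"round {step}: P(Rattler | orange) = {p:.2f}, "
          f"Eagles still advocating: {eagle_advocacy:.1%}")
```

With these made-up numbers it takes about five rounds for “orange advocate” to become near-certain evidence of being a Rattler, which is the ceded-ground outcome described next.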

The result is that when the other side is united on an issue and your side is divided, you effectively mostly cede an option to the other side: Eventually the evidence that someone advocating for that option is a Rattler is so overwhelming that only weird niche people who have some particularly strong reason for advocating for orange despite being an Eagle will continue to argue the cause.

And they’re weird and niche, so we don’t mind ostracising them and considering them honorary Rattlers (the real Rattlers hate them too of course, because they still look like Eagles by some other criteria).

As you can probably infer from the fact that I’m writing this post, I think this scenario is bad.

It’s bad for a number of reasons, but one very simple reason dominates for me: Sometimes Rattlers are right (usually, but not always, for the wrong reasons).

I think this most often happens when the groups are divided on some value where Eagles care strongly about it, but Rattlers don’t care about that value either way, and vice versa. Thus the disagreement between Rattler and Eagles is of a fundamentally different character: Blue is obviously detrimental to the Rattlers’ values, so they’re in favour of orange. Meanwhile the Eagles have a legitimate disagreement not over whether those values are good, but over the empirical claim of whether blue or orange will be better according to those values.

Reality is complicated, and complex systems behave in non-obvious ways. Often the obviously virtuous action has unfortunate perverse side effects that you didn’t anticipate. If you have ceded the ground to your opponent before you discover those side effects, you have now bound your hands and are unable to take what now turns out to be the correct path because only a Rattler would suggest that.

I do not have a good suggestion for how to solve this problem, except maybe to spend less time having conversations about controversial subjects with people whose virtue you are unsure of and to treat those you do have with more charity. A secondary implication of this suggestion is to spend less time on Twitter.

But I do think a good start is to be aware of this problem, notice when it is starting to happen, and explicitly call it out and point out that this is an issue that Eagles can disagree on. It won’t solve the problem of ground already ceded, but it will at least help to stop things getting worse.


Like my writing? Why not support me on Patreon! Especially if you want more posts like this one, because I mostly don’t write this sort of thing any more if I can possibly help it, but might start doing so again given a nudge from patrons.

Dividend paying sovereign wealth funds and decoupling effects

Epistemic status: Politics and economics are very far from being my area of expertise, and even to the extent that I understand them I suspect this post to be an over simplification. I think it contains the core of a good idea, but I have little to no hope of it being politically feasible.

I’ve been thinking a bunch about sovereign wealth funds recently (for values of recently that go back about three months).

The core idea of a sovereign wealth fund is that you create a state-owned investment fund and fill it with tax money. Often (Usually? Always?) they are required to invest only in things outside the country.

Normally the sovereign wealth fund seems to just sit there as basically a war chest for emergencies. It was considered big news when Norway actually spent some money from their sovereign wealth fund earlier this year.

I’ve been thinking about a different use case though, which is that of the dividend paying sovereign wealth fund – one where every resident of the country receives a monthly payment out of the fund. This isn’t without precedent – the Alaska Permanent Fund more or less works like this (it pays annually rather than monthly. I’m not wed to it being monthly but I think it’s a better idea for the benefit of poorer people with less reliable income sources).

Obviously doing this significantly reduces the existing benefits of the sovereign wealth fund – if you’re spending money from it on an ongoing basis then there will be less money in it for emergencies – but I think the benefit of paying everyone a monthly dividend significantly outweighs that.

The primary reason I’ve been thinking about this is as a transitional demand towards universal basic income. Over time as the size of the wealth fund increases (due to a mix of interest, a long history of paying in and possibly/probably increased taxation) the size of the dividends will go up and the monthly payment might start to look a lot like a basic income.
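
For a sense of the shape of this, here’s a back-of-envelope sketch. Every parameter in it is invented (population, pay-in, rate of return, payout rate); the point is only that steady pay-ins plus compounding returns make the sustainable dividend grow over time:

```python
# Back-of-envelope sketch of a dividend-paying fund. All parameters
# are invented for illustration.

population = 5_000_000   # invented
annual_pay_in = 10e9     # tax revenue paid into the fund each year, invented
annual_return = 0.04     # invented real rate of return
payout_rate = 0.03       # fraction of the fund paid out as dividends each year

fund = 0.0
for year in range(1, 31):
    fund = fund * (1 + annual_return) + annual_pay_in
    dividends = fund * payout_rate
    fund -= dividends
    if year in (1, 10, 20, 30):
        per_person_monthly = dividends / population / 12
        print(f"year {year:2d}: fund £{fund / 1e9:4.0f}bn, "
              f"dividend ≈ £{per_person_monthly:3.0f}/person/month")
```

Under these invented numbers the monthly payment starts negligible and grows by more than an order of magnitude over thirty years, which is the transitional dynamic I have in mind.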

This might even remove the need for a defined universal income altogether, but I’m less sure of that – a friend made the valid point that it’s a significantly less rights based approach, which has its downsides. On the other hand I suspect you’re much more easily able to argue for the rights based approach when you’re already effectively paying it out.

But I’ve been thinking about another possible benefit of the sovereign wealth fund, which is that with an additional bit of legislation it could potentially be used to smooth out some aspects of politics a lot.

The prompting thought for this is this Vox article about a carbon tax initiative opposed by the left. I don’t know to what extent the article is true, but it certainly sounds plausible: Essentially the initiative is being opposed because it is revenue neutral (it slashes an existing highly regressive tax and replaces it with a carbon tax) rather than putting the extra money into more progressive policies.

Based purely on this oversimplified description, I’m reasonably strongly on the side of the initiative, and probably would be even if my politics didn’t include a “Carbon Tax By Any Means Necessary” clause.

But in general this sort of coupling seems guaranteed to lead to a political deadlock: If you attach all your politics to all your other politics in an inseparable way, then you will be opposed by people who disagree with any of your politics. If you can decouple those somewhat then you potentially have the opportunity to find common ground that benefits everyone.

(In practice what seems to happen instead is some combination of horse trading and producing incredibly complicated bills that contain things that have nothing to do with each other as a way of striking compromises. This seems less than optimal to me, but maybe it’s secretly optimal in some way that totally makes sense if you’re a politics expert)

It occurred to me though that dividend paying sovereign wealth funds create an interesting way of achieving that decoupling.

What if you instituted a rule that any bill that gets passed must have at most one of the following three things:

  1. A change to the dividend rate of the sovereign wealth fund
  2. Changes (any combination of increases and decreases) to the tax that goes in to the sovereign wealth fund
  3. Additional expenditure that comes out of the dividends from the sovereign wealth fund

No additional expenditures or revenue sources that do not go via the sovereign wealth fund are now permitted.
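
One nice property of this rule is that it’s mechanical enough to state as a check. Here’s a sketch (the field names are hypothetical; this is just the rule above restated in code):

```python
# The "at most one of the three" rule, restated as a mechanical check.
# Field names are hypothetical, invented for illustration.

def bill_is_allowed(bill: dict) -> bool:
    touches = [
        bool(bill.get("dividend_rate_change")),
        bool(bill.get("tax_changes")),
        bool(bill.get("new_expenditures")),
    ]
    return sum(touches) <= 1

print(bill_is_allowed({"tax_changes": ["carbon tax"]}))            # True
print(bill_is_allowed({"tax_changes": ["carbon tax"],
                       "new_expenditures": ["transit subsidy"]}))  # False
```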

The result is that all coupling between taxation and expenditure now goes through the sovereign wealth fund – they’re not decoupled per se, because they’re somewhat intrinsically coupled by constraints on government revenue – but their coupling is inherently buffered and reduced.

I think doing it this way has a number of significant advantages beyond that decoupling:

  1. Almost all increases of taxation are now significantly more progressive, and unless they fall on things poorer people consume disproportionately (e.g. taxes on public transportation, or tax increases on small homes) they are actively progressive: Consumption is pretty well correlated with wealth, so once the revenue is paid back out as dividends, most tax increases on the sale of goods leave the poor better off. So e.g. a carbon tax is now automatically highly progressive, because the revenue just goes out to everyone and poorer people will gain more from it than they pay in (there’s a toy arithmetic sketch after this list).
  2. Increases in taxation should become more politically acceptable because they will generally result in you getting some or most of the money back – the richer you are the less acceptable this will be because the numbers won’t work in your favour, but it will still be more acceptable. Additionally because they won’t be coupled to increases in expenditure, they’ll be more appealing to the small government folks.
  3. Increases in expenditure now have a very simple and direct point of comparison: Is the benefit of this program greater than just giving people the money? I suspect in many cases this will decrease expenditure (if you don’t count dividend payouts as part of expenditure, which is arguable), but I also think that’s OK – it will only decrease expenditure in cases where people are significantly benefiting from the extra cash.
  4. Increases in expenditure may become more politically acceptable because they are not tied to increases in taxation (although they are tied to decreases in your dividend, which may be only slightly more acceptable, or possibly even less acceptable).
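
The toy arithmetic behind the first advantage (all figures invented): a flat consumption tax whose revenue is rebated per head is a net transfer towards anyone who consumes less than average.

```python
# Invented figures: a 5% flat consumption tax, rebated equally per head.
# Anyone consuming less than the average comes out ahead.

tax_rate = 0.05
annual_consumption = {"low income": 8_000, "median": 20_000, "high income": 60_000}

rebate_per_head = tax_rate * sum(annual_consumption.values()) / len(annual_consumption)
for person, spend in annual_consumption.items():
    net = rebate_per_head - tax_rate * spend
    print(f"{person:11s}: pays £{tax_rate * spend:5,.0f}, "
          f"rebated £{rebate_per_head:5,.0f}, net {net:+6,.0f}")
```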

There are a whole bunch of complications that I’m completely glossing over – what counts as a new expenditure vs a change to expenditure, what do you do about changes that don’t currently require legislation, how does government debt fit into this, etc. Those would all have to be solved in order to implement something like this in practice, but I expect there are reasonable solutions.

What I suspect to be the real problem with implementing something like this is that people on the left will describe it as a libertarian power fantasy and people on the right will describe it as literally communism. It seems to be almost perfectly designed to be politically unacceptable to both sides as a mechanism for giving both sides a common ground.

But I still think it might be a good idea.
