Category Archives: rambling nonsense

Playing to your strengths can be a weakness

If you tried to hire me as a sysadmin I would politely explain to you that you had misunderstood my skill set.

If you tried to hire me as a front end developer I would less politely explain to you that you needed your eyes checked and your head examined.

There are things that I am good at. There are even things that I am very good at. Systems administration, user experience and visual design are very emphatically not in those categories.

I’m probably more trustworthy as a sysadmin than the average dev, but not by a lot. I wouldn’t like to comment on where my visual design skills are compared to the average dev – I’ve worked with plenty of people who have worse, but that’s because the teams I’ve worked in have typically had a bit of a back-end/front-end imbalance.

The point is, there are people who are good at these things, and I love working with them. It’s truly a delight working with good people who have different and complementary skill sets to your own, because you can compartmentalize problems and trust them to handle the parts you’re not good at. The reach of a good team is vastly greater than the reach of the individuals within it, and the reason is division of skills far more than it is division of labour.

But where do you draw the line?

The problem is that you can easily get into a pattern of over-reliance. If you always work with people who are much better at something than you, you can get into the habit of handing over even the smallest tasks to them. This can be a bottleneck even when you are working with them – your five-minute task might take days to complete because the front-end or ops team is busy with a major feature and doesn’t have time for the task you need.

I’ve worked with people who basically refuse to touch front-end code. I’m not quite that bad – I’m perfectly happy to make changes to HTML or JavaScript, but I tend to draw the line where I’d have to make significant UI or style changes. I think this is more or less reasonable. Similarly, I often chip in at work and do a little bit of sysadmin, but I leave any serious tasks to our actual sysadmin. So at work I don’t think my balance is too bad, though I could probably go further in each direction than I currently do.

For me, somewhere this really comes out is when I have side projects.

I have a lot of ideas. Sometimes I implement them. The problem is, what will usually happen is that I will implement them and go “Halp! Mike! I can’t make this pretty!”. Mike will then kindly step in and make it pretty for me, but I’ll probably feel like a bit of an ass about it.

The reason is that I’ve got so used to working with Mike and other people who are better at front end than I am that I’ve come to the conclusion that I’m simply not good at it at all. Design is just a thing I can’t do. A fact which causes me painful limitations, but you’ve got to face reality, right?

Except it turns out it’s not really true.

I’m not good at design, but I can put together something entirely functional. With practice I may even upgrade myself from not bad to actually good (I doubt I’ll ever be amazing. If nothing else I don’t care enough to put in the practice).

For example, I did Github Explorer on my own and you know what, it’s fine. It’s nothing original design-wise and it sure isn’t going to win any prizes, but it doesn’t need to. It just needs to look ok and present the underlying idea.

Would it be better given the attention of a dedicated designer and front-end expert? Oh, certainly. But there are a lot of ways to make it better, and it doesn’t really need any of them. It’s fine.

Which I guess is really the core of my point: Knowing the things that you’re not good at is important. Working with people who are good at the things you are not is great, because it expands your reach. But often you are good enough at these things, and if the thing you want to achieve is already within your reach then relying on other people to help you achieve it is unnecessary, and believing that you need to do so will limit what you can achieve when those people are not available.


Two column morality

I don’t have a convincing and long essay for this. It’s more the “thought for the day”, but it’s something I’ve believed for a while.

A very tempting question to ask about courses of action is “Does it cause more good than harm?”

I think this may be a wrong question.

It’s nice to be able to look at the world in terms of there being harm and good and one harm point is worth minus one good point. If you score positive, winning! If not, losing!

I think this may be a wrong way of looking at things.

Harm and good are not necessarily paid in the same currency, or by the same people. They don’t neatly balance out. A life saved does not balance a life lost.

A course of action is better (or equivalent) if it causes no more harm and at least as much good. If it causes more harm and more good, or less harm and less good, I think the answer is much muddier.

How do you decide in that case? I don’t know, but I think the answer is inevitably going to be more complicated than a balance sheet.


Constraints for wish construction

Charles Stross has recently had two posts about your classic “three wishes” style problem. The only constraint he lists is “No, you do not get to wish for further wishes”.

Of course if you give a geek this constraint they will immediately do their best to subvert it.

His wishes:

1. That the outcome of my three wishes will be positive for everyone affected by them (with the definition of “positive outcome” provided by the individual so affected),

2. That anything that can be obtained by one of these magic wishes can be obtained by non-magical human efforts,

3. That nobody ever gets any more magic wishes

Personally, I feel this is cheating a bit. Anything which messes with the mechanism of wishing to this extent really feels like it’s violating the spirit of “No wishing for more wishes”. So I started thinking about how I would construct wish provisioning systems that were immune to this sort of twisting.

The following is the rules set I came up with:

  1. The implementer of the wishes is completely unable to use its abilities outside of the constraints of wishing. No coercion to obtain more wishes is thus possible
  2. Wishes are magical. That is, non-physical.
  3. Wishes may only have physical effects (effects solely in people’s minds count as physical. A Cartesian dualist I am not. Similarly the provision of information is totally kosher – it just hands you a book or a hard drive or writes the information into your brain or whatever) and things created as a result of wishing may not have any magical effects (no wishing for a second genie)
  4. In particular, while the mechanism for implementing wishes is not subject to physical laws, the results must be. There is no ongoing magical intervention to maintain the results of the wish

So essentially we have a two-category system for effects: the genie may use its magical powers to achieve any physical effect, but the results of a wish must themselves be entirely non-magical, with nothing magical left behind to sustain them.

In the case of a magic lamp, where there is a token you can pass from person to person, you are of course still able to get infinite wishes by e.g. wishing for an army of utterly loyal minions and passing the lamp from minion to minion. This is a problem easily solved by requiring an ever-growing cool-down time between new owners, which at least drastically limits the rate at which wishes can be performed.


Ethical Calculus

This is an idea that’s been brewing in my head for a while. I was waiting for it to be more fully formed before posting it, but it doesn’t seem to be going anywhere so I figured I’d just throw it out now and see what people thought. Hence this is largely a collection of half-formed ideas.

The idea is this:

We have some sort of value function which measures something good about a situation for an individual. This isn’t necessarily the economists’ magic “capture everything this person cares about” sort of value function (I don’t believe in those). This is something more concrete like “life expectancy” or “cost of living adjusted wealth”. All that we care about is:

  • A value function f takes an individual i and an event E and assigns a real-valued score f(i, E)
  • All other things being equal, if f(i, E) < f(i, E'), E' is "better" for that individual than E

The question is this: Given such a value function and a population, how should we create a corresponding value function F which captures the best behaviour for the population? Note that it is extremely likely that the answer to this depends on the choice of f.

(Note that it is not a priori obvious that it is reasonable to do so. I suspect it is not, but that there may be useful group value functions we can assign)

The traditional measure used by certain groups (who I shall not name lest they appear) is that of averaging the values. i.e. given a population P we choose

F(P, E) = (1/|P|) sum_{i in P} f(i, E)

I consider this to be a terrible choice. Why? Because it’s completely insensitive to value distribution. If you “steal” value from someone and give it to another person, F doesn’t change. Thus F is entirely ok with the exploitation of the poor at the hands of the rich, for example.
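
To make that concrete, here’s a tiny Python sketch (the names and numbers are invented purely for illustration) of the mean failing to notice a transfer:

    # Individual scores f(i, E) under some fixed event E, e.g. wealth.
    scores = {"alice": 10.0, "bob": 10.0, "carol": 10.0}

    def mean_value(scores):
        """The 'averaging' choice of F: the mean of the individual scores."""
        return sum(scores.values()) / len(scores)

    before = mean_value(scores)

    # "Steal" 9 points of value from carol and hand them to alice.
    scores["carol"] -= 9.0
    scores["alice"] += 9.0

    assert mean_value(scores) == before  # the mean is blind to the redistribution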

So if we don’t like that, what do we like? How do we go about choosing a good way of assigning F?

I claim that this is an ethical question, and that the decisions you make have to be heavily informed by your code of ethics.

Do I have a concrete proof of that? No, not really. I don’t think I could provide a proof of that without an actual definition of a code of ethics, and I think it would end up a bit circular. But most of the reasons I’ve ended up judging specific examples as unacceptable are purely ethical ones – e.g. consider the reason I dislike averaging.

Let’s consider some other ethical aspects and perhaps you’ll see what I mean.

Let Q be some subset of the population P, and suppose I define F(P, E) = F(Q, E).

Consider Q to be “white people” or “men”.

That seems like a pretty awful way to judge value for a population, doesn’t it?

It could of course be even worse. You could define F(P, E) = F(Q, E) – F(P \ Q, E). i.e. doing good for anyone outside that group is considered actively disadvantageous.

So here’s a perhaps reasonable ethical principle:

Call the privileged population of F the set of individuals i such that

  • Increasing the value of f(i, E) by varying E in a way that does not change f(j, E) for any other j does not decrease F
  • Increasing the value of f(i, E) by varying E in a way that does not change f(j, E) for any other j may under some circumstances increase F

A not unreasonable principle of equality is “the whole of P is privileged”.

You could define a much stronger principle of equality: Swapping the scores of any two individuals should not affect the value of F. This is the most extreme (and probably the most fair) form of equality I can think of in this case.
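
To make those two principles a bit more concrete, here’s a rough Python sketch (the helper names are mine, and it only spot-checks finite examples rather than proving anything) of what testing a candidate F against them might look like:

    from itertools import combinations

    # A candidate F takes the dict of individual scores f(i, E) for a fixed
    # event E and returns a single number. The median serves as an example.
    def median_value(scores):
        ordered = sorted(scores.values())
        return ordered[len(ordered) // 2]

    def bump_never_hurts(F, scores, individual, bump=1.0):
        """Spot check of the 'privileged' condition: raising this individual's
        score, all else being equal, should not decrease F. (A full check would
        also need some situation where the bump strictly increases F.)"""
        bumped = dict(scores)
        bumped[individual] += bump
        return F(bumped) >= F(scores)

    def satisfies_strong_equality(F, scores):
        """Strong principle: swapping any two individuals' scores leaves F unchanged."""
        for a, b in combinations(scores, 2):
            swapped = dict(scores)
            swapped[a], swapped[b] = scores[b], scores[a]
            if F(swapped) != F(scores):
                return False
        return True

    example = {"a": 1.0, "b": 2.0, "c": 3.0}
    print(bump_never_hurts(median_value, example, "a"))      # True
    print(satisfies_strong_equality(median_value, example))  # True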

It’s worth thinking about whether you actually agree with these principles. For example, what happens if you let the privileged population be “the set of people who are not serial killers”? Even if, being anti death penalty, you wouldn’t support unprivileging serial killers for something like “life expectancy”, chances are good you wouldn’t want them given the same treatment when it comes to freedom of movement.

(Of course, letting serial killers go free violates the “all else being equal” aspect of these considerations, so maybe you would).

Another perhaps reasonable principle can be extracted from the “privileged population” definition I had above. It contained an implicit assumption: That increasing an individual’s score should never decrease the overall score.

Do we agree with that? I don’t know. Consider for example the example of a large population living in abject poverty ruled by a well off elite. Is it really worse to have the elite better off? I don’t think it is, although I don’t necessarily think it’s better either – the problem is the massive poverty rather than how well off the elite is.

Beyond that, I’m not really sure where to go with this, other than to provide some examples of possible solutions:

  • Minimum value of f(i, E)
  • Maximum value of f(i, E) (this would be a pretty hard one to justify ethically)
  • Indeed, any x% mark of f(i, E), i.e. the maximum value v such that at least x% of the population have a value of at least v. This includes the median value
  • The mean value
  • Any non-negative weighted combination of otherwise acceptable values. For example the sum of the minimum and the median

Unsurprisingly, most of these look like averages, and most averages should work. Note however that the modal value doesn’t satisfy the principle that increasing the value of an individual shouldn’t decrease the value of the population.
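
For concreteness, here’s a minimal Python sketch of those candidates, taking just the list of individual scores f(i, E) for a fixed event (the function names are mine, and the percentile convention is deliberately crude):

    import math

    def minimum(scores):
        return min(scores)

    def percentile(scores, x):
        """The largest value v such that at least x% of the population score
        at least v. x = 50 gives one convention for the median."""
        ordered = sorted(scores, reverse=True)
        k = max(1, math.ceil(len(ordered) * x / 100))
        return ordered[k - 1]

    def mean(scores):
        return sum(scores) / len(scores)

    def min_plus_median(scores):
        """One combination of otherwise acceptable values."""
        return minimum(scores) + percentile(scores, 50)

    scores = [1.0, 2.0, 2.0, 9.0]
    print(minimum(scores), percentile(scores, 50), mean(scores), min_plus_median(scores))
    # 1.0 2.0 3.5 3.0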

Where am I going with this? Beats me. This is about as far as I’ve got with the thought process. Any thoughts?


Axioms, definitions and agreement

A while ago I posted A Problem of Language, a response to an article claiming that Scala was not a functional language. This isn’t an attempt to revive that argument (and please don’t respond to it with such attempts. I’m likely to ignore or delete comments on the question of whether Scala is a functional language). It’s a post which is barely about programming, except by example. Really it’s a post about the philosophy of arguments.

My point was basically that without a definition of “functional language” (which no one had provided) it was a meaningless assertion to make.

Unfortunately this point isn’t really true. I think I knew that at the time of writing but glossed over it to avoid muddying the waters, as it’s false in a way that doesn’t detract from the basic point of the article, but it’s been bugging me slightly so I thought I’d elaborate on the point and the basic ideas.

Let’s start with what’s hopefully an unambiguous statement:

Brainfuck is not a functional language

Hopefully no one wants to argue the point. :-)

Well, why is Brainfuck not a functional language? It doesn’t have functions!

So, we’re making the following claim:

A functional language must have a notion of function

(in order to make this fully formal you’d probably have to assert some more properties functions have to satisfy. I can’t be bothered to do that).

Hopefully this claim is uncontroversial.

But what have we done here? Based on commonly agreed statements, we’ve proved that Brainfuck is not functional without ever defining “functional programming language”. That is, my claim that you need a definition in order to meaningfully claim that a language is not functional is false.

What you need in order to make this claim is a necessary condition for the language to be functional. Then, by showing that the condition does not hold, you have demonstrated the dysfunctionality of the language.
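
If you want to be very literal about it, the shape of the argument is just modus tollens from an asserted axiom. Here’s a toy sketch in Lean (all of the names are invented; only the structure matters):

    -- Treat “a functional language must have a notion of function” as an axiom,
    -- together with the agreed fact that Brainfuck has no functions, and the
    -- conclusion follows without ever defining “functional”.
    axiom Lang : Type
    axiom Brainfuck : Lang
    axiom Functional : Lang → Prop
    axiom HasFunctions : Lang → Prop

    axiom functional_needs_functions : ∀ l : Lang, Functional l → HasFunctions l
    axiom brainfuck_lacks_functions : ¬ HasFunctions Brainfuck

    theorem brainfuck_not_functional : ¬ Functional Brainfuck :=
      fun h => brainfuck_lacks_functions (functional_needs_functions Brainfuck h)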

But how do we arrive at necessary conditions without a definition? Well, we simply assert them to be true and hope that people agree. If they do agree, we’ve achieved a basis on which we can conduct an argument. If they don’t agree, we need to try harder.

A lot of moral arguments come down to this sort of thing. Without wanting to get into details, things like arguments over abortion or homosexuality frequently come down to arguments over a basic tenet: Do you consider a fetus to be of equal value to a human life, do you consider homosexuality to be inherently wrong, etc. (what I said about arguments RE Scala holds in spades for arguments on these subjects). It’s very rare for one side to convince the other of anything by reasoned argument, because in order to construct a reasoned argument you have to find a point of agreement from which to argue and that point of agreement just isn’t there.

Mathematically speaking, what we’re talking about is an Axiom. Wikipedia says:

In traditional logic, an axiom or postulate is a proposition that is not proved or demonstrated but considered to be either self-evident, or subject to necessary decision. Therefore, its truth is taken for granted, and serves as a starting point for deducing and inferring other (theory dependent) truths.

I consider this definition to be true, but perhaps a bit obfuscated. I’d like to propose the following definition. It’s overly informal, but I find it’s a better way to think about it:

An axiom is a point which we agree to consider true without further discussion as a basis for arriving at an agreement.

(This may give the hardcore formalists a bit of a fit. If so, I apologise. :-) It is intended to be formalist more in spirit than letter )

The most important part of this is that axioms are social tools. They don’t have any sort of deeper truth or meaning, they’re just there to form a basis for the discussion.
