Category Archives: Ideas

An interesting experiment that no one seems to have performed

A thing I am interested in is group problem solving and group intelligence. While pondering this, the following experiment occurred to me. Unfortunately I can't find any evidence of it ever having been done (yes, I could run it myself, but I'm not really in the right career for that…):

The basic idea is simple. You take some sort of untimed intelligence test – e.g. Raven’s Progressive Matrices might be a good choice (I don’t know enough about psychometrics to say for sure if it is or not).

You now take a largish sample of people. Each of these people takes the test.

Now you do something different. You pair the people off, and they take the test collaboratively: two people are put together in a room with the test and are asked to solve it together.

The question is this: How does the score of a pair relate to the scores of the individuals? Does it depend on how you prime them to collaborate (e.g. one person having the final say on disagreements versus flipping a coin)? Is the pair's score larger than the maximum of their two individual scores? Smaller?

You'd have to design the experiment carefully to watch for training effects (e.g. have half the participants take the individual test after the paired test rather than before, and see if it makes a difference), but I think most such problems can be overcome.
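A toy simulation makes the comparison concrete. Everything below is an assumption for illustration only: the latent-ability model, the "max of the pair plus a collaboration bonus" model of pair performance, and the noise levels are all made up, not claims about how pairs actually perform.

```python
import random
import statistics

random.seed(0)

# Hypothetical model, purely for illustration: each person has a latent
# ability; an individual's test score is ability plus noise, and a pair's
# score is assumed to be the max of the two abilities plus a small
# collaboration bonus, plus noise.
def individual_score(ability):
    return ability + random.gauss(0, 1)

def pair_score(a, b, bonus=0.5):
    return max(a, b) + bonus + random.gauss(0, 1)

abilities = [random.gauss(10, 2) for _ in range(200)]
individual_scores = [individual_score(a) for a in abilities]

random.shuffle(abilities)               # pair people off at random
pairs = list(zip(abilities[0::2], abilities[1::2]))
pair_scores = [pair_score(a, b) for a, b in pairs]

print("mean individual score:", round(statistics.mean(individual_scores), 2))
print("mean pair score:      ", round(statistics.mean(pair_scores), 2))
print("mean max of each pair:", round(statistics.mean(max(a, b) for a, b in pairs), 2))
```

Under this (assumed) model the pair mean beats the individual mean simply because the max of two draws is biased upwards; the interesting empirical question is whether real pairs beat, match, or fall short of that max-based baseline.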

This entry was posted in Ideas, rambling nonsense.

Ethical Calculus

This is an idea that's been brewing in my head for a while. I was waiting for it to be more fully formed before posting it, but it doesn't seem to be going anywhere, so I figured I'd just throw it out now and see what people thought. Hence this is largely a collection of half-formed ideas.

The idea is this:

We have some sort of value function which measures something good about a situation for an individual. This isn’t necessarily the economists’ magic “capture everything this person cares about” sort of value function (I don’t believe in those). This is something more concrete like “life expectancy” or “cost of living adjusted wealth”. All that we care about is:

  • A value function f takes an individual i and an event E and assigns a real valued score f(i, E)
  • All other things being equal, if f(i, E) < f(i, E'), E' is "better" for that individual than E
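In code, those two requirements amount to nothing more than a function signature and a comparison. The individuals, events, and numbers below are invented purely to have something to evaluate (read the scores as, say, life expectancy under each event):

```python
# A value function f maps (individual, event) to a real-valued score.
# All names and figures here are hypothetical.
scores = {
    ("alice", "status_quo"): 80.0,
    ("alice", "reform"):     83.0,
    ("bob",   "status_quo"): 75.0,
    ("bob",   "reform"):     74.0,
}

def f(i, E):
    return scores[(i, E)]

def better_for(i, E, E2):
    """All other things being equal, E2 is better for i than E iff f(i, E) < f(i, E2)."""
    return f(i, E) < f(i, E2)

print(better_for("alice", "status_quo", "reform"))  # True
print(better_for("bob", "status_quo", "reform"))    # False
```

Note that "better" is only defined per individual so far; nothing yet says how to trade alice's gain against bob's loss, which is exactly the problem below.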

The question is this: Given such a value function and a population, how should we create a corresponding value function F which captures the best behaviour for the population? Note that it is extremely likely that the answer to this depends on the choice of f.

(Note that it is not a priori obvious that it is reasonable to do so. I suspect it is not, but that there may be useful group value functions we can assign.)

The traditional measure used by certain groups (who I shall not name lest they appear) is that of averaging the values. i.e. given a population P we choose

F(P, E) = 1/|P| sum_{i in P} f(i, E)

I consider this to be a terrible choice. Why? Because it’s completely insensitive to value distribution. If you “steal” value from someone and give it to another person, F doesn’t change. Thus F is entirely ok with the exploitation of the poor at the hands of the rich, for example.

So if we don’t like that, what do we like? How do we go about choosing a good way of assigning F?

I claim that this is an ethical question, and that the decisions you make have to be heavily informed by your code of ethics.

Do I have a concrete proof of that? No, not really. I don’t think I could provide a proof of that without an actual definition of code of ethics, and I thing it would end up a bit circular. But most of the reasons I’ve ended up judging specific examples as unacceptable are purely ethical ones – e.g. consider the reason I dislike averaging.

Let’s consider some other ethical aspects and perhaps you’ll see what I mean.

Let Q be some subset of the population P, and suppose I define F(P, E) = F(Q, E).

Consider Q to be “white people” or “men”.

That seems like a pretty awful way to judge value for a population, doesn’t it?

It could of course be even worse. You could define F(P, E) = F(Q, E) - F(P \ Q, E), i.e. doing good for anyone outside that group is considered actively disadvantageous.
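Both of these pathological choices are one-liners. The scores and the membership of Q below are invented; the point is only that under the hostile variant, improving an outsider's lot actively lowers the "population value".

```python
def F_mean(scores):
    return sum(scores) / len(scores)

# Hypothetical population: individual scores f(i, E) keyed by individual.
f_scores = {"a": 50.0, "b": 60.0, "c": 10.0, "d": 20.0}
Q = {"a", "b"}          # the arbitrarily privileged subset

def F_subset(f_scores, Q):
    """F(P, E) = F(Q, E): only the privileged subset counts at all."""
    return F_mean([v for i, v in f_scores.items() if i in Q])

def F_hostile(f_scores, Q):
    """F(P, E) = F(Q, E) - F(P \\ Q, E): gains outside Q count against you."""
    return F_subset(f_scores, Q) - F_mean([v for i, v in f_scores.items() if i not in Q])

# Doubling c's score leaves F_subset unchanged and *lowers* F_hostile:
print(F_subset(f_scores, Q), F_hostile(f_scores, Q))   # 55.0 40.0
f_scores["c"] = 20.0
print(F_subset(f_scores, Q), F_hostile(f_scores, Q))   # 55.0 35.0
```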

So here’s a perhaps reasonable ethical principle:

Call the privileged population of F the set of individuals i such that

  • Increasing the value of f(i, E) by varying E in a way that does not change f(j, E) for any other j does not decrease F
  • Increasing the value of f(i, E) by varying E in a way that does not change f(j, E) for any other j may under some circumstances increase F

A not unreasonable principle of equality is “the whole of P is privileged”.

You could define a much stronger principle of equality: Swapping the scores of any two individuals should not affect the value of F. This is the most extreme (and probably the most fair) form of equality I can think of in this case.
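Both principles can at least be checked mechanically. The sketch below is my own framing, and the privilege check only samples a few trial increases rather than proving monotonicity in general, so treat it as an illustration rather than a verification.

```python
import itertools

def is_privileged(F, scores, i, bumps=(0.5, 1.0, 2.0)):
    """Numerically check, for a few trial increases, that raising individual
    i's score alone (all other scores held fixed) never decreases F and
    sometimes increases it."""
    base = F(scores)
    trials = []
    for b in bumps:
        bumped = list(scores)
        bumped[i] += b
        trials.append(F(bumped))
    return all(v >= base for v in trials) and any(v > base for v in trials)

def is_symmetric(F, scores):
    """The strong equality principle: permuting who holds which score never changes F."""
    base = F(scores)
    return all(abs(F(list(p)) - base) < 1e-9 for p in itertools.permutations(scores))

def mean(xs):
    return sum(xs) / len(xs)

def first_only(xs):
    return xs[0]  # an F that only ever looks at individual 0

scores = [1.0, 2.0, 3.0]
print([is_privileged(mean, scores, i) for i in range(3)])        # [True, True, True]
print([is_privileged(first_only, scores, i) for i in range(3)])  # [True, False, False]
print(is_symmetric(mean, scores), is_symmetric(first_only, scores))  # True False
```

Under the mean, the whole population is privileged and the strong symmetry principle holds; under `first_only`, only individual 0 is privileged and symmetry fails, which is the "white people"/"men" situation from above in miniature.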

It's worth thinking about whether you actually agree with these principles. For example, what happens if you let the privileged population be "the set of people who are not serial killers"? If you're anti death penalty, chances are you wouldn't support serial killers being unprivileged with respect to something like "life expectancy" – but you probably would want them unprivileged with respect to freedom of movement.

(Of course, letting serial killers go free violates the “all else being equal” aspect of these considerations, so maybe you would).

Another perhaps reasonable principle can be extracted from the “privileged population” definition I had above. It contained an implicit assumption: That increasing an individual’s score should never decrease the overall score.

Do we agree with that? I don't know. Consider, for example, a large population living in abject poverty ruled by a well-off elite. Is it really worse to have the elite better off? I don't think it is, although I don't necessarily think it's better either – the problem is the massive poverty, rather than how well off the elite is.

Beyond that, I’m not really sure where to go with this, other than to provide some examples of possible solutions:

  • Minimum value of f(i, E)
  • Maximum value of f(i, E) (this would be a pretty hard one to justify ethically)
  • Indeed, any x% mark of f(i, E), i.e. the largest value v such that at least x% of the population score at least v. This includes the median (the 50% mark)
  • The mean value
  • Any convex combination of otherwise acceptable values, for example an equal weighting of the minimum and the median (a plain sum of the two differs from this only by a factor of two)

Unsurprisingly, most of these look like averages, and most averages should work. Note however that the modal value doesn’t satisfy the principle that increasing the value of an individual shouldn’t decrease the value of the population.
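The candidates above, and the mode's failure, fit in a few lines. One caveat on the mode example: with the raised score the population has two equally common values, and Python's `statistics.mode` breaks the tie in favour of the first value encountered, which is what makes the decrease concrete here.

```python
import statistics

candidates = {
    "min":    min,
    "max":    max,
    "median": statistics.median,
    "mean":   lambda xs: sum(xs) / len(xs),
    # a convex combination of two acceptable aggregates (weights sum to 1):
    "mix":    lambda xs: 0.5 * min(xs) + 0.5 * statistics.median(xs),
}

scores = [1.0, 1.0, 2.0, 2.0, 2.0]
for name, F in candidates.items():
    print(name, F(scores))

# The mode violates monotonicity: raising one individual's score from 2 to 3
# turns a unique mode of 2 into a tie between 1 and 2, and statistics.mode
# (which returns the first mode encountered) drops from 2 to 1.
print(statistics.mode([1, 1, 2, 2, 2]))       # 2
print(statistics.mode([1, 1, 2, 2, 3]))       # 1
print(statistics.multimode([1, 1, 2, 2, 3]))  # [1, 2]
```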

Where am I going with this? Beats me. This is about as far as I’ve got with the thought process. Any thoughts?
