Category Archives: programming

Where lies the problem?

The intersection between my facebook friends and people who read this blog is probably (relatively) large, but for completeness and historical record I’ll post my current facebook status:

“David is considering at what point he should start to wonder whether the problem might not be with the rest of the world.”

I get into a lot of arguments, or at least heated disagreements. Sometimes it’s because I’m wrong, but most of the time it feels like the world has reflexively thought “Oh, David has an opinion. Let’s disagree with it!”

This is very frustrating. Especially when I’m clearly right. Or, at the very least, when those I am arguing with are mouthing gibbering nonsense and ignoring my refutations of it.

This isn’t new either. It happens in whatever area I’m working in. More so in programming than it did in maths, but I think that’s a function of which side of the discipline I interact with more than it is of the discipline itself.

Example points on which I’ve had arguments in ##java.

  • First class functions are a good thing.
  • Java’s type system is crap.
  • Random access strings are a poor design choice for an immutable data structure.
  • The non-trivial import features Java provides (* and static imports) are actually a good thing.
  • JEE is bloated nonsense (ok, only about half of ##java disagrees with me on this point).
  • Large scale ORM is a stupid idea, and causes the most amazing amount of grief.
  • XML may be standard, but it’s still a rubbish format and should be avoided at all costs (*waves to the RSS readers*).
  • Most recently (and inspired by this) the minor suggestion that being allowed to override void methods with non-void ones would be useful (response: Garbage arguments as to why this would be horrifically wrong and unsafe. It’s not. See the sketch after this list.).
  • Endless stylistic arguments that basically boil down to “Why the Java standard approach of writing procedural code and calling it object oriented because it lives in a class is a bad one”.
  • Endless philosophical arguments that basically boil down to “Why ‘That’s not OO’ is the most useless criticism of an approach in the history of programming”.
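
To make the void-override point concrete, here’s a minimal sketch of what the suggestion would allow. The class and interface names (Named, FluentPerson) are mine, purely for illustration, and the commented-out override is the part javac currently rejects:

    // A sketch of the void-override suggestion from the list above; Named and
    // FluentPerson are hypothetical names, not anything from the actual argument.
    interface Named {
        void setName(String name);
    }

    class FluentPerson implements Named {
        private String name;

        // Legal today: the override has to keep the void return type.
        @Override
        public void setName(String name) {
            this.name = name;
        }

        // The suggestion would allow something like the following instead, so that
        // callers going through Named still see void while direct callers can chain:
        //
        //     @Override
        //     public FluentPerson setName(String name) { this.name = name; return this; }
        //
        // javac rejects it today: a void method cannot be overridden with a
        // non-void one (only reference return types are allowed to narrow).
    }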

As per the earlier quote, eventually I have to start wondering whether the reason I disagree with everyone might just not be because everyone else is wrong.

But then I come to my senses.

Firstly, these are mostly Java programmers. There are certainly competent Java programmers, but the average Java programmer is the very definition of blub. “Oh noes. It’s an anonymous function? What’s a function? How do I write that with a for loop???”. Even the good ones tend to be very set in their ways. Admittedly some of these arguments are with people who I’d otherwise have believed to be competent. However I’m going to perpetrate a no true scotsman fallacy^H make a highly reasoned distinction: If they were competent, they wouldn’t be spouting nonsense and then ignoring a detailed explanation of why they were wrong.

Secondly, the state of the world in general and of technology in particular gives me irrefutable empirical evidence that the vast majority of people are idiots. It therefore stands to reason that a sizable proportion of the people I interact with are idiots. Not coincidentally, I get into arguments with a sizable proportion of the people I interact with.

So, in conclusion, I am correct. The problem is not with me, it is with the rest of the world. I should stop letting the opinions of idiots bother me.

By the way, this is not a deliberately over the top rant. You may be tempted to think that I’m being tongue in cheek or making some metaphorical or otherwise not entirely serious point with this post. I’m not. Deal with it.


A quick check

Just trying something out for a friend, don’t mind me.

This is a test.

I fail miserably at performance testing

I’ve now standardised my test cases to give a consistent test for each of the three options I’m playing with:

Beginning test cases...


Testing name...

   append reader: StringBuilderAdapter took 11 millis.
   toString: StringBuilderAdapter took 2 millis.
   First run for StringBuilderAdapter took 13 millis.
   Second append: StringBuilderAdapter took 5 millis
   toString: StringBuilderAdapter took 9 millis.
   Second run for StringBuilderAdapter took 14 millis.

Testing name...

   append reader: JumpRope took 6 millis.
   toString: JumpRope took 20 millis.
   First run for JumpRope took 26 millis.
   Second append: JumpRope took 0 millis
   toString: JumpRope took 7 millis.
   Second run for JumpRope took 7 millis.

Testing name...

   append reader: TrefoilString took 11 millis.
   toString: TrefoilString took 18 millis.
   First run for TrefoilString took 29 millis.
   Second append: TrefoilString took 0 millis
   toString: TrefoilString took 8 millis.
   Second run for TrefoilString took 8 millis.

These numbers… are a little different from what I’d previously believed. They shouldn’t be taken as definitive though – annoying things like “If I change the order of the tests around then the JIT optimises differently and the numbers change” and other random factors make them very variable.

Still, interesting points of note:

I’m a lot closer to StringBuilder level performance than I’d previously believed. The adapter slows it down a little, but these numbers were pretty close to what I’d been seeing with StringBuilder before – it’s the others that are much lower.

For lots of small appends, StringBuilder is about twice as fast as my implementations (the numbers are really really variable). For single large appends the situation is reversed.

There’s not a lot of performance difference between my dirty but supposedly faster implementation and my nice high level one. It turns out allocation is cheap after all.

Lessons to take out of this: I suck at performance testing and I need to set up some really extensive test suites if I want to get good performance numbers. Also, I should stick to a nice high level implementation and only worry about performance as a sanity check every now and then. Ho hum.
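
For the record, here’s a minimal sketch of the kind of harness I should have been using: warm the JIT up first, time several runs rather than one, and keep every result live so the work can’t be optimised away. The class name and the figures it uses are mine, not the actual test code:

    // A bare-bones warm-up-then-measure harness. Everything here (chunk text,
    // iteration counts, the StringBuilder target) is illustrative only.
    public final class AppendTimer {
        public static void main(String[] args) {
            String chunk = "some medium sized chunk of text to append";
            long blackhole = 0;

            // Warm-up runs: let the JIT compile the hot paths before timing anything.
            for (int i = 0; i < 5; i++) {
                blackhole += runOnce(chunk, 11_000).length();
            }

            // Timed runs: report each one rather than trusting a single figure.
            for (int run = 0; run < 5; run++) {
                long start = System.nanoTime();
                String result = runOnce(chunk, 11_000);
                long millis = (System.nanoTime() - start) / 1_000_000;
                blackhole += result.length();
                System.out.println("StringBuilder run " + run + " took " + millis + " millis.");
            }

            // Print the accumulated lengths so the appends cannot be dead-code eliminated.
            System.out.println("(checksum: " + blackhole + ")");
        }

        private static String runOnce(String chunk, int appends) {
            StringBuilder builder = new StringBuilder();
            for (int i = 0; i < appends; i++) {
                builder.append(chunk);
            }
            return builder.toString();
        }
    }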


Beating StringBuilder? Hell no.

So, it turns out that StringBuilder is almost insultingly fast. It’s very well written, does almost no allocation, and is almost certainly getting special help from the VM and JIT under the covers. Appending a medium sized string to it appears to be faster (when averaged over about 11000 appends) than allocating new objects!

I’m almost tempted to try writing a mutable JumpRopeBuilder which does the same job to see if I can beat it (and I probably will anyway, but use it to generate balanced JumpRopes rather than optimise it for appends), but I think I’ll restrain myself. I must remember that the goal is to have a String replacement rather than a StringBuilder replacement, and as that it does pretty well – the appends are very fast compared to String append (the actual conversion to a string is a little slow admittedly – it takes 100-200 ms for approximately 100k characters, and about half of that is the overhead of iterating over an unbalanced binary tree). And, unlike StringBuilder, this is totally thread safe by virtue of being immutable.
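
For concreteness, here’s a minimal sketch of the kind of immutable concatenation node that gives this behaviour. It’s my own illustration rather than the actual JumpRope code: appending is a single small allocation, while toString has to flatten the whole (possibly unbalanced) tree, which is where the conversion cost comes from.

    // An immutable concatenation node, in the spirit of the rope described above
    // but not its actual code. append() is O(1) and allocates one node; toString()
    // walks the whole tree. Immutability makes sharing between threads safe.
    final class ConcatNode implements CharSequence {
        private final CharSequence left;
        private final CharSequence right;
        private final int length;

        ConcatNode(CharSequence left, CharSequence right) {
            this.left = left;
            this.right = right;
            this.length = left.length() + right.length();
        }

        // "Appending" just wraps the two halves in a new node; nothing is copied.
        ConcatNode append(CharSequence suffix) {
            return new ConcatNode(this, suffix);
        }

        @Override public int length() { return length; }

        @Override public char charAt(int index) {
            return index < left.length() ? left.charAt(index)
                                         : right.charAt(index - left.length());
        }

        @Override public CharSequence subSequence(int start, int end) {
            StringBuilder out = new StringBuilder(end - start);
            for (int i = start; i < end; i++) out.append(charAt(i));
            return out.toString();
        }

        // The expensive part: flattening the tree into one contiguous String.
        // On a badly unbalanced tree this recursion is also where the iteration
        // overhead mentioned above comes from.
        @Override public String toString() {
            StringBuilder out = new StringBuilder(length);
            flattenInto(this, out);
            return out.toString();
        }

        private static void flattenInto(CharSequence node, StringBuilder out) {
            if (node instanceof ConcatNode) {
                ConcatNode concat = (ConcatNode) node;
                flattenInto(concat.left, out);
                flattenInto(concat.right, out);
            } else {
                out.append(node);
            }
        }
    }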

Which reminds me, I should compare it against StringBuffer.

Anyway, I should really start adding some actual functionality now. :-)


Lazy strings for Java

I’ve started yet another open source project, motivated by the fact that Java Strings suck. The aim is to have an efficient string implementation for Java which is optimised for a more sensible set of operations. It draws on ideas from Haskell and purely functional data structures, aiming to take advantage of the fact that the JVM’s garbage collection and allocation are now ludicrously well optimised.

At the moment it has a basic set of working operations. Unfortunately the performance for one of the crucial operations blows goats: it’s very very slow for appending to the right of large strings, and causes outrageous memory overflows. By contrast, smallString.append(largeString) is pretty fast (I measured it at about 50% slower than appending to the end of a StringBuilder when averaged over a medium-large file).

I know what the problem is though: A stupid list implementation. I tried to stick too closely to Haskell laziness and failed miserably because the JVM doesn’t handle things like that well. I’m going to rewrite it in more idiomatic Java and with more careful attention to detail, and I hope to get the desired O(1) append in every case.
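
For illustration, this is roughly the shape of structure I mean: a Haskell-style cons list of chunks transliterated into Java. It’s my reconstruction rather than the project’s actual code, but it shows the problem: prepending is one allocation, while appending on the right has to rebuild the entire spine of the larger string.

    // A cons list of string chunks, reconstructed for illustration only.
    // Right appends copy every existing cell to reach the end, so appending to
    // a large string is O(n) in what has already been accumulated; done lazily
    // or recursively, the whole spine also piles up in memory at once.
    abstract class ChunkList {
        static final ChunkList NIL = new ChunkList() {};

        static final class Cons extends ChunkList {
            final CharSequence head;
            final ChunkList tail;
            Cons(CharSequence head, ChunkList tail) { this.head = head; this.tail = tail; }
        }

        // O(1): the new chunk simply becomes the new head.
        static ChunkList prepend(CharSequence chunk, ChunkList rest) {
            return new Cons(chunk, rest);
        }

        // O(n) in the cells already present: every one is copied to hang the new
        // chunk off the end. This is the slow "append to the right of a large
        // string" case described above.
        static ChunkList appendRight(ChunkList list, CharSequence chunk) {
            if (list == NIL) return new Cons(chunk, NIL);
            Cons cons = (Cons) list;
            return new Cons(cons.head, appendRight(cons.tail, chunk));
        }
    }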

I’m debating giving Nice another try and rewriting parts of this in it, particularly the list implementation – because of how I intend to handle it, it’s the sort of thing multiple dispatch and functional programming would be *really* handy for.

Update: Ok, maybe I don’t know how to solve it. I’ll put more thought into it and see if I come up with something. As an alternative plan, someone I know pointed me towards Ropes, which is an implementation of a very similar thing for C++ / C. Very likely worth my looking at for inspiration / porting.
