Not really LÖVING it

So I spent yesterday playing around with LÖVE, a really cute little platform for writing 2D games. It got linked on Reddit on Friday, and it looked neat so I thought I’d give it a try.

As you can tell from the title, this is not going to be a very positive post. So in order to brighten the mood I’m going to start by saying positive things about it:

  • I’ve been saying for ages that I wanted to play around with creating little games, but I’d never gotten around to it. LÖVE made getting started easy enough that I no longer had an excuse.
  • It’s really very good at getting all the annoying cruft of creating a basic game out of the way.
  • I had a lot of fun playing with it.
  • I’m really glad I gave it a try, and if you’ve never written a game before and want to, you probably should too.

Also, the website is adorable.

Unfortunately, that’s about where the positive side of things stops.

First off, the biggest problem with it is that it’s really unreliable. It will sit on a CPU and eat 100% of it, even for the smallest of games. It also seemed very crashy – on my work PC it tended to randomly crash other applications, and people who tried to play my games said it seemed to crash itself and other things when they tried to close it (this happened under both Mac and Windows; I never had the problem under Linux). Between these two factors I’d basically never consider it for a real game. Hopefully both will be fixed at some point. (Yes, yes, I know, it’s open source, I should look into it myself. Unfortunately it seems to be written in C++, which I have approximately zero interest in writing.)

The other major problem with it is that it’s actually not very powerful. What it basically comes down to is an infrastructure for loading files, an OK graphics API, and fairly basic physics and sound APIs (there’s also an animation API which I didn’t bother to look at – it may be better than the others, I don’t know). So although it helps you get started, it doesn’t really do much for you once you have.

As an example to illustrate the above: I had a bit of a play with the physics engine. Basically I created a green square that could jump about (he’s a very abstract representation of Victoria’s very abstract representation of a frog). That worked quite nicely – he could jump, he could move from side to side, he could land on stuff.

The point at which it stopped working nicely was when I wanted him not to be able to jump while already in mid air. It turns out that the physics engine has no way (or rather, no way exposed in the API) of detecting whether two bodies are in contact, so I couldn’t easily tell whether he was on the ground. I asked in #loveclub on Freenode (a much less dubious channel than the name might suggest), and various things were suggested. The best idea I encountered was to check the vertical distance from the point of contact, but that unfortunately turned out to have a bunch of problems as well. The method I eventually settled on was to let him jump only if his vertical speed was already zero (actually, < 0.001) and hope that the player’s reactions weren’t good enough to jump during hang time.

At this point I more or less decided I’d had enough of the physics engine. I went back to the first game I’d made, turned it into something that was actually a proper game (it hadn’t previously been possible to lose. Now it’s only really hard to lose) and declared myself all out of LÖVE. It was a fun little encounter, and I do hope they do well, but unfortunately I’m not really left with any desire to use it.

If you want to check out the code I wrote for this, it’s available from the GitHub repo. Be warned though: it’s really bad. It started out slightly crummy because I was just exploring the API, and then in that last “right, let’s get something actually playable” push I started factoring things out nicely before I realised that I was never going to touch the code again, and just brutally copied and pasted and hacked until it worked. I should also warn you that the games themselves are correspondingly lame. :-)

(As an aside, also in that repo are today’s experiments at recreating something like the LÖVE API, but more reliable and/or powerful, in Scala. But more on those later.)


Lessons learned in class

(Warning: I’m half asleep, and this post is somewhere between a brain dump and a rant. Coherency is strictly optional).

So, my latest random personal project has turned into a bit of a debacle.

I decided I wanted a Java bytecode manipulation library with a decent Scala API. The options were either “Write my own” or “Write bindings to an existing one”. I chose something of a middle ground: “Port an existing one”. Rather than go for any of the normal big names I went for an obscure little internal library at EPFL called FJBG (Fast Java Bytecode Generator). It’s basically a low-level interface onto the classfile format, and I’d used it before for code generation (e.g. for the structural proxies stuff) and found it pretty straightforward. Kind of hard to debug programming errors, but otherwise pretty robust.

One slight flaw: no test suite to speak of. But that’s ok, it’s used as part of the compiler backend for scalac, so I assume it gets relatively well covered by the scalac test suite. And it’s been around for quite a while, so has had time to stabilise. Should be fine.

Right?

Right?

Anyway, the initial porting process went pretty smoothly – astonishingly smoothly, in fact. I basically ran it through jatran, spent about 6 hours fixing compiler errors and prettifying things to have nicer method names and so on, and then it took about 10 minutes of bug fixing before the bytecode generation code just worked in Scala. Pretty good going. The only slight problem was that the class file parsing code wasn’t working.

The problem was that the code used a fairly deeply nested inheritance hierarchy and maintained two parallel constructor hierarchies – one for creating things in memory, one for creating them from a DataInputStream. Because of the way Scala handles constructors, this is essentially impossible to reproduce directly in Scala.

I’d never thought this restriction was a problem before, but this seemed like quite a reasonable thing to do, so I started to have doubts about Scala’s approach to constructors. I still have some, but fewer than I did at first. The thing is, this approach is really fragile: each constructor needs to balance the class’s invariants in its own way, so you’ve given yourself twice as many opportunities to screw up.
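To make the constructor issue concrete, here’s a rough sketch – not FJBG’s actual classes, just a hypothetical miniature of the same shape – showing why the “read from a stream” constructor usually ends up as a factory method on a companion object in Scala:

import java.io.DataInputStream

// A hypothetical, much-simplified analogue of the FJBG situation. The
// "build it in memory" constructor stays as the primary constructor, while
// the "read it from a DataInputStream" constructor has to become a factory
// method, because a Scala auxiliary constructor must delegate to another
// constructor as its first statement and so can't read and stash
// intermediate values first.
class Attribute(val name: String, val length: Int)

object Attribute {
  def fromStream(in: DataInputStream): Attribute = {
    val name   = in.readUTF()   // read the shared state once...
    val length = in.readInt()   // ...then construct from it
    new Attribute(name, length)
  }
}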

Anyway, after some struggling with approaches I eventually got this ported in a reasonably straightforward way (it took me several times as long as the previous part of the porting). It wasn’t the prettiest code ever, but the mapping onto the original wasn’t bad. So I tried it out on a few simple tests – generate a class file, read it back in again, and compare the two to make sure you get approximately the same thing.

Hm. And it didn’t work. How curious.

I stared at the implementation for a bit, stared at the original Java, couldn’t see a difference. So I ran the same test on the original Java and it broke in the same way. Great.

That turned out to be an easy fix. But it was an easy fix to a problem very definitely caused by the multiple constructor hierarchy. Oh well, that worked now.

Next part of the test. Write the newly read class file to a file, load it and try to run it.

Oops. It NPEs when I try to write the file. Guess I did something wrong – I wonder why that array is null there. Looks like the logic for initialising it is rather complex; let’s see how the original Java version handles this. So I wrote a simplified test case using the original which took a class file, read it into the in-memory representation and wrote it out again, and tested it against a random class file. It broke – in a totally different way to the way my version did; it didn’t even manage to read the file (I think the difference here is that this was a classfile found in the wild rather than one generated by FJBG). I tried it on a different, simpler one – specifically the class generated by the obvious HelloWorld.java. That broke too.

So at this point I was forced to conclude that the class file reading code in FJBG just didn’t work at all. What the hell? Wasn’t this used in the Scala compiler? Clearly it has to be able to parse class files in order to know what’s available on the classpath to compile against!

So, some digging through the compiler source later: scalac doesn’t use FJBG’s class reading code at all. It has its own entirely separate code for that. So this code which I thought was part of a fairly mature and robust compiler backend was in fact completely and utterly untested and unused. No wonder it was broken.

So, new rule (to many of you, a very old rule): if it’s library code and it’s not tested, it’s broken. An application you can at least judge by “does it do the right thing?” to get some sense of how not-broken it is – besides, I only have to use it, not code against it. But if my code is going to depend on yours, yours had better be tested.

I’m usually pretty bad at tests actually. Applications I’ve written are certainly woefully undertested. SBinary’s tests are… well, adequate. And I don’t really recommend depending on any other libraries I’ve written – they’re all a bit incomplete and half assed. :-) Hopefully this will teach me to be better.

At this point I was already rather upset with FJBG’s object model – too mutable, too many back references. So on top of fixing the reading code I was going to have to fix that too. I decided it was time to cut my losses, so I’m going to go back to option 1: write my own. I’ll certainly reuse what I can salvage from the FJBG code (assuming some worries I have about licensing are resolved), but honestly the class file format is pretty easy. The overarching format took me two hours to write a parser for (I did it the same night as discovering the problem). The bytecode format for method bodies is harder, but I expect to be able to reuse FJBG code for this bit (and probably write a fair bit of my own).
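To give a sense of how little ceremony the outer format needs, here’s a minimal sketch of a header parser (the field names follow the JVM spec; this is just an illustration, not the parser from the repo):

import java.io.{DataInputStream, FileInputStream}

// Reads just the start of a classfile: the 0xCAFEBABE magic number followed
// by the minor and major version numbers. Everything after this (constant
// pool, fields, methods, attributes) is a similar series of length-prefixed
// records.
def readHeader(path: String): (Int, Int) = {
  val in = new DataInputStream(new FileInputStream(path))
  try {
    require(in.readInt() == 0xCAFEBABE, "not a classfile")
    val minor = in.readUnsignedShort()
    val major = in.readUnsignedShort()
    (major, minor)
  } finally in.close()
}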

Anyway, hopefully this will turn out to be a good thing and I’ll end up with something much more scalic than a straight port of FJBG would have been. We’ll see. Watch this space to see if anything comes of this, and watch this repo to keep an eye on the code.


Scala arrays

Update: Note that this is about the old implementation and no longer applies.

There’s been a thread on scala-debate in the last day or two about breaking the current array implementation. It’s in most regards a very tedious thread, but one thing that’s emerged in the course of it is that people don’t really understand how Scala arrays work. This isn’t surprising – they’re a bit confusing. It looks like they must be horribly inefficient, but actually they’re only mildly inefficient, and much of the time the compiler can remove even that and treat them as normal Java arrays.

This post is basically a “What’s going on here?” post. I’ll explain some of the basics of how Scala arrays work and what you can expect in terms of performance. Warning: This post will contain JVM bytecode. If that scares you… well, close your eyes while reading or something. :-)

There are essentially two distinct cases here: the case where the arrays are polymorphic (they depend on a type parameter whose class we don’t know, e.g. an Array[T]), and the case where they’re monomorphic (they depend on a fully realised type parameter, e.g. an Array[Int] or an Array[String]. I’m also going to count things like Array[List[T]], where the class is known even though the actual type depends on a parameter. This is technically wrong, but works for the purposes of this discussion).

The issue here is that Scala supports polymorphic arrays where the type parameter ranges across both object and value types. This is interesting, because the JVM doesn’t. Consequently it comes with some overhead in the implementation. Occasionally this is a significant performance hit.

Ok. So, Scala is boxing arrays to make them appear polymorphic. It must be generating horribly inefficient code to do so, right? Let’s take a look to see what happens:

Consider the following method:

def foo = {
  val bar = new Array[Int](3);
  bar(0) = 1;
  bar;
}

So, we’re expecting something involving boxing ints to and from objects. Let’s use javap to look at the generated bytecode.

public int[] foo();
Code:
0:    iconst_3
1:    newarray int
3:    astore_1
4:    aload_1
5:    iconst_0
6:    iconst_1
7:    iastore
8:    aload_1
9:    areturn


Oh, um. So, not much boxing, huh? If you can’t read bytecode, what this is doing is basically exactly what a Java compiler would generate for the corresponding int[] code. It’s using the correct instructions for creating and storing things into a primitive int array.

Object arrays behave similarly:

def foo = {
  val bar = new Array[String](3);
  bar(0) = "fish";
  bar;
}

public java.lang.String[] foo();
Code:
0:    iconst_3
1:    anewarray    #14; //class java/lang/String
4:    astore_1
5:    aload_1
6:    iconst_0
7:    ldc    #16; //String fish
9:    aastore
10:    aload_1
11:    areturn

This correctly generates a String[] as you’d expect.

When you have a primitive array, Scala generates pretty efficient bytecode for it – it just uses a native Java primitive array. Similarly when you have an object array of a known class type.

Note that it’s specifically the class type that has to be known; the actual type is of no importance. Here’s another example of it doing the right thing:

def foo[T] = {
  val bar = new Array[List[T]](3);
  bar(0) = Nil;
  bar
}

public scala.List[] foo();
Code:
0:    iconst_3
1:    anewarray    #14; //class scala/List
4:    astore_1
5:    aload_1
6:    iconst_0
7:    getstatic    #20; //Field scala/Nil$.MODULE$:Lscala/Nil$;
10:    aastore
11:    aload_1
12:    areturn

Multidimensional arrays behave similarly:

  def foo = {
    val bar = new Array[Array[Int]](10, 10);
    bar(0)(3) = 1;
    bar;
  }
public int[][] foo();
  Code:
   0:   ldc     #13; //int 10
   2:   ldc     #13; //int 10
   4:   multianewarray  #15,  2; //class "[[I"
   8:   astore_1
   9:   aload_1
   10:  iconst_0
   11:  aaload
   12:  iconst_3
   13:  iconst_1
   14:  iastore
   15:  aload_1
   16:  areturn

So far, nothing very interesting to see here. It works exactly like you’d expect it to. Where the magic starts to happen is when you have an Array[T].

So, what happens if we use one of these things polymorphically?

def foo = {
  val baz = new Array[Int](3);
  bar(baz, 3);
}

def bar[T](ts : Array[T], t : T) = ts(0) = t;

Clearly we need to do something. In Java this would just work by using an Object[] behind the scenes, but in Scala we can’t do that, as we need to be able to pass it an array of primitives.

So what we do is box the array. We create a standard wrapper which can take both a primitive array and an object array and present an array-like API for both. This is in fact the thing you’re seeing when you look at the API for scala.Array – all the methods like map, flatMap, projection, etc. belong to the BoxedArray wrapper class (or one of its superclasses).
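If the bytecode below is hard to follow, this sketch captures the idea – it’s not the real scala.runtime.BoxedArray, just the shape of it: one abstract wrapper, with a subclass per backing array type, so polymorphic code can call apply and update without caring what’s underneath.

// Sketch only – the real BoxedArray has many more methods (map, flatMap,
// projection, ...) and a subclass per primitive type plus one for objects.
abstract class BoxedArraySketch {
  def apply(i: Int): Any
  def update(i: Int, x: Any): Unit
  def length: Int
}

class BoxedIntArraySketch(val value: Array[Int]) extends BoxedArraySketch {
  def apply(i: Int): Any = value(i)                                   // boxes on the way out
  def update(i: Int, x: Any): Unit = value(i) = x.asInstanceOf[Int]   // unboxes on the way in
  def length: Int = value.length
}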

Let’s see how this looks at the bytecode level:

public void bar(scala.runtime.BoxedArray, java.lang.Object);
Code:
0:    aload_1
1:    iconst_0
2:    aload_2
3:    invokevirtual    #18; //Method scala/runtime/BoxedArray.update:(ILjava/lang/Object;)V
6:    return

public void foo();
Code:
0:    iconst_3
1:    newarray int
3:    astore_1
4:    aload_0
5:    new    #28; //class scala/runtime/BoxedIntArray
8:    dup
9:    aload_1
10:    invokespecial    #31; //Method scala/runtime/BoxedIntArray."<init>":([I)V
13:    iconst_3
14:    invokestatic    #37; //Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer;
17:    invokevirtual    #41; //Method bar:(Lscala/runtime/BoxedArray;Ljava/lang/Object;)V
20:    return

So, the method bar just generates the expected code using the BoxedArray class. On calling this code, we first create an int[] as normal, we box it with a BoxedIntArray (an implementation of BoxedArray for wrapping an int[]) and call the method as expected. There’s a bit of boxing overhead, but it’s not too bad. The main point is that at the backend we are still using a primitive array for storage.

Ok. This is all very well, but what happens if we need to create a polymorphic array? How can I do new Array[T](3) if I don’t know what T is, and so don’t know whether it needs to be a primitive array or an object array?

The key observation here is that as long as we are polymorphic in the array type we must box anyway, so we never have to expose a native Java array until a point where we know what the actual type is. Until that point we use a custom boxed array implementation called BoxedAnyArray. It starts out using an Object[] as its backing store. At the point where we request it as a native Java array we finally have a concrete array type in hand, so we create an array of that type, populate it from the Object[], and change the BoxedAnyArray in place to be backed by the new array instead. The Object[] is then discarded.

Because the BoxedAnyArray’s backing store is now the newly created (possibly primitive) array, any updates to and from the boxed version now go to this new array. Hence the native array and the boxed array remain in sync, even though this isn’t the array it started out with. Further, and this is important, after this point we have a native array in our hands and can treat it accordingly. Anything that was using the BoxedAnyArray continues to do so, but there is no “taint” associated with the newly created array and in particular no additional overhead.

Unfortunately this trick doesn’t come for free. BoxedAnyArray is kind of a slow beast (I think some of this could be fixed with an additional level of indirection, but it currently isn’t). It does a lot of checking in its apply and update methods and, additionally, they’re synchronized. While you continue to use BoxedAnyArray your performance will suffer much more than with the other BoxedArray implementations.
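Here’s a sketch of the trick – again, not the real scala.runtime.BoxedAnyArray, just an illustration of the backing-store switch and of why the checking and synchronization make it slow:

import java.lang.reflect.{Array => JArray}

class BoxedAnyArraySketch(length: Int) {
  private var boxed: Array[AnyRef] = new Array[AnyRef](length)
  private var unboxed: AnyRef = null   // the native array, once its type is known

  def update(i: Int, x: Any): Unit = synchronized {
    if (unboxed eq null) boxed(i) = x.asInstanceOf[AnyRef]
    else JArray.set(unboxed, i, x)
  }

  def apply(i: Int): Any = synchronized {
    if (unboxed eq null) boxed(i)
    else JArray.get(unboxed, i)
  }

  // Called when a concrete element type is finally requested, e.g. Integer.TYPE.
  def unbox(elemClass: Class[_]): AnyRef = synchronized {
    if (unboxed eq null) {
      val arr = JArray.newInstance(elemClass, length)
      var i = 0
      while (i < length) { JArray.set(arr, i, boxed(i)); i += 1 }
      unboxed = arr
      boxed = null
    }
    unboxed
  }
}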

Let’s read some more bytecode:

def newarray[T] = new Array[T](3);

def foo = {
  val x = newarray[Int]
  x(0) = 3;
  x
}

public scala.runtime.BoxedArray newarray();
Code:
0:    new    #14; //class scala/runtime/BoxedAnyArray
3:    dup
4:    iconst_3
5:    invokespecial    #17; //Method scala/runtime/BoxedAnyArray."<init>":(I)V
8:    areturn

public int[] foo();
Code:
0:    getstatic    #25; //Field scala/runtime/ScalaRunTime$.MODULE$:Lscala/runtime/ScalaRunTime$;
3:    aload_0
4:    invokevirtual    #29; //Method newarray:()Lscala/runtime/BoxedArray;
7:    getstatic    #35; //Field java/lang/Integer.TYPE:Ljava/lang/Class;
10:    invokevirtual    #39; //Method scala/runtime/ScalaRunTime$.arrayValue:(Lscala/runtime/BoxedArray;Ljava/lang/Class;)Ljava/lang/Object;
13:    astore_2
14:    aload_2
15:    instanceof    #41; //class scala/runtime/BoxedArray
18:    ifeq    37
21:    getstatic    #25; //Field scala/runtime/ScalaRunTime$.MODULE$:Lscala/runtime/ScalaRunTime$;
24:    aload_2
25:    checkcast    #41; //class scala/runtime/BoxedArray
28:    getstatic    #35; //Field java/lang/Integer.TYPE:Ljava/lang/Class;
31:    invokevirtual    #39; //Method scala/runtime/ScalaRunTime$.arrayValue:(Lscala/runtime/BoxedArray;Ljava/lang/Class;)Ljava/lang/Object;
34:    goto    38
37:    aload_2
38:    checkcast    #43; //class "[I"
41:    astore_1
42:    aload_1
43:    iconst_0
44:    iconst_3
45:    iastore
46:    aload_1
47:    areturn

It’s generating slightly redundant bytecode here, in that it’s testing to see if the local variable is a BoxedArray or an int[] before unboxing it despite the fact that it’s always a BoxedArray, but that’s not a big deal. The key point here is that we had a BoxedAnyArray and we end up with an int[].

Here’s an example combining the two instances of polymorphism we’ve seen so far:

def foo = {
  val x = newarray[Int]
  bar(x, 3);
}

public void foo();
Code:
0:    getstatic    #39; //Field scala/runtime/ScalaRunTime$.MODULE$:Lscala/runtime/ScalaRunTime$;
3:    aload_0
4:    invokevirtual    #43; //Method newarray:()Lscala/runtime/BoxedArray;
7:    getstatic    #49; //Field java/lang/Integer.TYPE:Ljava/lang/Class;
10:    invokevirtual    #53; //Method scala/runtime/ScalaRunTime$.arrayValue:(Lscala/runtime/BoxedArray;Ljava/lang/Class;)Ljava/lang/Object;
13:    astore_2
14:    aload_2
15:    instanceof    #21; //class scala/runtime/BoxedArray
18:    ifeq    37
21:    getstatic    #39; //Field scala/runtime/ScalaRunTime$.MODULE$:Lscala/runtime/ScalaRunTime$;
24:    aload_2
25:    checkcast    #21; //class scala/runtime/BoxedArray
28:    getstatic    #49; //Field java/lang/Integer.TYPE:Ljava/lang/Class;
31:    invokevirtual    #53; //Method scala/runtime/ScalaRunTime$.arrayValue:(Lscala/runtime/BoxedArray;Ljava/lang/Class;)Ljava/lang/Object;
34:    goto    38
37:    aload_2
38:    checkcast    #55; //class "[I"
41:    astore_1
42:    aload_0
43:    new    #57; //class scala/runtime/BoxedIntArray
46:    dup
47:    aload_1
48:    invokespecial    #60; //Method scala/runtime/BoxedIntArray."<init>":([I)V
51:    iconst_3
52:    invokestatic    #66; //Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer;
55:    invokevirtual    #68; //Method bar:(Lscala/runtime/BoxedArray;Ljava/lang/Object;)V
58:    return

Note how the BoxedAnyArray gets unboxed and reboxed into a BoxedIntArray before being passed anywhere. This is a good thing – BoxedIntArray is a much lighter weight thing.

Summary:

  • When used monomorphically, Scala arrays are Java arrays.
  • When used polymorphically, Scala arrays are scala.runtime.BoxedArray instances. There is a separate subclass of BoxedArray for each primitive type and one for object. When a monomorphic method receives a BoxedArray it will usually unbox it rather than call its apply and update methods.
  • There is also BoxedAnyArray. This has a fairly high overhead and is used *only* for creating polymorphic arrays. The compiler will get rid of it at the earliest possible opportunity and rebox as other forms.

So, what does this mean in terms of performance?

Firstly, if you use arrays monomorphically, they will be Java arrays and have no additional overhead associated with them (you’ll need to box them for invoking methods on Array like map, projection, etc, but this isn’t too bad). So if you want really high performance code, only use monomorphic arrays.

At the next level, you can still get reasonably high performance code if you use polymorphic arrays but only create them monomorphically. You’ll get a boxing overhead for primitives, but that’s true of any polymorphic code in Scala.

Slowest of all is when you create an array polymorphically and continue to use it polymorphically. Once you’re in monomorphic code it will regain normal performance characteristics and future uses of it will behave as if it had been instantiated monomorphically (but previously stored references to it will continue to pass through a BoxedAnyArray and have the associated overhead).

One final point of warning. Consider the following code:

class Foo[T]{
  private[this] val arr = new Array[T](2);
  def apply(i : Int) = arr(i);
  def update(i : Int, t : T) = arr(i) = t;
}

Note that the array never escapes the class, so there’s no point at which it can be unboxed!

Looking at the bytecode confirms this:

public class arrays.Foo extends java.lang.Object implements scala.ScalaObject{
public arrays.Foo();
Code:
0:    aload_0
1:    invokespecial    #12; //Method java/lang/Object."<init>":()V
4:    aload_0
5:    new    #14; //class scala/runtime/BoxedAnyArray
8:    dup
9:    iconst_2
10:    invokespecial    #17; //Method scala/runtime/BoxedAnyArray."<init>":(I)V
13:    putfield    #21; //Field arr:Lscala/runtime/BoxedArray;
16:    return

public void update(int, java.lang.Object);
Code:
0:    aload_0
1:    getfield    #21; //Field arr:Lscala/runtime/BoxedArray;
4:    iload_1
5:    aload_2
6:    invokevirtual    #27; //Method scala/runtime/BoxedArray.update:(ILjava/lang/Object;)V
9:    return

public java.lang.Object apply(int);
Code:
0:    aload_0
1:    getfield    #21; //Field arr:Lscala/runtime/BoxedArray;
4:    iload_1
5:    invokevirtual    #38; //Method scala/runtime/BoxedArray.apply:(I)Ljava/lang/Object;
8:    areturn
}

The array never gets unboxed and consequently continues to use the higher-overhead BoxedAnyArray forever. If you instead use an Array[AnyRef] here and cast, your code will run noticeably faster (which was the observation that sparked this whole discussion).
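A sketch of that workaround (FastFoo is just an illustrative name, not something from the original discussion): the backing store is a plain Object[], so it never needs BoxedAnyArray, and the unchecked casts are safe precisely because the array never escapes the class.

class FastFoo[T] {
  private[this] val arr = new Array[AnyRef](2)
  def apply(i: Int): T = arr(i).asInstanceOf[T]                      // unchecked cast, but safe here
  def update(i: Int, t: T): Unit = arr(i) = t.asInstanceOf[AnyRef]   // primitives get boxed on the way in
}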


Random linear algebra hackery in Scala

One of my random pet projects at the moment is putting together a prettified linear algebra API for Scala. I’m not implementing the linear algebra routines myself, but am instead implementing it as a wrapper over Colt. This isn’t really any sort of stable release or project. It’s more of a “If you want this sort of thing you might find it useful” comment.

The basic design is:

  • Everything is mutable, but the API encourages using it in an immutable way.
  • Makes no attempt to hide its Colt origins. If you want to use it with Colt proper, it will cheerfully let you do so, and you will often have to explicitly use Colt classes.
  • API loosely inspired by Matlab/Octave
  • Hideous abuses of overloading, implicits, functional programming and symbolic method names abound in order to get an API that allows for very compact code.  If you don’t like this sort of thing you may want to steer clear. :-)

It’s currently available from my misc hg repository, but it’s getting large enough that I’ll probably factor it out into its own repository soon.
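To give a flavour of the kind of compact code the design is aiming for, here’s a hypothetical sketch – not the actual API from the repository – of implicit-based sugar over Colt’s Algebra and DenseDoubleMatrix2D:

import cern.colt.matrix.DoubleMatrix2D
import cern.colt.matrix.impl.DenseDoubleMatrix2D
import cern.colt.matrix.linalg.Algebra

// Sketch only: wrap Colt matrices with symbolic operators rather than hiding them.
class RichMatrix(val self: DoubleMatrix2D) {
  def *(other: DoubleMatrix2D): DoubleMatrix2D = Algebra.DEFAULT.mult(self, other)
  def t: DoubleMatrix2D = Algebra.DEFAULT.transpose(self)
}

object Matrices {
  implicit def enrich(m: DoubleMatrix2D): RichMatrix = new RichMatrix(m)
  def matrix(rows: Array[Array[Double]]): DoubleMatrix2D = new DenseDoubleMatrix2D(rows)
}

// e.g. import Matrices._; val c = matrix(a) * matrix(b).t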


SBinary 0.2

I quietly uploaded SBinary 0.2 last night. I was going to write up some documentation for it today and do an official announcement, but it’s now coming up to 19:00 and I still haven’t written any documentation so I have a sneaking suspicion it ain’t happening today. So, if you want to have a play with it there’s a new jar and scaladoc up there.

Changes from 0.1:

  • Improved API which is less closely tied to java.io
  • Supports a wider range of collections and standard types (not the entire standard library as I’d originally said I was going to do. There are some issues surrounding collections and open unions which I want to resolve before I do that).
  • A limited amount of support for lazy IO using Stream. (Specifically, Stream is read lazily from an input. However there are a lot of limitations on this at the moment). The design is such that it should be impossible to incorrectly interleave IO actions (i.e. reading something else from the input will force pending lazy reads to complete).
  • Fixed some performance stupidities (unbuffered reading/writing of files, unspecialised reading/writing of byte arrays)
  • Improved generic combinators for building binary instances (although the “case classes for free” one I want to add is blocked on a compiler bug)

I hope to write up some proper documentation soon, and when I do I’ll send an announcement to the mailing list. In the meantime, feel free to have a play.
