Category Archives: programming

More Asus hilarity

So, Asus have managed to accrue some more black marks this week.

I called on Monday to say “Hi, now that you’ve had a look at it could you give me a more useful answer about how long it’s going to take to repair my laptop?”

Their answer: “We can’t find anything wrong with it. Could you send us your power supply?”.

Ok, that’s something at least. I tested on every conceivable combo of power supply and battery, but I suppose it’s possible that the power supply conked out and the battery ate itself as a result and couldn’t recharge from it.

Anyway, I said no, could they just send me the laptop back, I’ll buy a new power supply. (Subtext: These people are so fucking slow that if we get into the sending random parts back and forth game I’ll never get my laptop back). And, incidentally, had they been planning to tell me this at any point?

“Oh, yes, we would have called you today”.

Fuck they would have. Asus and their subsidiaries have not once volunteered information without me having to drag it out of them. Anyway, they agreed to send it back.

Fast forward to today. They managed to score two black marks.

a) They delivered the power supply I purchased. To the wrong address. I very explicitly gave my work address as the delivery one, so they cheerfully delivered it to home instead. ‘Fortunately’ I overslept dramatically (I was at work till 11:30 last night. :-( ) and was still there when the package arrived.

b) I still don’t have a laptop returned, so I called them up today. After much being on hold, getting randomly hung up on, and general intense annoyingness of their phone system it was confirmed that no they had in fact not made any note whatsoever of an intent to send it back. They claim it will be sent out today and should arrive tomorrow. We’ll see.

At this point I’m almost tempted to just buy a second laptop from Dell even if the new power supply works perfectly. The benefits of never having to deal with these people again are surely worth the price of a laptop…

This entry was posted in programming.

Minor revelation about Scala existential types

So, I’ve just realised why Scala’s existential types are several orders of magnitude more powerful than Java’s wildcards.

   def swap(xs : Array[T forSome { type T; }]) = xs(0) = xs(1);

The type is not completely unknown, and is persisted across method invocations, so for a given fixed instance you can make use of that to perform operations that would be impossible with wildcards. In particular the following can’t work:

  public void swap(List<?> xs){ 
    xs.set(0, xs.get(1));
  }

This can’t work, because Java captures the wildcard afresh at each use: it has no way of knowing that the element type seen by xs.set and the one returned by xs.get are the same.
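(For comparison, the boring way to get the same guarantee in either language is to name the type parameter explicitly – the existential just does that naming locally. A quick sketch, not from the original library:)

```scala
// Naming the element type T lets the compiler see that all the
// reads and writes go through the same type.
def swap[T](xs: Array[T]): Unit = {
  val tmp = xs(0)
  xs(0) = xs(1)
  xs(1) = tmp
}

val xs = Array("a", "b")
swap(xs)
assert(xs.sameElements(Array("b", "a")))
```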


Dereferencing operators

I’m writing a small library for mutable reference cells. This has spawned a heated debate about what to call the dereferencing operator. Possible options for dereferencing foo are:

One of the big questions is whether it should be postfix or prefix. If it’s postfix, using them as properties becomes much more readable. foo.bar! vs. !(foo.bar). But it also runs into weird precedence issues. On the other hand, the set of characters which can be used in a prefixy manner is really limited and they all seem to have significant meaning.

!foo

Pros: Historical precedent. It’s what ML uses.
Cons: Very easy to confuse with negation. Suppose foo is a reference to a boolean. if (!foo) { } is potentially really confusing.

foo!

Pros: Same as !foo. Less confusing – it’s not currently used by anything major.
Cons: Retains misleading association with negation, although less easy to write confusing code.

foo&

Pros: Historical precedent. Looks almost like C (prefix & isn’t legal).
Cons: Similar confusion to !. & more normally means and. On the other hand, C programmers seem to have gotten used to it.

@foo

Pros: Nice distinctive character. Easy to get used to.
Cons: It isn’t legal Scala (this is kinda a big one :-) ).

~foo

Pros: Same as @. Legal Scala. :-)
Cons: Prefix operator, so doesn’t work well with properties. Somewhat non-obvious.

foo<> (credit to Bob Jones… err. I mean Jan Kriesten for this one)

Pros: Visually distinctive and appealing.
Cons: Looks vaguely directional.

foo^ (credit to Martin Odersky)

Pros: Um. Beats me.
Cons: Confusion with xor. Looks weird.

foo deref

Pros: Fewer weird precedence issues because it’s not an operator. Some people seem to like wordy operator names.
Cons: Visually distracting, overly verbose. Scatters meaningless words throughout the code. Core operations should have nice symbolic notation.
Additional cons: Over my dead body.

foo() (credit to Eric Willigers)

Pros: Interacts much better with precedence rules than any of the others. You can write foo() == "Bar" whereas you’d have to write (foo!) == "Bar". It seems intuitively obvious what invoking a reference should mean.
Cons: I don’t really have a good argument against this except that it feels wrong. It looks a little weird when you have a reference to a function. e.g. if you had a Ref[() => Unit] it would be potentially easy to write myRef() and think you’d invoked it, when in fact you’d merely returned a function.

Any of the above with an implicit conversion from references to their contents
Pros: The mainline case is syntax free.
Cons: No no no no no no no. This creates *exactly* the sort of confusion between reference cells and their values that I’m trying to avoid, and opens up the possibility of huge classes of subtle bugs where you passed a reference to an object and meant to pass the object. I initially thought it was a good idea, and it has a strong intuitive appeal to it, but I’m convinced it would be disastrous. A slight conciseness advantage in no way offsets the introduction of perniciously evil bugs.

On balance I think foo() is going to win. The precedence issues seem to prohibit the use of any sort of postfix operator. This seems to leave ~foo as the only good alternative, and I think it’s less obviously meaningful and the prefix nature would annoy the properties people.
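To make the discussion concrete, here’s a minimal sketch of what a reference cell with apply-style dereferencing might look like – the names Ref and := are illustrative stand-ins, not the actual library’s API:

```scala
// Hypothetical reference cell. Ref and := are illustrative names only.
class Ref[T](private var value: T) {
  def apply(): T = value              // dereference: foo()
  def :=(x: T): Unit = { value = x }  // replace the contents
}

val foo = new Ref("Bar")
assert(foo() == "Bar")   // apply-style dereference, reads naturally in comparisons
foo := "Baz"
assert(foo() == "Baz")
```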


Why not Scala?

I thought I’d follow up on my previous post on why one would want to use Scala with one on why you wouldn’t. I’m definitely planning to continue using it, but it would be dishonest of me to pretend it was a perfect language.

I’m not going to cover the usual ones – weak tool support, difficulty of hiring Scala programmers, etc. These are pretty standard and will be true in most ‘esoteric’ languages you care to name. They’re certainly important, but not the point of this post. I’m just going to focus on language (and implementation) issues.

You’re looking for a functional language

Scala is not a functional programming language. It has pretensions of being so, and it has adequate support for functional programming, but it only goes so far. It’s got better support for functional programming than C#, Ruby, etc. but if you compare its functional aspects to ML, Haskell, OCaml, etc. you’ll find it sadly lacking. Problems include:

  • Its pattern matching is really rather cumbersome.
  • An annoying distinction between methods and functions. Scala’s first class functions are really no more than a small amount of syntactic sugar around its objects. Because Scala’s scoping is sane this isn’t particularly an issue, but it occasionally shows up.
  • The handling of multiple arguments is annoying. It doesn’t have the pleasant feature of Haskell or ML that every function has a single argument (multiple arguments are encoded as either tuples or via currying). Admittedly this isn’t a prerequisite of a functional language – e.g. Scheme doesn’t do it – but it’s a very big deal in terms of typing and adds a nice consistency to the language. I’m not aware of any statically typed functional languages which *don’t* do this (although the emphasis between tupling and currying varies from language to language).
  • Almost no tail call elimination worth mentioning. A very small subset of tail calls (basically self tail calls – the ones you can obviously turn into loops) are eliminated. This is more the JVM’s fault than Scala’s, but Martin Odersky himself has shown that you can do better (although admittedly it comes with a performance hit).
  • The type inference is embarrassingly weak. e.g. recursive methods won’t have their return type inferred. Even what type inference is there is less than reliable.
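The last two bullets can be seen in one snippet. This is an illustrative sketch:

```scala
// The ": Int" return type annotation is mandatory: the compiler
// won't infer the return type of a recursive method. With it, this
// self tail call is compiled to a loop, so deep recursion is fine --
// but only because it's a *self* call; mutual recursion would overflow.
def count(n: Int, acc: Int): Int =
  if (n == 0) acc else count(n - 1, acc + 1)

assert(count(1000000, 0) == 1000000)
```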

Compiler stability

The compiler is buggy. It’s not as buggy as I sometimes get the impression it is – I’ve definitely claimed a few things to be bugs which turned out to be me misunderstanding features – but it’s buggy enough that you’ll definitely run into issues. They’re rarely blockers (although sometimes they are. Jan Kriesten has run into a few with his recent experiments with Wicket + Scala), but more importantly the bugginess means you really can’t trust the compiler as much as you’d like to. When something goes wrong it’s not always certain whether it’s your fault or the compiler’s. This is a big deal when one of the selling points is supposed to be a type system which helps you catch a wide class of errors.

Language consistency

The language has a lot of edge cases. These can be really difficult to wrap your head around, and can be really annoying to remember.

Let’s take an example. Variables. Simple, eh? Well, no.

A variable (local or field) can be a function (or constructor) parameter, a val, or a var. A val is a definition – it can’t be assigned to after the definition is made. A var is a normal mutable variable like in Java. A function parameter is almost like a val, except for the parts where it isn’t. Additionally, a function parameter can also be a var or a val. But it doesn’t have to be. Variables can be call by value (normal), call by name (the expression is evaluated each time you reference its value) or lazy (the expression is evaluated the first time you need its value and never again). But only vals can be lazy. And function parameters can’t be lazy, even if they’re also vals (I don’t understand this one. It seems obviously stupid to me). Meanwhile, only function parameters can be call by name – you can’t assign them to vars or vals (a no argument def is the equivalent of a call by name val).

Clear as mud, eh? Now, granted I wrote the above to make it sound deliberately confusing (it’s probably owed a blog post later to make it seem deceptively simple), but it’s a fairly accurate representation of the state of affairs.
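Untangled into code, the pieces look roughly like this:

```scala
val a = 1                      // val: a definition, can't be reassigned
var b = 2                      // var: ordinary mutable variable
b = 3
lazy val c = { b += 1; b }     // lazy val: body runs once, on first use
def byName(x: => Int) = x + x  // call by name: x re-evaluated per reference
// lazy vars, lazy parameters and by-name vals/vars are all disallowed.

assert(c == 4)                          // first use evaluates the body (b becomes 4)
assert(c == 4)                          // second use doesn't re-run it
assert(byName({ b += 1; b }) == 5 + 6)  // the argument is evaluated twice
```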

Here’s another one (it’s related to the arguments issue). Consider the following snippet of code:

def foo = "Hello world";
println(foo());

def bar() = "Goodbye world";
println(bar);

Pop quiz: Does this code compile? If not, which bit breaks? No cheating and running it through the compiler!

Answer: No, it doesn’t. Because foo was defined without an argument list, it can’t be invoked as foo(). However, despite bar being defined with an (empty) argument list we can invoke it without one.

I could keep going, but I won’t. The short of it is that there are a lot of these little annoying edge cases. It seems to give beginners to the language a lot of grief.

Too much sugar

Scala has a lot of syntactic sugar. Too much in my opinion. There’s the apply/update sugar, unary operators defined by prefixing the method name with unary_, general overloaded assignment (which, as I discovered when testing, only works in the presence of an associated def to go with it. Another edge case). Operators ending in : are right associative and are invoked on their right-hand operand. Constructors are infixed in pattern matching of case classes but not in application. etc. It’s hard to keep track of it all, and most of it is annoyingly superfluous.
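For instance, the apply/update pair is what makes indexing syntax work, and the : rule is what makes list consing read the way it does. A minimal illustration:

```scala
// apply/update sugar: g(i) and g(i) = v desugar to method calls.
class Grid {
  private val data = Array.fill(4)(0)
  def apply(i: Int): Int = data(i)                // g(i)
  def update(i: Int, v: Int): Unit = data(i) = v  // g(i) = v
}

val g = new Grid
g(2) = 42            // desugars to g.update(2, 42)
assert(g(2) == 42)   // desugars to g.apply(2)

// Operators ending in ':' are right associative and are invoked
// on the right-hand operand:
assert((1 :: 2 :: Nil) == List(1, 2))  // parsed as 1 :: (2 :: Nil)
```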

Lack of libraries

Yes, yes, I know. It has all of the Java libraries to play with. And this is great. Except… well, they’re Java libraries. They’re designed with a Java mindset, and they can’t take advantage of Scala’s advanced features. Implicit conversions, and a number of other tricks, are quite useful for making an API more palatable, but there’s a strong danger that what you end up with isn’t much more than Java with funny syntax. Much more than that requires a reasonable amount of porting work to get a good API for your use.

All in all, I find these add up to just a bunch of annoyances. It’s still my preferred language for the JVM, but depending on how you weigh your priorities they might be more significant for you. Even for me I occasionally find myself getting *very* irritated with some of these.


Variance of type parameters in Scala

This is just a quick introduction to one of the features of Scala’s generics. I realised earlier on IRC that they’re probably quite unfamiliar looking to people new to the language, so thought I’d do a quick writeup.

What does the following mean?

  trait Function1[-T1, +R]

It’s saying that the trait Function1 is contravariant in the parameter T1 and covariant in the parameter R.

Err. Eek. Scary words!

Let’s try that again.

A Function1[Any, T] is safe to use as a Function1[String, T]. If I can apply f to anything I can certainly apply it to a String. This is contravariance. Similarly, a Function1[T, String] can be quite happily treated as a Function1[String, Any] – if it returns a String, it certainly returns an Any.

So, Foo[+T] means that if S <: T then Foo[S] <: Foo[T]. Foo[-T] means that if S <: T then Foo[T] <: Foo[S] (note the swapped direction). The default Foo[T], called invariant, is that Foo[S] is not a subtype of Foo[T] unless S == T.
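In code (Box and Sink here are made-up illustrations, not library classes):

```scala
class Animal
class Dog extends Animal

class Box[+T](val contents: T)               // covariant: a read-only producer
class Sink[-T] { def put(x: T): Unit = () }  // contravariant: a consumer

val b: Box[Animal] = new Box[Dog](new Dog)   // ok: Box[Dog] <: Box[Animal]
val s: Sink[Dog] = new Sink[Animal]          // ok: Sink[Animal] <: Sink[Dog]
assert(b.contents.isInstanceOf[Dog])
```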

Examples of this sort of behaviour abound. Covariance is more common than contravariance, because immutable collections are almost always covariant in their type parameters. An immutable.List[String] can equally well be treated as an immutable.List[Any] – all the operations are concerned with what values you can get out of the list, so can easily be widened to some supertype.

However, a mutable.List is *not* covariant in its type parameter. You might be familiar with the problems that result from treating it as such from Java. Suppose I have a mutable.List[String], upcast it to a mutable.List[Any] and now do myList += 3. I’ve now added an integer to a list of Strings. Oops! For this reason, mutable objects tend to be invariant in their type parameters.
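Unlike Java’s arrays, Scala makes the compiler enforce this: declare a type parameter covariant and then use it in a mutable position, and the class simply won’t compile. A sketch (the class names are illustrative):

```scala
// Fine: T only ever appears as a result type.
class ReadOnlyCell[+T](value: T) {
  def get: T = value
}

// This variant is rejected at compile time with
// "covariant type T occurs in contravariant position":
// class MutableCell[+T](var value: T)

val cell: ReadOnlyCell[Any] = new ReadOnlyCell[String]("safe")
assert(cell.get == "safe")
```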

So, we have three types of type parameter: Covariant, contravariant, invariant. All three crop up and are quite useful.

But there are safe ways to treat mutable objects covariantly. Suppose I want someone to pass me an array of Foos, and I have no intention of mutating it. It’s perfectly safe for them to pass me an array of Bars, where Bar extends Foo. Can I do this?

Well, this can indeed be done. We could start by doing this:

  def doStuff[T <: Foo](arg : Array[T]) = stuff;

So we introduce a type parameter for the array. Because T will be inferred in most cases, this isn't too painful to use, but it can quickly cause the number of type parameters to explode (and you don't seem to be able to let some type parameters be inferred and some be explicitly provided). Further, we only care about the type parameter in one place. So, let's move it there.

  def doStuff(arg : Array[T forSome { type T <: Foo }]) = stuff;

This uses Scala's existential types to specify that there's an unknown type for which this holds true. This is effectively equivalent to the previous code, but narrows the scope of the type parameter.

The equivalent using Java style wildcards would be:

  def doStuff(arg : Array[? <: Foo ]) = stuff;

But this isn't legal Scala. This is unfortunately a case of Scala being more verbose than the Java equivalent. However, it's not all bad - because of the explicitly named type inside the forSome, you can express more complicated type relationships than wildcards allow for. For example the following:

  def doStuff(arg : Array[T forSome { type T <: Comparable[T]}]) = stuff;

And that's about it for variance in Scala. Hope you found it useful.
