
Scala Collections for the Easily Bored Part 1: A Tale of Two Flavors


One of the most obvious things to a Java developer first coming into Scala-land is the radically different Collections API included as part of the standard library.  For the most part, we use the same frameworks and APIs in Scala as are available in Java.  This is natural, thanks to the extremely tight integration between the two languages.  So why is this one area such a startling departure from Scala’s heritage?

The answer has to do with what the Scala language is syntactically capable of handling.  Scala isn’t just an object-oriented language, it is also highly functional.  It is only natural that such an integral part of the core libraries would reflect this fact.  Unfortunately, most developers fail to take full advantage of the power offered by the collections API.  Despite the available power, most code written using Scala’s collections tends to look a lot like Java in disguise.

I had actually planned on addressing this topic in a single article.  However, Scala’s collections are so vast and powerful (sounds like one of Roddenberry’s alien consciousness beings) that it really would overrun the limits of conventional blogging to attempt to cover it all in a single post.  Despite the fact that I’ve been creating mammoth anthologies of late, I think it’s probably better to break it into bite-sized chunks.  First up: the confusing dual nature of Scala’s collections API!

A Tale of Two Flavors

The very first thing developers notice when looking at Scala’s collections is a (seemingly) odd redundancy in the specification.  Looking under the scala.collection package, we see not one, but three separate sub-packages, each containing what seem to be reimplementations of the same types.  For example, consider the following three traits:

scala.collection.jcl.Map
scala.collection.mutable.Map
scala.collection.immutable.Map

I don’t know about you, but this confused the heck out of me the first time I really dug into Scala’s standard API.  Actually, it gets even worse when you discover that there is also a trait (and companion object), scala.collection.Map, which is actually a super-trait of the three listed above.  Seems like Dr. Odersky discovered the magic of separated namespaces and reacted like a two-year-old on espresso.

As it turns out, there’s a very logical reason for having these separated and seemingly-redundant collections packages.  Of the three, one of them can be discounted immediately as uninteresting.  The jcl package contains collections, but they are merely wrappers for the corresponding Java collections.  This allows more efficient transmutation between the Java collections API and Scala.  It is almost never necessary to use this package directly, since a number of implicit conversions are built into Scala to make the process essentially transparent.

Of course, this still leaves the immutable and mutable packages.  This distinction traces back to some of Scala’s functional roots.  As you are likely aware, Scala supports both mutable and immutable variables, as denoted by the var and val keywords, respectively.  While there are some significant differences at compile-time, conceptually, the only distinction between these two is that the former allows reassignment whereas the latter does not.  Mutable variables are a common – and indeed, essential – feature in imperative languages (Java, C++, Ruby, etc.).  For example, here’s how we would sum an array of integers in Java:

int[] numbers = ...
int sum = 0;
for (int n : numbers) {
    sum += n;   // reassign sum to accumulate n
}

In this case, sum is a mutable variable which accumulates the total value of all numbers in the array, numbers.  Theoretical disputes aside, this style of programming is simply impossible in certain languages.  For example, SML provides no mechanism for declaring mutable variables.  So if we want to sum the values in a list of ints, we have to do it in some other way (code provided for the curious):

fun sumList ls = foldl (op +) 0 ls

In Scala, both of these techniques are available to us.  However, despite providing for mutable state, Scala does encourage developers to avoid it.  The reasoning behind this is that mutable state has a tendency to make code more difficult to reason about, making testing much harder.  Also, as any experienced developer will tell you, mutable state kills concurrency.
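The same fold-based accumulation is available on the JVM itself.  As a rough modern-Java analogue of the SML function above (using streams, which postdate this article's original examples):

```java
import java.util.Arrays;

public class FoldSum {
    public static void main(String[] args) {
        int[] numbers = {1, 2, 3, 4, 5};

        // fold (reduce) over the array: the accumulator is threaded through
        // the operation rather than reassigned in user code
        int sum = Arrays.stream(numbers).reduce(0, (acc, n) -> acc + n);

        System.out.println(sum);   // 15
    }
}
```

The accumulator never appears as a reassignable variable; it flows through the reduction just as it does through the SML fold.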

Not only does Scala encourage the use of immutable variables, it also encourages the use of immutable collections.  This concept may seem a little bizarre to those of you coming from an imperative background (I know it did to me), but it actually works.  While a map container which cannot be modified probably seems a little useless, it actually provides a startling degree of freedom from concerns like concurrent locking and unintended side-effects.  With immutable collections, you are able to manipulate objects with perfect assuredness that no method call will “accidentally” alter its contents.  How many times have you cloned a collection prior to returning it?  Or how often have you dug through someone else’s code just to assure yourself that it is safe to call a given method passing an instance of List?  Immutable data structures completely solve that problem.
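To see the problem immutability solves, consider the defensive-copy idiom Java forces on us (a minimal sketch; the class and method names here are mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Roster {
    private final List<String> names = new ArrayList<>();

    public Roster() {
        Collections.addAll(names, "Daniel", "Chris");
    }

    // clone before returning, since any caller could mutate our internal list
    public List<String> getNames() {
        return new ArrayList<>(names);
    }

    public static void main(String[] args) {
        Roster roster = new Roster();

        List<String> copy = roster.getNames();
        copy.add("Joseph");                           // only the copy changes

        System.out.println(roster.getNames().size()); // still 2
    }
}
```

With an immutable collection, getNames() could return the internal structure directly; no clone is ever required.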

Naturally, immutable collections require very different patterns and idioms than those which are mutable.  My ML code from above illustrates this to a very small degree, using a fold to traverse an immutable list.  As a general rule, immutable data structures are only useful when being passed around via method calls.  A common pattern is to build up a data structure recursively, creating a new instance with one more element at each invocation:

def toSet[T](list: List[T]) = {
  def traverse(list: List[T])(set: Set[T]): Set[T] = list match {
    case hd :: tail => traverse(tail)(set + hd)   // create a new Set, adding hd
    case Nil => set
  }

  traverse(list)(Set[T]())
}

val names = List("Daniel", "Chris", "Joseph", "Renee")
val nameSet = toSet(names)

At each recursive invocation of traverse, a new Set is created, based on the contents of the old one, set, but with one additional member, hd.  At the same time, we are deconstructing an immutable List, list, selecting its first element and whatever remains at each step.  Whenever you work with immutable data structures, you will see a lot of code which looks like this.

Of course, the natural question which comes to mind is: what about performance?  If each invocation actually creates a brand new Set for every recursive call, doesn’t that require a lot of inefficient object copying and heap operations?  Well, as it turns out, this isn’t really the case.  Yes, a new instance must be created at each turn, which is a comparatively expensive operation on the JVM, but almost nothing is being copied.  All of Scala’s immutable data structures have a property known as persistence, which means that you don’t copy the data out of the old container when creating a new one; you simply have the new structure reference the old and treat all of its contents as if they were its own.  A linked list is a good example of this, since each node of a list contains exactly one element, as well as a reference to another node.  If we think of each node as a representative of a list starting with itself and traversing to the end, then a list suddenly becomes a fully persistent data structure (since the new list contains a sub-list in its entirety and additive operations require no data copying).  Rich Hickey, the creator of Clojure (a Lisp dialect running on the JVM), has an excellent presentation which explains some of the hows and whys behind this technique (as well as some other interesting topics).  Chapter 19 of Programming in Scala (Odersky, Spoon & Venners) also has a good example of a persistent immutable queue.
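A tiny persistent cons list, sketched here in Java, makes the sharing concrete (this is an illustration of the idea, not Scala's actual List implementation):

```java
public class PersistentList {
    // an immutable cons cell: one element plus a reference to the rest of the list
    static final class Node<T> {
        final T head;
        final Node<T> tail;

        Node(T head, Node<T> tail) {
            this.head = head;
            this.tail = tail;
        }

        // "adding" allocates exactly one new node; the old list is shared, not copied
        Node<T> prepend(T elem) {
            return new Node<>(elem, this);
        }
    }

    public static void main(String[] args) {
        Node<Integer> old = new Node<>(2, new Node<>(3, null));   // [2, 3]
        Node<Integer> extended = old.prepend(1);                  // [1, 2, 3]

        // no data was copied: the new list's tail IS the old list
        System.out.println(extended.tail == old);   // true
    }
}
```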


I happen to like immutable data, so most of this series uses the scala.collection.immutable package.  However, there are certainly situations where mutable data structures are the only way to go, either because of system requirements or for performance reasons.  Fortunately, Scala’s mutable collections have an almost identical interface to its immutable collections.  Thus, most of the information presented here is applicable to both branches of the library.

Now that we have laid a basic foundation regarding the fundamentals of Scala’s collections framework, we can move onto more interesting things.  The next installment will deal with fold and map, the bread and butter of every collection.

Bencode Stream Parsing in Java


It’s surprising how universal XML has become.  It doesn’t seem to matter what the problem, XML is the solution.  For example, consider a simple client/server architecture where the communication protocol must transmit some sort of structured data.  Nine developers out of ten will form the basis of the protocol around XML.  If it’s a lot of data to be transferred, then they will compress the XML using Java’s stream compression libraries.  If there’s binary data to be transmitted, it will either be stored as CDATA within the XML or as files within the same compressed archive.  Very few developers will actually stop and consider alternative solutions.

One such “alternative solution” is bencode (pronounced “bee-encode”).  Similar to formats like XML and JSON, bencode defines a series of constructs which may be used to encode arbitrarily complex data.  However, unlike XML, the design focus of the format was not to produce verbose, human-readable documents, but rather to encode data in the most concise manner possible.  To that end, the core bencode specification only includes four data types, two simple and two composite structures.  These types are defined with an almost complete absence of meta, requiring very little “structure” to clutter the data stream.
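Concretely, the four constructs are integers (i42e), byte strings (4:spam), lists (l…e), and dictionaries (d…e).  A minimal encoder sketch shows just how little structural overhead is involved (assuming ASCII strings, where character count equals byte count):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class BencodeSketch {
    @SuppressWarnings("unchecked")
    static String encode(Object value) {
        if (value instanceof Integer || value instanceof Long) {
            return "i" + value + "e";                    // integer: i<digits>e
        } else if (value instanceof String) {
            String s = (String) value;
            return s.length() + ":" + s;                 // string: <length>:<bytes>
        } else if (value instanceof List) {
            StringBuilder sb = new StringBuilder("l");   // list: l<items>e
            for (Object item : (List<?>) value) {
                sb.append(encode(item));
            }
            return sb.append('e').toString();
        } else if (value instanceof Map) {
            // dictionary: d<entries>e, with entries sorted by key
            Map<String, Object> sorted = new TreeMap<>((Map<String, Object>) value);
            StringBuilder sb = new StringBuilder("d");
            for (Map.Entry<String, Object> e : sorted.entrySet()) {
                sb.append(encode(e.getKey())).append(encode(e.getValue()));
            }
            return sb.append('e').toString();
        }
        throw new IllegalArgumentException("Unsupported type: " + value);
    }

    public static void main(String[] args) {
        System.out.println(encode(42));                  // i42e
        System.out.println(encode("spam"));              // 4:spam
        System.out.println(encode(List.of("a", 1)));     // l1:ai1ee
    }
}
```

Note the near-total absence of delimiters compared to the equivalent XML or JSON.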

Unfortunately, outside of applications like BitTorrent, this elegant binary format has seen remarkably little adoption.  Because of this state of affairs, it can be extremely difficult to find libraries to actually process bencode data.  Not too long ago, I ran into a production use-case which required both parsing and generation of bencode-formatted files.  I considered digging into the source code for Vuze (nee “Azureus”), but a) it seemed like a lot of boring, nearly-wasted effort, and b) I strongly suspect that their bencode parser and generator are extremely space inefficient, since the data sources which they deal with are remarkably small.

The second hang-up was really a more significant motivator than the first, due to the fact that I knew I would be dealing with bencode streams potentially gigabytes in size.  So, rather than fruitlessly dig through someone else’s code, I decided to put all of this formal parser theory to work and roll my own library.  Unless you’re already familiar with bencode, I suggest you read the Wikipedia article to get a feel for the format, otherwise some of what I will be talking about will make no sense at all.  :-)

The first thing I needed to do was build the generation half of the library.  I decided that it would be easier if I avoided trying to use the same backend framework classes with both the generator and the parser.  For example, there are actually two classes in the framework which contain the logic for handling an integer: IntegerValue and IntegerType.  The former is for use in the parser, while the latter is for use in the generator.  This separation of logic may seem a little strange, but it actually simplifies things tremendously.

Remember my primary requirement: extremely efficient implementation of both generator and parser, especially with respect to space.  If I attempted to use the same classes to represent data for both the parser and the generator, then the parser would be forced to read the entire stream into some sort of in-memory representation (think about it; it’s actually true).  Obviously, this is unacceptable for streams that are gigabytes in size, so the traditional “good design” from an object-oriented standpoint was out.

Stream Generation

Since I needed the functionality of bencode stream generation before I needed parsing, I started with that aspect of the framework.  Here again, the most obvious “object-oriented” approach would have been the wrong one.  When we think of generating output in a structured format programmatically, we naturally imagine a DOM-like tree representation (preferably framework-agnostic) which is then walked by the framework to produce the output.  The major disadvantage to this approach is that it requires paging everything into memory.  This works for smaller applications or situations where the data is already in memory, but for my particular use-case, it would have been disastrous.

The only way to avoid paging everything into memory for stream generation is to structure the API so that the data is “pulled” by the generator, rather than “pushed” to it in tree-form.  In other words, the data itself has to be lazy-loaded, using callbacks to grab the data as-needed and hold it in memory only as long as is absolutely necessary.  In a functional language, this would be done with closures (or even normal data types in a pure-functional language).  However, as we all know, Java does not support such time-saving features.  The only recourse is to use abstract classes and interfaces which can be overridden in anonymous inner-classes as well as top-level classes as necessary.


After a bit of experimentation, the finalized hierarchy looks something like this.  Logically, every type must be able to query its abstract method for data of a certain Java type (long for IntegerType, InputStream for StringType, etc), convert this data into bencode with the appropriate meta, and then write the result to a given OutputStream.  Also following our nose, we see the semantic differences between composite and primitive types are really quite limited, especially if we simplify everything to a black box “get data / write encoding” methodology.  In fact, the only thing that CompositeType actually does is enforce the prefix/suffix encoding of every composite type.  Since this is in compliance with the bencode specification, we are safe in extracting this functionality into a superclass.

The more interesting distinction is between so-called “variant” and “invariant” types.  This is where you should begin to notice that I have over-engineered this library to some degree.  If I was just trying to create a pure bencode generator, then I could have skipped InvariantPrimitiveType and VariantPrimitiveType and just let IntegerType and StringType extend PrimitiveType directly.  This comes back to my initial requirements.

Priority one was to create a framework which was blazingly fast, but priority two was to ensure that it was extensible at the type level.  For the particular application I was interested in, I required more than just the core bencode types.  Also on the agenda were proper UTF-8 strings, dates, and support for null.  To accommodate all of this without too much code duplication, I knew I would have to extract a lot of the functionality into generic superclasses.  Hence my somewhat incorrect use of the terms “variant” and “invariant” to describe the difference between the integer type – which is prefix/suffix delimited – and the string type – which defines a length as its prefix and has no closing suffix.

Anyway, back to the problem at hand.  In addition to the CompositeType and PrimitiveType, you should also notice EntryType.  This “extra” type exists to handle the fact that bencode dictionaries are extremely weird and sit rather outside the “common functionality” umbrella of the format in general.  For one thing, the specification requires that dictionary entries be sorted by key, obviously implying some sort of Comparable relation.  Moreover, these keys must be themselves strings, but StringType isn’t comparable because its writeValue(OutputStream) method doesn’t return the data in question, but merely writes it to a given OutputStream.  Are we starting to see the problems with space-efficient implementations?

Enough babble though, let’s see some code!  Here’s how we might encode some very simple data using the generator framework:

public class GeneratorTest {
    public static void main(String[] args) {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        final byte[] picture = new byte[0];        // presumably something interesting

        DictionaryType root = new DictionaryType() {
            protected void populate(SortedSet<EntryType<?>> entries) {
                entries.add(new EntryType<LiteralStringType>(
                        new LiteralStringType("name"), 
                        new LiteralStringType("Arthur Dent")));

                entries.add(new EntryType<LiteralStringType>(
                        new LiteralStringType("number"), 
                        new IntegerType(42)));

                entries.add(new EntryType<LiteralStringType>(
                        new LiteralStringType("picture"), 
                        new StringType() {
                    protected long getLength() {
                        return picture.length;
                    }

                    protected void writeValue(OutputStream os) throws IOException {
                        os.write(picture);
                    }
                }));

                entries.add(new EntryType<LiteralStringType>(
                        new LiteralStringType("planets"), 
                        new ListType() {
                    protected void populate(ListTypeStream list) throws IOException {
                        list.add(new LiteralStringType("Earth"));
                        list.add(new LiteralStringType("Somewhere else"));
                        list.add(new LiteralStringType("Old Earth"));
                    }
                }));
            }
        };

        try {
            root.write(os);        // pull the data and emit the bencoded stream
        } catch (IOException e) {
            e.printStackTrace();
        }

        System.out.println(new String(os.toByteArray()));
    }

    private static class LiteralStringType extends StringType 
            implements Comparable<LiteralStringType> {
        private final String value;

        public LiteralStringType(String value) {
            this.value = value;
        }

        protected long getLength() {
            return value.length();
        }

        protected void writeValue(OutputStream os) throws IOException {
            os.write(value.getBytes());
        }

        public int compareTo(LiteralStringType o) {
            return value.compareTo(o.value);    // sort dictionary keys in ascending order
        }
    }
}

It’s hard to imagine why some people claim that Java is a verbose language…

The API may seem a little clumsy, but most of that is caused by the conniptions required to make the generator lazily pull the data, rather than paging it all into memory ahead of time.  Throwing that aside, the rest of the verbosity seems to come from the need for LiteralStringType, rather than just having a StringType which could handle this for us.  The reason for this extra headache is shown in the population of the “picture” field, which presumably may contain several megabytes worth of data from some external source such as a file or database (in this case of course, it doesn’t contain anything, but that’s beside the point).

The result of the above is as follows:

d4:name11:Arthur Dent6:numberi42e7:picture0:7:planetsl5:Earth14:Somewhere else9:Old Earthee

Or, with a little formatting to make it more palatable:

d
  4:name
  11:Arthur Dent
  6:number
  i42e
  7:picture
  0:
  7:planets
  l
    5:Earth
    14:Somewhere else
    9:Old Earth
  e
e

Technically, this is no longer valid bencode, but it is much easier to read this way.

The Parser

With all this bustle surrounding the generator, it’s easy to forget about the inverse process: parsing.  As it turns out, this is both easier and far less elegant than the solution for the generator (I know, it’s a sad state of affairs when the above is considered “elegant”).  Here again, there was a need for the parser to be extremely efficient, especially in terms of memory.  Thus, the logical approach of simply parsing the stream into an in-memory tree doesn’t really work.  Instead, the parser must be a so-called “pull parser”, which only parses each token upon request.  The parser only does exactly what work you ask of it, nothing more.

My initial designs for the parser attempted to follow the example set by the generator: each value type self-contained, responsible for parsing its own format.  As it turns out, this can be difficult to accomplish.  I could have expanded slightly on the parser combinator concept, but monads are very clumsy to achieve in Java, which led me to rule out that option.  In the end, I took a middle ground.


As before, a common superinterface sits above the entire representative hierarchy.  To understand this hierarchy a little better, perhaps it would be helpful to look at the full source for Value:

public interface Value<T> {
    public T resolve() throws IOException;
    public boolean isResolved();
}

The resolve() method is really the core of the entire parser.  The concept is that each value will be able to consume the bytes necessary to determine its own value, which is converted and returned.  This is extremely convenient because it enables VariantValue(s) (such as string) to carry the logic for parsing to a specific length, rather than the conventional e terminator.  In order to avoid clogging up memory, the return value of resolve() should not be memoized (though, there is nothing in the framework to prevent it).  Conventionally, values which are already resolved should throw an exception if they are resolved a second time.  This prevents the framework from holding onto values which are no longer needed.
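A toy implementation shows the resolve-once contract in action (OnceValue is my illustration, not a class from the framework):

```java

// the framework's core interface, reproduced here for self-containedness
interface Value<T> {
    T resolve() throws IOException;
    boolean isResolved();
}

class OnceValue implements Value<Long> {
    private final long data;
    private boolean resolved = false;

    OnceValue(long data) { = data;
    }

    public Long resolve() throws IOException {
        if (resolved) {
            // refusing a second resolution lets the framework discard consumed values
            throw new IOException("Value already resolved");
        }
        resolved = true;
        return data;
    }

    public boolean isResolved() {
        return resolved;
    }
}

public class ResolveDemo {
    public static void main(String[] args) throws IOException {
        OnceValue v = new OnceValue(42);

        System.out.println(v.isResolved());   // false
        System.out.println(v.resolve());      // 42
        System.out.println(v.isResolved());   // true
    }
}
```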

You will also notice that CompositeValue not only inherits from Value, but also from the JDK interface, Iterable.  Logically, a composite value is a linear collection of values, consumed one at a time.  To me, that sounds a lot like a unidirectional iterator.  We can, of course, resolve the entire composite at once, mindlessly consuming all of its values, but since all of the values are lost once consumed, the only purpose for such an action would be if we know that we don’t care about a particular composite and we just want to rapidly skip to the next value in the stream.

Returning to primitive values, the resolve() method for IntegerValue is worthy of note, not so much for its uniqueness, but because it is very similar to the parsing technique used in all the other values:

public Long resolve() throws IOException {
    if (resolved) {
        throw new IOException("Value already resolved");
    }
    resolved = true;

    boolean negative = false;
    long value = 0;
    int b = 0;
    while ((b = >= 0) {
        int digit = b - '0';
        if (digit < 0 || digit > 9) {
            if (b == '-') {
                negative = true;
            } else if (b == 'e') {
                break;        // the trailing 'e' terminates the integer
            } else {
                throw new IOException("Unexpected character in integer value: " 
                    + Character.forDigit(b, 10));
            }
        } else {
            value = (value * 10) + digit;
        }
    }

    if (negative) {
        value *= -1;
    }
    return value;
}

The i prefix itself is consumed before control flow even enters this method.  This is because the prefix is required to determine the appropriate value implementation to use.  Specifically, the logic to perform this determination is contained within the Parser class, which maintains a map of Value(s) and their associated prefixes.  String values have special logic associated with them, as they do not have a prefix.

As with most hand-coded parsers, this one operates on the principle of “eat until it hurts”.  We start out by assuming that the integer value extends to the end of the stream, then we set about to find a premature end to the integer, at which point we break out and call it a day.  Since we are moving from left to right through a base-10 integer, we must multiply the current accumulator by 10 prior to adding the new digit. 
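The same left-to-right accumulation can be exercised standalone; here is a self-contained sketch (my own simplification of the framework's logic) run against an in-memory stream:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class IntToken {
    // parses the body of an i...e integer, assuming the leading 'i' is consumed
    static long parseBody(InputStream is) throws IOException {
        boolean negative = false;
        long value = 0;
        int b;
        while ((b = >= 0) {
            if (b == '-') {
                negative = true;
            } else if (b == 'e') {
                break;                              // the terminator "hurts": stop eating
            } else if (b >= '0' && b <= '9') {
                value = (value * 10) + (b - '0');   // shift the accumulator left one digit
            } else {
                throw new IOException("Unexpected character: " + (char) b);
            }
        }
        return negative ? -value : value;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(parseBody(new ByteArrayInputStream("-42e".getBytes())));   // -42
    }
}
```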

Actually, the real heart of the parser framework is CompositeValue.  This class is inherited by Parser to define a special value encompassing the stream itself (which is viewed as a composite value with no delimiters and only a single child).  This unification allows us to keep the code for parsing a composite stream in a single location.  This implementation is a little less concise than the code for parsing an integer, but it follows the same pattern and is fairly instructive:

protected final Value<?> parse() throws IOException {
    if (resolved) {
        throw new IOException("Composite value already resolved");
    }

    if (previous != null) {
        if (!previous.isResolved()) {
            previous.resolve();        // ensure we're at the right spot in the stream
        }
    }

    byte b = -1;
    if (readAhead instanceof Some) {
        b = readAhead.value();
        readAhead = new None<Byte>();
    } else {
        b = read();
    }

    if (b >= 0) {
        Class<? extends Value<?>> valueType = parser.getValueType(b);

        if (valueType != null) {
            return previous = Parser.createValue(valueType, parser, is);
        } else if (b > '0' && b <= '9') {
            return previous = readString(b - '0');
        } else if (b == ' ' || b == '\n' || b == '\r' || b == '\t') {
            return parse();        // loop state
        } else {
            throw new IOException("Unexpected character in the parse stream: " 
                + Character.forDigit(b, 10));
        }
    }
    throw new IOException("Unexpected end of stream in composite value");
}

private final StringValue readString(long length) throws IOException {
    int i =;
    if (i >= 0) {
        byte b = (byte) i;

        if (b == ':') {
            return Parser.createValue(StringValue.class, parser, 
                new SubStream(is, length));
        } else if (b >= '0' && b <= '9') {
            return readString((length * 10) + b - '0');
        } else {
            throw new IOException("Unexpected character in string value: " 
                + Character.forDigit(i, 10));
        }
    }
    throw new IOException("Unexpected end of stream in string value");
}

It seems a bit imposing, but really this code is more of the same logic we saw previously when dealing with integers.  The only value type which really gives us trouble here is string.  We can’t simply treat it like the others because it has no prefix.  For this reason, we must assume that any unbound integer is an inclusive prefix for a string.  In most parser implementations, this would require backtracking, but because we are doing this by hand, we can condense the backtrack into an inherited parameter (borrowing terminology from attribute grammars), avoiding the performance hit.
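The string case can likewise be isolated: digits accumulate as a length until ':' appears, then exactly that many bytes are consumed.  Here is a sketch that reads eagerly (the real framework hands back a lazy SubStream instead):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StringToken {
    // reads a <length>:<bytes> string token from the stream
    static String parseString(InputStream is) throws IOException {
        long length = 0;
        int b;
        while ((b = >= 0 && b != ':') {
            if (b < '0' || b > '9') {
                throw new IOException("Unexpected character: " + (char) b);
            }
            length = (length * 10) + (b - '0');   // the "integer" was really a length prefix
        }

        byte[] data = new byte[(int) length];
        if ( < data.length) {
            throw new IOException("Unexpected end of stream in string value");
        }
        return new String(data);
    }

    public static void main(String[] args) throws IOException {
        InputStream is = new ByteArrayInputStream("14:Somewhere else".getBytes());
        System.out.println(parseString(is));   // Somewhere else
    }
}
```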

There’s one final bit of weirdness which deserves attention before we bail on this small epic: dictionary values.  Intuitively, a dictionary value should be parsed into a Java Map, or some sort of associative data structure.  Unfortunately, a map is by definition a random access data structure.  Since we are dealing with a sequential bencode stream, the only recourse to satisfy this property would be to page the entire dictionary into memory.  This of course violates one of the primary requirements which is to avoid using more memory than necessary.

The solution I eventually chose to this problem was to limit dictionary access to sequential, which translates into alphabetical given the nature of bencode dictionaries.  Thus, a dictionary can be parsed in the same way as a list, where each element is a sequential key and value, jointly represented by EntryValue.  To make usage patterns slightly easier, EntryValue memoizes the key and value.  Due to the fact that both of these objects are themselves Value(s), this does not lead to inadvertent memory bloat.


Hopefully the parser and generator presented here will be of some utility in situations where you have to parse large volumes of bencoded data.  The API is (admittedly) bizarre and difficult to deal with, but the performance results are difficult to deny.  This framework is currently deployed in production, where benchmarks have shown that it imposes little-to-no runtime overhead, and practically zero memory overhead (despite the sizeable amounts of data being processed).

For convenience, I actually created a Google Code project for this framework so as to facilitate its development internally to the project I was working on.  The end result of this is unlike most of my experiments, there is actually a proper SVN from which the source may be obtained!  A packaged JAR may be obtained from the downloads section.

Implementing Groovy’s Elvis Operator in Scala


Groovy has an interesting shortening of the ternary operator that it rather fancifully titles “the Elvis Operator”.  This operator is hardly unique to Groovy – C# has had it since 2.0 in the form of the Null Coalescing Operator – but that doesn’t mean that it is not a language feature worth learning from.  Surprisingly (for a C-derivative language), Scala entirely lacks any sort of ternary operator.  However, the language syntax is more than flexible enough to implement something similar without ever having to dip into the compiler.

But before we go there, it is worth examining what this operator does and how it works in languages which already have it.  In essence, it is just a bit of syntax sugar, allowing you to easily check if a value is null and provide a value in the case that it is.  For example:

firstName = "Daniel"
lastName = null
println firstName ?: "Chris"
println lastName ?: "Spiewak"

This profound snippet really demonstrates about all there is to the Elvis operator.  The result is as follows:
Daniel
Spiewak

Not terribly exciting.  Essentially, what we have is a binary operator which evaluates the left expression and tests to see if it is null.  In the case of firstName, this is false, so the right expression (in this case, "Chris") is never evaluated.  However, lastName is null, which means that we have to evaluate the right expression and return its value, rather than null.  It’s all just so much syntax sugar that can be expressed equivalently in any language with a conditional operator (in this case, Java):

String firstName = "Daniel";
String lastName = null;
System.out.println((firstName == null) ? "Chris" : firstName);
System.out.println((lastName == null) ? "Spiewak" : lastName);

A bit verbose, don’t you think?  Of course, this isn’t really a fair comparison, since Groovy is a far more concise language than Java.  Let’s see how the above would render in a real man’s language like Scala:

val firstName = "Daniel"
val lastName: String = null
println(if (firstName == null) "Chris" else firstName)
println(if (lastName == null) "Spiewak" else lastName)

Better, but still a little clumsy.  The truth of the matter is that we’re forced to do this sort of null checking all the time (well, maybe a little less in Scala) and the constructs for doing so are woefully inadequate.  Thus, the motivation for the Elvis operator.

Getting Things Started

Like all good programmers should, we’re going to start with a runnable specification for every behavior desired from the operator.  I’ve written before about the excellent Specs framework, so that’s what we’ll use:

"elvis operator" should {
  "use predicate when not null" in {
    "success" ?: "failure" mustEqual "success"
  }

  "use alternative when null" in {
    val test: String = null
    test ?: "success" mustEqual "success"
  }

  "type correctly" in {		// if it compiles, then we're fine
    val str: String = "success" ?: "failure"
    val i: Int = 123 ?: 321

    str mustEqual "success"
    i mustEqual 123
  }

  "infer join of types" in {    // must compile
    val res: CharSequence = "success" ?: new java.lang.StringBuilder("failure")
    res mustEqual "success"
  }

  "only eval alternative when null" in {
    var a = "success"
    def alt = {
      a = "failure"
      a
    }

    "non-null" ?: alt
    a mustEqual "success"
  }
}

Fairly straightforward stuff.  I imagine that this specification for the operator is a bit more involved than the one used in the Groovy compiler, due to the fact that Scala is a statically typed language and thus requires a bit more effort to ensure that everything is working properly.  From this specification, we can infer three core properties of the operator:

  1. Basic behavior when null/not-null
  2. The result type should be the unification of the static types of the left and right operands
  3. The right operand should only be evaluated when the left is null

The first property is fairly easy to understand; it is intuitive in the definition of the operator.  All this means is that the value of the operator expression is dependent on the value of the left operand.  When not null, the expression value is equal to the value of the left operand.  If the left operand is null, then the expression is valued equivalent to the right operand.  This is just formally expressing what we spent the first section of the article describing.

Ignoring the second and third properties, we can actually attempt an implementation.  For the moment, we will just assume that the left and right operands must be of exactly the same type, otherwise the operator will be inapplicable.  So, without further ado, implementation enters stage right:

implicit def elvisOperator[T](alt: T) = new {
  def ?:(pred: T) = if (pred == null) alt else pred
}

Notice the use of the anonymous inner class to carry the actual operator?  This is a fairly common trick in Scala to avoid the definition of a full-blown class just for the sake of adding a method to an existing type.  To break down what’s going on here, we have defined an implicit type conversion from any type T to our anonymous inner class.  This conversion will be inserted by the compiler whenever we invoke the ?: operator on an expression.

Sharp-eyed developers will notice something a little odd about the way this code is structured.  In fact, if you look closely, it seems that we evaluate the right operand and use its value if non-null (otherwise left), which is exactly the opposite of what our specification defines.  For a normal operator, this observation would be quite correct.  However, Scala defines the associativity of operators based on the trailing symbol.  In this case, because our trailing symbol is a colon (:), the operator itself will be right-associative.  Thus, the following expression:

check ?: alternate

…is transformed by the compiler into the following:

alternate.?:(check)
This is how right-associative operators function, by performing method calls on the right operand.  Thus, we need to define our implicit conversion such that the ?: method will be defined for the right operand, taking the left operand as a parameter.  We’ll see a bit later on how this can cause trouble, but for now, let’s continue with the specification.
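To see the desugaring in action, here is a self-contained sketch; it uses a named class in place of the anonymous one above, purely so the snippet stands alone:

```scala
import scala.language.implicitConversions

object RightAssocDemo extends App {
  // The familiar List cons operator is right-associative for the same reason:
  // 1 :: 2 :: Nil is really Nil.::(2).::(1)
  assert((1 :: 2 :: Nil) == Nil.::(2).::(1))

  class Elvis[T](alt: T) {
    def ?:(pred: T): T = if (pred == null) alt else pred
  }
  implicit def elvisOperator[T](alt: T): Elvis[T] = new Elvis(alt)

  val check: String = null

  // The sugared form...
  val a = check ?: "alternate"
  // ...and the explicit call the compiler generates: the conversion wraps
  // the right operand, and the left operand arrives as pred
  val b = elvisOperator("alternate").?:(check)

  assert(a == "alternate" && b == "alternate")
  println(a)
}
```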

A Little Type Theory

The second property is a little tougher.  Type unification is one of those pesky issues that plague statically typed languages and are simply irrelevant in those with dynamic type systems.  The issue arises from the following question: what happens if the left and right operands are of different types?  In Groovy, this is a non-issue because the value of the expression is simply dynamically typed according to the runtime type of the operand which is chosen.  However, Scala requires static type information, which means that we need to ensure that the static type of the expression is sound for either the left or the right operand (since Scala does not have non-nullable types).  The best way to do this is to compute the least upper bound of the two types, an operation which is also known as minimal unification.  Consider the following hierarchy:

Object
 ├── Fruit
 │    ├── Apple
 │    └── Pear
 └── Vegetable

Now imagine that the left operand is of static type Apple, while the right operand is of static type Pear.  We need to find a static type which is safe for both of these.  Intuitively, this type would be Fruit, since it is a common superclass of both Apple and Pear.  Regardless of which expression is chosen at runtime, we will be able to polymorphically treat the value as a value of type Fruit.  The intuition in this case is quite correct.  In fact, it actually has a rigorous mathematical proof…which I won’t go into.  (cue sighs of relief)

One additional example should serve to really drive the point home.  Consider the scenario where the left operand has type Vegetable and the right operand has type Apple.  This is a bit trickier, but it recursively boils down to the same case.  The only common superclass between these two types is Object, due to the fact that the hierarchies are disjoint.

This operation is fairly easy to perform by hand given the full type hierarchy.  For that matter, it isn’t very difficult to write an algorithm which can efficiently compute the minimal unification of two types.  Unfortunately, we don’t have that luxury here.  We cannot simply write code which is executed at compile time to determine type information, we must make use of the existing Scala type system in order to “trick” the compiler into inferring things for us.  We do this by making use of lower-bounds on type parameters.  With this in mind, we can (finally) make a first attempt at a well-typed implementation of the operator:

implicit def elvisOperator[T](alt: T) = new {
  def ?:[A >: T](pred: A) = if (pred == null) alt else pred
}

The only thing we have changed is the type of the pred parameter from T to a new type parameter, A.  This new type parameter is declared with a lower bound of T.  Translated into English, the type expression reads something like the following:

Accept parameter pred of some type A which is a super-type of T.

The real magic of the expression is that pred need not be exactly of type A; it could also be a subtype.  Thus, A is some generic supertype which encompasses both the types of the left and the right operands.
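To watch the lower bound at work, consider a small sketch using a hypothetical Fruit hierarchy.  The unify method below merely mirrors the shape of the operator's signature; all names are illustrative:

```scala
object LowerBoundDemo extends App {
  class Fruit { override def toString = "some fruit" }
  class Apple extends Fruit
  class Pear  extends Fruit

  // Mirrors the shape of the operator's signature: A must be a supertype
  // of T, and the compiler picks the least such supertype that fits pred
  def unify[T, A >: T](alt: T)(pred: A): A = if (pred == null) alt else pred

  // A is inferred as Fruit, the least upper bound of Apple and Pear
  val f: Fruit = unify(new Apple)(new Pear)

  // Disjoint types unify higher up; here nothing short of AnyRef fits
  val o: AnyRef = unify(new Apple)("definitely not a fruit")

  println(f)
}
```

Both assignments type-check without a cast, which is the whole point: the compiler computes the minimal unification for us.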

Fancy Parameter Types

This allows us to move on to the third property: only evaluate the right operand if the left is null.  This is the normal behavior for conditional expressions.  After all, you wouldn’t want your code performing an expensive operation (such as grabbing data from a server somewhere) just to throw away the result because a different branch of the conditional was chosen.  Actually, the bigger issue with ignoring this property (as we have done so far) is that the right operand may actually have side-effects.  Scala isn’t a pure functional language, so evaluating expressions that we don’t need (or worse, that the developer isn’t expecting) can have extremely dire consequences.

Unfortunately, at first glance, there doesn’t really seem to be a way to avoid this evaluation.  After all, we need to invoke the ?: method on something.  We could try using a left-associative operator instead (such as C#’s ?? operator), but even that wouldn’t fully solve the problem as we would still need to pass the right operand as a parameter.  In short, it seems like we’re stuck.

The good news is that Scala’s designers chose to adopt an age-old construct known as “pass-by-name parameters”.  This technique dates all the way back to ALGOL (possibly even further).  In fact, it’s so old and obscure that I’ve actually had professors tell me that it has been completely abandoned in favor of the more conventional pass-by-value (what Java, C#, Scala and most languages use) and pass-by-reference (which is available in C++).  Pass-by-name parameters are very much like normal parameters in that they are used to copy values from a calling scope into the method in question.  However, unlike normal parameters, they are evaluated on an as-needed basis.  This means that a pass-by-name parameter will only be evaluated if its value is required within the method called.  For example:

def doSomething(a: =>Int) = 1 + 2

def createInteger() = {
  println("Made integer")
  42
}

println("In the beginning...")
doSomething(createInteger())
println(" the end")

Counter to our first intuition, this will print the following:

In the beginning...
 the end

In other words, the createInteger method is never called!  This is because the value of the pass-by-name parameter in the doSomething method is never accessed, meaning that the value of the expression is not needed.  The a parameter is denoted pass-by-name by the => notation (just in case you were wondering).  We can apply this to our implementation by changing the parameter of the implicit conversion from pass-by-value to pass-by-name:

implicit def elvisOperator[T](alt: =>T) = new {
  def ?:[A >: T](pred: A) = if (pred == null) alt else pred
}

The language-level implementation of the if/else conditional expression will ensure that the alt parameter is accessed only if the value of pred is null, meaning we have finally satisfied all three properties.  We can check this by compiling and running our specification from earlier:

Specification "TernarySpecs"
  elvis operator should
  + use predicate when not null
  + use alternative when null
  + type correctly
  + infer join of types
  + only eval alternative when null

Total for specification "TernarySpecs":
Finished in 0 second, 78 ms
5 examples, 6 assertions, 0 failure, 0 error


We now have a working implementation of Groovy’s Elvis operator within Scala and we never had to move beyond simple API design.  Truly, one of Scala’s greatest strengths is its ability to express extremely complex constructs within the confines of the language.  This makes it uniquely well-suited to hosting internal domain-specific languages.  Using techniques similar to the ones I have outlined in this article, it is possible to define operations which would require compiler-level implementation in most languages.

The full source (such as it is) for the Elvis operator in Scala is available for download, along with a bonus implementation of C#’s ?? syntax (just in case you prefer it).  The implementation differs slightly due to the fact that ?? is a left-associative operator, but the single-use (unchained) semantics are identical.  Enjoy!
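For the curious, a left-associative ?? along the lines described might look something like the following.  This is my own reconstruction, not necessarily the implementation from the download; note how the reversed associativity lets the right operand be declared by-name directly on the method, so short-circuiting falls out naturally:

```scala
import scala.language.implicitConversions

object NullCoalescing extends App {
  // With a left-associative operator, the conversion wraps the LEFT operand
  // and the right operand arrives as an ordinary (by-name) parameter
  class Coalescer[T](pred: T) {
    def ??[A >: T](alt: =>A): A = if (pred == null) alt else pred
  }
  implicit def toCoalescer[T](pred: T): Coalescer[T] = new Coalescer(pred)

  val lastName: String = null

  println("Daniel" ?? "Chris")      // left operand non-null: "Daniel"
  println(lastName ?? "Spiewak")    // left operand null: "Spiewak"

  assert(("Daniel" ?? "Chris") == "Daniel")
  assert((lastName ?? "Spiewak") == "Spiewak")
}
```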

Pipe Dream: Static Analysis for Ruby


Yes, yes I know: Ruby is a dynamic language.  The word “static” is literally opposed to everything the language stands for.  With that said, I think that even Ruby development environments would benefit from some simple static analysis, just enough to catch the really idiotic errors.

Here’s the crux of the problem: people don’t test well.  Even with nice, behavior-driven development as facilitated by frameworks like RSpec, very few developers sufficiently test their code.  This isn’t just a problem with dynamic languages either, no one is safe from the test disability.  In some ways, it’s a product of laziness, but I think in most cases, good developers just don’t want to work on mundane problems.  It’s boring having to write unit test after unit test, checking and re-checking the same snippet of code with different input.

In some sense, it is this problem that compilers and static type systems try to avert, at least partially.  The very purpose of a static type system is to be able to prove certain things about your code simply by analysis.  By enabling the compiler to say things using the type system, the language is providing a safety net which filters out ninety percent of the annoying “no-brainer” mistakes.  A simple example would be invoking a method with the wrong parameters; or worse yet, misspelling the name of the method or type altogether.

The problem is that there are some problems which are more simply expressed in ways which are not provably sound.  In static languages, we get around this by casting, but such techniques are ugly and obviously contrived.  It is this problem which has given rise to the kingdom of dynamic languages; it is for this reason that most scripting languages have dynamic type systems: simple expression of algorithm without worrying about provability.  In fact, there are so many problems which do not fit nicely within most type systems that many developers have chosen to eschew static languages altogether, claiming that static typing just gets in the way.

Unfortunately, by abandoning static types, these languages lose that typo safety net.  It’s too easy to make a trivial mistake in a dynamic language, buried somewhere deep in the bowels of your application.  This mistake could easily be averted by a compiler with validating semantic analysis, but in a dynamic language, such a mistake could go unnoticed, conceivably even making it into production.  For this reason, most dynamic language proponents are also strong advocates of solid, comprehensive testing.  They have to be, for without such testing, one should never trust dynamic code in a production system (or any code, for that matter, but especially the unchecked dynamic variety).

Most large, production systems written in languages like Ruby or Groovy have large test suites which sometimes take hours to run.  These suites are extremely fine-grained, optimally checking every line of code with every possible kind of input, so as to be sure that mistakes are caught.  This is where the flexibility of dynamic typing really comes back to haunt you: extra testing is required to ensure that silly mistakes don’t slip through.  The irony is that a lot of developers using dynamic languages do so to get away from the “nuisance” of compilation, when all they have done is trade one inconvenience for another (testing).

Given this situation, it’s not unreasonable to conclude that what dynamic languages really need is a tool which can look through code and find all of those brain-dead mistakes.  Such a tool could be run along with the normal test suite, finding and reporting errors in much the same way.  It wouldn’t really have to be a compiler, so the tool wouldn’t slow down the development process; it would just be an effective layer of automated white-box testing.

But how could such a thing be accomplished in a language like Ruby?  After all, it is a truly dynamic language.  Methods don’t even exist until runtime, and sometimes only if certain code paths are run.  Types are completely undeclared, and every object can potentially respond to any method.  The answer is to perform extremely permissive inference.

It was actually a recent post by James Ervin on the nomenclature of type systems which got me thinking along these lines.  It should be possible by static analysis to infer the structural type of any value based on its usage.  Consider:

def do_something(obj)
  if obj.to_i == 0
    other = obj.find :name => 'Daniel'
    other.to_s
  else
    obj[:name]
  end
end

Just by examining this code, we can say certain things about the types involved.  For instance, we know that obj must respond to the following methods:

  • to_i
  • [Symbol]
  • find(Hash)

In turn, we know that the find(Hash) method must return a value which defines to_s.  Of course, this last bit of information isn’t very useful, because every object defines that method, but it’s still worth the inference.  The really useful inference which comes out of to_s is the knowledge that this method sometimes returns a value of type String (making the assumption that to_s hasn’t been redefined to return a different type, which isn’t exactly a safe assumption).  At other times, do_something will return whatever value comes from the square bracket operator ([]) on obj.  This bit of information we must remember in the analysis.  We can’t just assume that this method will return a String all the time, even if to_s does because method return types need not be homogeneous in dynamic languages.

Now, at this point we have effectively built up a structural type which is accepted by do_something.  Literally, we have formalized in the analysis what our intuition has already told us about the method.  There are some gaps, but that is to be expected.  The key to this analysis is not attempting to be comprehensive.  Dynamic languages cannot be analyzed as if they were static, one must expect to have certain limitations.  In such situations where the analysis is insufficient, it must assume that the code is valid, otherwise there will be thousands of false positives in the error checking.
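Incidentally, Scala already has a notation for writing down the result of exactly this kind of inference: structural types.  Purely as an illustration (the analysis itself targets Ruby; DoSomethingArg, Record and the member signatures below are all invented for the sketch, with apply standing in for Ruby's [] operator):

```scala
import scala.language.reflectiveCalls

object StructuralDemo extends App {
  // What the inferred "shape" of do_something's parameter might look like,
  // written down in Scala's structural-type syntax
  type DoSomethingArg = {
    def to_i: Int
    def apply(key: Symbol): Any
    def find(query: Map[Symbol, String]): Any
  }

  // Any class with the right members conforms, no inheritance required
  class Record {
    def to_i: Int = 0
    def apply(key: Symbol): Any = "some value"
    def find(query: Map[Symbol, String]): Any = "Daniel"
  }

  def doSomething(obj: DoSomethingArg): Any =
    if (obj.to_i == 0)
      obj.find(Map(Symbol("name") -> "Daniel")).toString
    else
      obj(Symbol("name"))

  println(doSomething(new Record))
  // doSomething(42) is rejected at compile time: Int has neither apply
  // nor find, which is precisely the error the Ruby analysis would report
}
```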

So what is it all good for?  Well, imagine that somewhere else in our application, we have the following bit of code:

do_something 42

This is something we know will fail, because we have a simple value (42) which has a nominal type we can easily infer.  A little bit of checking on this type reveals the fact that it does not define square brackets, nor does it define a find(Hash) method.  This finding could be reported as an error by the analysis engine.

Granted, we still have to account for the fact that Ruby has things like method_missing and open classes, but all of this can fall into the fuzzy area of the analysis.  In situations where it might be alright to pass an object which does not satisfy a certain aspect of the structural type, the analysis must let it pass without question.

You can imagine how this analysis could traverse the entire source tree, making the strictest inferences it can and allowing for dynamic fuzziness where applicable.  Since the full sources of every Ruby function, class and module are available at runtime, analysis could be performed without any undue concern regarding obfuscation or parsing of binaries.  Conceivably, most trivial errors could be caught without any tests being written, taking some of the burden off of the developer.  There is a slight concern that developers would build up a false sense of security regarding their testing (or lack thereof), but I think we just have to trust that won’t happen, or won’t last long if it does.

Most advanced Ruby toolsets already have an analysis somewhat similar to the one I outlined.  NetBeans Ruby for example has some fairly advanced nominal type inference to allow things like semantic highlighting and content assist.  But as far as I know, this type inference is only nominal, and fairly local at that.  The structural type inference that I am proposing could conceivably provide far better assurances and capabilities than mere nominal inference, especially if enhanced through successive iteration and a more “global” approach (similar to Hindley/Milner in static languages).

One thing is certain: it isn’t working to just rely on developers being conscientious with their testing.  With the rapid rise in production systems running on dynamic languages, it is in all of our best interests to try to find a way to make these systems more stable and reliable.  The best way to do this is to start with code assurance and try to make it a little less painful to catch mistakes before deployment.

The Need for a Common Compiler Framework


In recent years, we have seen a dramatic rise in the number of languages used in mainstream projects.  In particular, languages which run on the JVM or CLR have become quite popular (probably because sane people hate dealing with x86 assembly).  Naturally, such languages prefer to interoperate with other languages built on these core platforms, particularly Java and C# (respectively).  Collectively, years of effort have been put into devising and implementing better ways of working with libraries written in these “parent languages”.  The problem is that such efforts are crippled by one fundamental limitation: circular dependencies.

Let’s take Scala as an example.  Of all of the JVM languages, this one probably has the potential for the tightest integration with Java.  Even Groovy, which is renowned for its integration, still falls short in many key areas.  (generics, anyone?)  With Scala, every class is a Java class, every method is a Java method, and there is no API which cannot be accessed from Java as natively as any other.  For example, I can write a simple linked list implementation in Scala and then use it in Java without any fuss whatsoever (warning: untested sample):

class LinkedList[T] {
  private var root: Node = _

  def add(data: T) = {
    val insert = Node(data, null)
    if (root == null) {
      root = insert
    } else {
      var node = root
      while (node.next != null)
        node = node.next
      node.next = insert
    }
  }

  def get(index: Int) = {
    def walk(node: Node, current: Int): T = {
      if (node == null) {
        throw new IndexOutOfBoundsException(index.toString)
      }
      if (current < index) {
        walk(node.next, current + 1)
      } else {
        node.data
      }
    }
    if (index < 0) {
      throw new IndexOutOfBoundsException(index.toString)
    }
    walk(root, 0)
  }

  def size = {
    def walk(node: Node): Int = if (node == null) 0 else 1 + walk(node.next)
    walk(root)
  }

  private case class Node(data: T, var next: Node)
}

Once this class is compiled, we can use it in our Java code just as if it were written within the language itself:

public class Driver {
    public static void main(String[] args) {
        LinkedList<String> list = new LinkedList<String>();
        for (String arg : args) {
            list.add(arg);
        }
        System.out.println("List has size: " + list.size());
        for (int i = 0; i < list.size(); i++) {
            System.out.println(list.get(i));
        }
    }
}
Impressively seamless interoperability!  We actually could have gotten really fancy and thrown in some operator overloading.  Obviously, Java wouldn’t have been able to use the operators themselves, but it still would have been able to call them just like normal Java instance methods.  Using Scala in this way, we can get all the advantages of its concise syntax and slick design without really abandoning our Java code base.
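As a quick illustration of that last point, a symbolic Scala method remains perfectly callable from Java under its JVM-mangled name.  A hedged sketch (the Box class is invented for this example; $plus is the encoding scalac uses for the + operator):

```scala
object OperatorDemo extends App {
  // A hypothetical collection with a symbolic method (not the LinkedList
  // from the article)
  class Box[T] {
    private var items = List[T]()
    def +(data: T): Box[T] = { items = items ::: List(data); this }
    def size: Int = items.length
  }

  val box = new Box[String]
  box + "one" + "two"
  assert(box.size == 2)

  // On the JVM the + method is emitted under the name $plus, so plain
  // Java could still call it:
  //
  //   Box<String> box = new Box<String>();
  //   box.$plus("one").$plus("two");

  println(box.size)
}
```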

The problem comes in when we try to satisfy more complex cases.  Groovy proponents often trot out the example of a Java class inherited by a Groovy class which is in turn inherited by another Java class.  In Scala, that would be doing something like this:

// Shape.java
public abstract class Shape {
    public abstract void draw(Canvas c);
}

// Rectangle.scala
class Rectangle(val width: Int, val height: Int) extends Shape {
  override def draw(c: Canvas) {
    // ...
  }
}

// Square.java
public class Square extends Rectangle {
    public Square(int size) {
        super(size, size);
    }
}

Unfortunately, this isn’t exactly possible in Scala.  Well, I take that back.  We can cheat a bit and first compile Shape using javac, then compile Rectangle using scalac and finally Square using javac, but that would be quite nasty indeed.  What’s worse is such a technique would completely fall over if the Canvas class were to have a dependency on Rectangle, something which isn’t too hard to imagine.  In short, Scala is bound by the limitations of a separate compiler, as are most languages on the JVM.

Groovy solves this problem by building their own Java compiler into groovyc, thus allowing the compilation of both Java and Groovy sources within the same process.  This solves the problem of circular references because neither set of sources is completely compiled before the other.  It’s a nice solution, and one which Scala will be adopting in an upcoming release of its compiler.  However, it doesn’t really solve everything.

Consider a more complex scenario.  Imagine we have Java class Shape, which is extended by Scala class Rectangle and Groovy class Circle.  Imagine also that class Canvas has a dependency on both Rectangle and Circle, perhaps for some special graphics optimizations.  Suddenly we have a three-way circular dependency and no way of resolving it without a compiler which can handle all three languages: Java, Groovy and Scala.  This is starting to become a bit more interesting.

Of course, we can solve this problem in the same way we solved the Groovy-Java dependence problem: just add support to the compiler!  Unfortunately, it may have been trivial to implement a Java compiler as part of groovyc, but Scala is a much more difficult language from a compiler’s point of view.  But even supposing that we do create an integrated Scala compiler, we still haven’t solved the problem.  It’s not difficult to imagine throwing another language into the mix; Clojure, for example.  Do we keep going, tacking languages onto our once-Groovy compiler until we support everything usable on the JVM?  It should be obvious why this is a bad plan.

A more viable solution would be to create a common compiler framework, one which would be used as the basis for all JVM languages.  This framework would have common abstractions for things like name resolution and type checking.  Instead of creating an entire compiler from scratch, every language would simply extend this core framework and implement their own language as some sort of module.  In this way, it would be easy to build up a custom set of modules which solve the needs of your project.  Since the compilers are modular and based on the same core framework, they would be able to handle simultaneous compilation of all JVM languages involved, effectively solving the circular dependency problem in a generalized fashion.

The framework could even make things easier on would-be compiler implementors by handling common operations like bytecode emission.  Fundamentally, all of these tightly-integrated languages are just different front-ends to a common backend: the JVM.  I haven’t looked at the sources, but I would imagine that there is a lot of work which had to be done in each compiler to solve problems which were already handled in another.

Of course, all this is purely speculative.  Everyone builds their compiler in a slightly different way (slightly => radically in the case of languages like Scala) and I wouldn’t imagine that it would be easy to build this sort of common compiler backend.  However, the technology is in place.  We already have nice module systems like OSGi, and we’re certainly no strangers to the work involved in building up a proper CLASSPATH for a given project.  Why should this be any different?

It’s not without precedent either.  GCC defines a common backend for a number of compilers, such as G++, GCJ and even an Objective-C compiler.  Granted, it’s neither as high-level nor as modular as we would need to solve circular dependencies, but it’s something to go on.

It will be interesting to see where the JVM language sphere is headed next.  The rapid emergence of so many new languages is leading to problems which will have to be addressed before the polyglot methodology will be truly accepted by the industry.  Some of the smartest people in the development community are working toward solutions; and whether they take my idea of a modular framework or not, somewhere along the line the problem of simultaneous compilation must be solved.