
The Magic Behind Parser Combinators

24 Mar 2009

If you’re like me, one of the first things that attracted you to Scala was its parser combinators.  Well, maybe that wasn’t the first thing for me, but it was pretty far up there.  Parser combinators make it almost too easy to create a parser for a complex language without ever leaving the comfortable play-pen afforded by Scala.  Incidentally, if you aren’t familiar with the fundamentals of text parsing, context-free grammars and/or parser generators, then you might want to do some reading before you continue with this article.

Intro to Parser Combinators

In most languages, the process of creating a text parser is an arduous and clumsy affair involving a parser generator (such as ANTLR, JavaCC, Beaver or <shamelessPlug>ScalaBison</shamelessPlug>) and (usually) a lexer generator such as JFlex.  These tools do a very good job of generating sources for efficient and powerful parsers, but they aren’t exactly the easiest tools to use.  They generally have a very steep learning curve and, due to their unique status as compiler-compilers, an unintuitive architecture.  Additionally, these tools can be somewhat rigid, making it very difficult to implement unique or experimental features.  For this reason alone, many real world compilers and interpreters (such as javac, ruby, jruby and scalac) actually use hand-written parsers.  These are usually easier to tweak, but they can be very difficult to develop and test.  Additionally, hand-written parsers have a tendency toward poor performance (think: the Scala compiler).

When creating a compiler in Scala, it is perfectly acceptable to make use of these conventional Java-generating tools like ANTLR or Beaver, but we do have other options.  Parser combinators are a domain-specific language baked into the standard library.  Using this internal DSL, we can create an instance of a parser for a given grammar using Scala methods and fields.  What’s more, we can do this in a very declarative fashion.  Thanks to the magic of DSLs, our sources will actually look like a plain-Jane context-free grammar for our language.  This means that we get most of the benefits of a hand-written parser without losing the maintainability afforded by parser generators like bison.  For example, here is a very simple grammar for a simplified Scala-like language, expressed in terms of parser combinators:

object SimpleScala extends RegexpParsers {
 
  val ID = """[a-zA-Z]([a-zA-Z0-9]|_[a-zA-Z0-9])*"""r
 
  val NUM = """[1-9][0-9]*"""r
 
  def program = clazz*
 
  def classPrefix = "class" ~ ID ~ "(" ~ formals ~ ")"
 
  def classExt = "extends" ~ ID ~ "(" ~ actuals ~ ")"
 
  def clazz = classPrefix ~ opt(classExt) ~ "{" ~ (member*) ~ "}"
 
  def formals = repsep(ID ~ ":" ~ ID, ",")
 
  def actuals = expr*
 
  def member = (
      "val" ~ ID ~ ":" ~ ID ~ "=" ~ expr
    | "var" ~ ID ~ ":" ~ ID ~ "=" ~ expr
    | "def" ~ ID ~ "(" ~ formals ~ ")" ~ ":" ~ ID ~ "=" ~ expr
    | "def" ~ ID ~ ":" ~ ID ~ "=" ~ expr
    | "type" ~ ID ~ "=" ~ ID
  )
 
  def expr: Parser[Expr] = factor ~ (
      "+" ~ factor
    | "-" ~ factor
  )*
 
  def factor = term ~ ("." ~ ID ~ "(" ~ actuals ~ ")")*
 
  def term = (
      "(" ~ expr ~ ")"
    | ID
    | NUM
  )
}

This is all valid and correct Scala.  The program method returns an instance of Parser[List[Class_]], assuming that Class_ is the AST class representing a syntactic class in the language (and assuming that we had added all of the boilerplate necessary for AST generation).  This Parser instance can then be used to process a java.io.Reader, producing some result if the input is valid, otherwise producing an error.
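
Conceptually, driving the finished parser is nothing more than function application.  Anticipating the simplified framework developed below (where a parser maps a character Stream to either Success or Failure), and glossing over the regular expressions that our toy framework won’t support, usage would look something like this sketch:

val source = "class Foo() {}"
val input: Stream[Character] = source.toStream map { c => c: Character }
 
SimpleScala.program(input) match {
  case Success(classes, _) => println("Parsed " + classes.length + " classes")
  case Failure(msg)        => Console.err.println(msg)
}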

How the Magic Works

The really significant thing to notice here is that program is nothing special; just another method which returns an instance of class Parser.  In fact, all of these methods return instances of Parser.  Once you realize this, the magic behind all of this becomes quite a bit more obvious.  However, to really figure it all out, we’re going to need to take a few steps back.

Conceptually, a Parser represents a very simple idea:

Parsers are invoked upon an input stream.  They will consume a certain number of tokens and then return a result along with the truncated stream.  Alternatively, they will fail, producing an error message.

Every Parser instance complies with this description.  To be more concrete, consider the keyword parser (what I like to call the “literal” parser) which consumes a single well-defined token.  For example (note that the keyword method is implicit in Scala’s implementation of parser combinators, which is why it doesn’t appear in the long example above):

def boring = keyword("bore")

The boring method returns a value of type Parser[String].  That is, a parser which consumes input and somehow produces a String as a result (along with the truncated stream).  This parser will either parse the characters b-o-r-e in that order, or it will fail.  If it succeeds, it will return the string "bore" as a result along with a stream which is shortened by four characters.  If it fails, it will return an error message along the lines of “Expected 'bore', got 'Boer'”, or something to that effect.

By itself, such a parser is really not very useful.  After all, it’s easy enough to perform a little bit of String equality testing when looking for a well-defined token.  The real power of parser combinators is in what happens when you start combining them together (hence the name).  A few literal parsers combined in sequence can give us a phrase in our grammar, and a few of these sequences combined in a disjunction can give us the full power of a non-terminal with multiple productions.  As it turns out, all we need is the literal parser (keyword) combined with these two types of combinators to express any LL(*) grammar.

Before we get into the combinators themselves, let’s build a framework.  We will define Parser[A] as a function from Stream[Character] to Result[A], where Result[A] has two implementations: Success[A] and Failure.  The framework looks like the following:

trait Parser[+A] extends (Stream[Character]=>Result[A])
 
sealed trait Result[+A]
 
case class Success[+A](value: A, rem: Stream[Character]) extends Result[A]
 
case class Failure(msg: String) extends Result[Nothing]

Additionally, we must add a concrete parser, keyword, to handle our literals.  For the sake of syntactic compatibility with Scala’s parser combinators, this parser will be defined within a RegexpParsers trait, which our grammar objects can extend (despite the fact that we don’t really support regular expressions):

trait RegexpParsers {
  implicit def keyword(str: String) = new Parser[String] {
    def apply(s: Stream[Character]) = {
      val trunc = s take str.length
      lazy val errorMessage = "Expected '%s' got '%s'".format(str, trunc.mkString)
 
      if ((trunc lengthCompare str.length) != 0) 
        Failure(errorMessage)
      else {
        val succ = trunc.zipWithIndex forall {
          case (c, i) => c == str(i)
        }
 
        if (succ) Success(str, s drop str.length)
        else Failure(errorMessage)
      }
    }
  }
}

For those of you who are still a little uncomfortable with the more obscure higher-order utility methods in the Scala collections framework: don’t worry about it.  While the above may be a bit obfuscated, there isn’t really a need to understand what’s going on at any sort of precise level.  The important point is that this Parser defines an apply method which compares str to an equal-length prefix of s, the input character Stream.  At the end of the day, it returns either Success or Failure.
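
To see the literal parser in action, here is a minimal sketch (KeywordDemo is invented for this example; the Char-to-Character boxing goes through Predef’s implicit char2Character conversion):

object KeywordDemo extends RegexpParsers {
  val input: Stream[Character] = "bored".toStream map { c => c: Character }
 
  println(keyword("bore")(input))   // => Success("bore", <stream of 'd'>)
  println(keyword("boar")(input))   // => Failure("Expected 'boar' got 'bore'")
}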

The Sequential Combinator

The first of the two combinators we need to look at is the sequence combinator.  Conceptually, this combinator takes two parsers and produces a new parser which matches the first and the second in order.  If either one of the parsers produces a Failure, then the entire sequence will fail.  In terms of classical logic, this parser corresponds to the AND operation.  The code for this parser is almost ridiculously simple:

class SequenceParser[+A, +B](l: =>Parser[A], 
                             r: =>Parser[B]) extends Parser[(A, B)] {
  lazy val left = l
  lazy val right = r
 
  def apply(s: Stream[Character]) = left(s) match {
    case Success(a, rem) => right(rem) match {
      case Success(b, rem) => Success((a, b), rem)
      case f: Failure => f
    }
 
    case f: Failure => f
  }
}

This is literally just a parser which applies its left operand and then applies its right to whatever is left.  As long as both parsers succeed, a composite Success will be produced containing a tuple of the left and right parser’s results.  Note that Scala’s parser combinator framework actually yields an instance of the ~ case class from its sequence combinator.  This is particularly convenient as it allows for a very nice syntax in pattern matching for semantic actions (extracting parse results).  However, since we will not be dealing with the action combinators in this article, it seemed simpler to just use a tuple.

One item worthy of note is the fact that both left and right are evaluated lazily.  This means that we don’t actually evaluate our constructor parameters until the parser is applied to a specific stream.  This is very important as it allows us to define parsers with recursive rules.  Recursion is really what separates context-free grammars from regular expressions.  Without this laziness, any recursive rules would lead to an infinite loop in the parser construction.

Once we have the sequence combinator in hand, we can add a bit of syntax sugar to enable its use.  All instances of Parser will define a ~ operator which takes a single operand and produces a SequenceParser which handles the receiver and the parameter in order:

trait Parser[+A] extends (Stream[Character]=>Result[A]) {
  def ~[B](that: =>Parser[B]) = new SequenceParser(this, that)
}

With this modification to Parser, we can now create parsers which match arbitrary sequences of tokens.  For example, our framework so far is more than sufficient to define the classPrefix parser from our earlier snippet (with the exception of the regular expression defined in ID, which we currently have no way of handling):

def classPrefix = "class" ~ ID ~ "(" ~ formals ~ ")"
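
Since each application of ~ yields a pair, chained sequences nest their results to the left (Scala’s real framework avoids this by returning its ~ case class instead).  Also remember that our toy keyword parser does no whitespace skipping, so adjacent literals must match back-to-back.  A quick sketch, with SequenceDemo invented for the occasion:

object SequenceDemo extends RegexpParsers {
  // matches the exact characters "classFoo(" (no whitespace handling here)
  // and produces Success((("class", "Foo"), "("), rem)
  def prefix = "class" ~ "Foo" ~ "("   // Parser[((String, String), String)]
}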

The Disjunctive Combinator

This is a very academic name for a very simple concept.  Let’s think about the framework so far.  We have both literal parsers and sequential combinations thereof.  Using this framework, we are capable of defining parsers which match arbitrary token strings.  We can even define parsers which match infinite token sequences, simply by involving recursion:

def fearTheOnes: Parser[Any] = "1" ~ fearTheOnes

Of course, this parser is absurd, since it only matches an infinite input consisting of the '1' character, but it does serve to illustrate that we have a reasonably powerful framework even in its current form.  This also provides a nice example of how the lazy evaluation of sequence parsers is an absolutely essential feature.  Without it, the fearTheOnes method would enter an infinite loop and would never return an instance of Parser.

However, for all its glitz, our framework is still somewhat impotent compared to “real” parser generators.  It is almost trivial to derive a grammar which cannot be handled by our parser combinators.  For example:

e ::= '1' | '2'

This grammar simply says “match either the '1' character, or the '2' character”.  Unfortunately, our framework is incapable of defining a parser according to this rule.  We have no facility for saying “either this or that”.  This is where the disjunctive combinator comes into play.

In boolean logic, a disjunction is defined according to the following truth table:

P | Q | P ∨ Q
T | T |   T
T | F |   T
F | T |   T
F | F |   F

In other words, the disjunction is true if one or both of its component predicates are true.  This is exactly the sort of combinator we need to bring our framework to full LL(*) potential.  We need to define a parser combinator which takes two parsers as parameters, trying each of them in order.  If the first parser succeeds, we yield its value; otherwise, we try the second parser and return its Result (whether Success or Failure).  Thus, our disjunctive combinator should yield a parser which succeeds if and only if one of its component parsers succeeds.  This is very easily accomplished:

class DisParser[+A](left: Parser[A], right: Parser[A]) extends Parser[A] {
  def apply(s: Stream[Character]) = left(s) match {
    case res @ Success(_, _) => res
    case _: Failure => right(s)
  }
}

Once again, we can beautify the syntax a little bit by adding an operator to the Parser super-trait:

trait Parser[+A] extends (Stream[Character]=>Result[A]) {
  def ~[B](that: =>Parser[B]) = new SequenceParser(this, that)
 
  def |[B >: A](that: Parser[B]) = new DisParser[B](this, that)
}
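
With this operator in place, the grammar which stumped us a moment ago becomes a one-liner (a sketch against our toy framework):

object OneOrTwo extends RegexpParsers {
  // e ::= '1' | '2'
  def e = "1" | "2"   // Parser[String]
}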

…is that all?

It’s almost as if by magic, but the addition of the disjunctive combinator to the sequential actually turns our framework into something really special, capable of chewing through any LL(*) grammar.  Just in case you don’t believe me, consider the grammar for the pure-untyped lambda calculus, expressed using our framework (alph definition partially elided for brevity):

object LambdaCalc extends RegexpParsers {
 
  def expr: Parser[Any] = term ~ (expr | "")
 
  def term = (
      "fn" ~ id ~ "=>" ~ expr
    | id
  )
 
  val alph = "a"|"b"|"c"|...|"X"|"Y"|"Z"
  val num = "0"|"1"|"2"|"3"|"4"|"5"|"6"|"7"|"8"|"9"
 
  def id = alph ~ idSuffix
 
  def idSuffix = (
      (alph | num) ~ idSuffix
    | ""
  )
}

While this grammar may seem a bit obfuscated, it is only because I had to avoid the use of regular expressions to define the ID rule.  Instead, I used a combination of sequential and disjunctive combinators to produce a Parser which matches the desired pattern.  Note that the “...” is not some special syntax, but rather my laziness and wish to avoid a code snippet 310 characters wide.

We can also use our framework to define some other, useful combinators such as opt and * (used in the initial example). Specifically:

trait Parser[+A] extends (Stream[Character]=>Result[A]) {
  ...
 
  def * : Parser[List[A]] = (
      this ~ * ^^ { case (a, b) => a :: b }
    | "" ^^^ Nil
  )
}
 
trait RegexpParsers {
  ...
 
  def opt[A](p: Parser[A]) = (
      p ^^ { Some(_) }
    | "" ^^^ None
  )
}

Readers who have managed to stay awake to this point may notice that I’m actually cheating a bit in these definitions.  Specifically, I’m using the ^^ and ^^^ combinators.  These are the semantic action combinators which I promised to avoid discussing.  However, for the sake of completeness, I’ll include the sources and leave you to figure out the rest:

trait Parser[+A] extends (Stream[Character]=>Result[A]) { outer =>
  ...
 
  def ^^[B](f: A=>B) = new Parser[B] {
    def apply(s: Stream[Character]) = outer(s) match {
      case Success(v, rem) => Success(f(v), rem)
      case f: Failure => f
    }
  }
 
  def ^^^[B](v: =>B) = new Parser[B] {
    def apply(s: Stream[Character]) = outer(s) match {
      case Success(_, rem) => Success(v, rem)
      case f: Failure => f
    }
  }
}

In short, these combinators are only interesting to people who want their parsers to give them a value upon completion (usually an AST).  Truth be told, just about any useful application of parser combinators will require them, but since we’re not planning to use our framework for anything useful, there is really no need.
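
Purely for illustration, here is the sort of thing ^^ makes possible.  Num is a hypothetical AST class invented for this example, not part of the earlier grammar:

object SemanticDemo extends RegexpParsers {
  case class Num(value: Int)
 
  def digit = (
      "0" | "1" | "2" | "3" | "4"
    | "5" | "6" | "7" | "8" | "9"
  )
 
  // ^^ maps the parsed String into our AST node
  def num = digit ^^ { s => Num(s.toInt) }   // Parser[Num]
}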

One really interesting parser from our first example which is worthy of attention is the member rule.  If you recall, this was defined as follows:

def member = (
    "val" ~ ID ~ ":" ~ ID ~ "=" ~ expr
  | "var" ~ ID ~ ":" ~ ID ~ "=" ~ expr
  | "def" ~ ID ~ "(" ~ formals ~ ")" ~ ":" ~ ID ~ "=" ~ expr
  | "def" ~ ID ~ ":" ~ ID ~ "=" ~ expr
  | "type" ~ ID ~ "=" ~ ID
)

This is interesting for two reasons.  First: we have multiple disjunctions handled in the same rule, showing that disjunctive parsers chain just as nicely as do sequential.  But more importantly, our chain of disjunctions includes two parsers which have the same prefix ("def" ~ ID).  In other words, if we attempt to parse an input of “def a: B = 42”, one of these deeply nested parsers will erroneously match the input for the first two tokens.

This grammatical feature forces us to implement some sort of backtracking within our parser combinators.  Intuitively, the "def" ~ ID parser is going to successfully match “def a”, but the enclosing sequence ("def" ~ ID ~ "(") will fail as soon as the “:” token is reached.  At this point, the parser has to take two steps back in the token stream and try again with another parser, in this case, "def" ~ ID ~ ":" ~ ID ~ "=" ~ expr.  It is this feature which separates LL(*) parsing from LL(1) and LL(0).

The good news is that we already have this feature almost by accident.  Well, obviously not by accident since I put some careful planning into this article, but at no point so far did we actually set out to implement backtracking, and yet it has somehow dropped into our collective lap.  Consider once more the implementation of the disjunctive parser:

class DisParser[+A](left: Parser[A], right: Parser[A]) extends Parser[A] {
  def apply(s: Stream[Character]) = left(s) match {
    case res @ Success(_, _) => res
    case _: Failure => right(s)
  }
}

Notice what happens if left fails: it invokes the right parser passing the same Stream instance (s).  Recall that Stream is immutable, meaning that there is nothing left can do which could possibly change the value of s.  Each parser is merely grabbing characters from the head of the stream and then producing a new Stream which represents the remainder.  Parsers farther up the line (like our disjunctive parser) are still holding a reference to the stream prior to these “removals”.  This means that we don’t need to make any special effort to implement backtracking; it just falls out as a natural consequence of our use of the Stream data structure.  Isn’t that nifty?
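
We can watch this happen with a sketch modeled on the member rule (BacktrackDemo is invented for this example).  Both alternatives open with the same literal; when the first dies partway through, the disjunction silently re-applies the second to the very same stream:

object BacktrackDemo extends RegexpParsers {
  // the first branch consumes "def a" and then fails on "(";
  // DisParser then hands the untouched stream to the second branch
  def member = (
      "def a" ~ "(" ~ "b: B" ~ ")"
    | "def a" ~ ": " ~ "B"
  )
 
  val input: Stream[Character] = "def a: B".toStream map { c => c: Character }
  println(member(input))   // => Success((("def a", ": "), "B"), <empty stream>)
}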

Conclusion

Parser combinators are an incredibly clever bit of functional programming.  Every time I think about them, I am once again impressed by the ingenuity of their design and the simple elegance of their operation.  The fact that two combinators and a single parser can encode the vast diversity of LL(*) grammars is simply mind-boggling.  Despite their simplicity, parser combinators are capable of some very powerful parsing in a very clean and intuitive fashion.  That to me is magical.

Interop Between Java and Scala

9 Feb 2009

Sometimes, the simplest things are the most difficult to explain.  Scala’s interoperability with Java is completely unparalleled, even including languages like Groovy which tout their tight integration with the JVM’s venerable standard-bearer.  However, despite this fact, there is almost no documentation (aside from chapter 29 in Programming in Scala) which shows how this Scala/Java integration works and where it can be used.  So while it may not be the most exciting or theoretically interesting topic, I have taken it upon myself to fill the gap.

Classes are Classes

The first piece of knowledge you need about Scala is that Scala classes are real JVM classes.  Consider the following snippets, the first in Java:

public class Person {
    public String getName() {
        return "Daniel Spiewak";
    }
}

…and the second in Scala:

class Person {
  def getName() = "Daniel Spiewak"
}

Despite the very different syntax, both of these snippets will produce almost identical bytecode when compiled.  Both will result in a single file, Person.class, which contains a default, no-args constructor and a public method, getName(), with return type java.lang.String.  Both classes may be used from Scala:

val p = new Person()
p.getName()       // => "Daniel Spiewak"

…and from Java:

Person p = new Person();
p.getName();      // => "Daniel Spiewak"

In the case of either language, we can easily swap implementations of the Person class without making any changes to the call-site.  In short, you can use Scala classes from Java (as well as Java classes from Scala) without ever even knowing that they were defined within another language.

This single property is the very cornerstone of Scala’s philosophy of bytecode translation.  Wherever possible — and that is more often than not — Scala elements are translated into bytecode which directly corresponds to the equivalent feature in Java.  Scala classes equate to Java classes; methods and fields within those classes become Java methods and fields.

This allows some pretty amazing cross-language techniques.  For example, I can extend a Java class within Scala, overriding some methods.  I can in turn extend this Scala class from within Java once again with everything working exactly as anticipated:

import java.awt.Graphics
import javax.swing.JComponent
 
class MyAbstractButton extends JComponent {
  private var pushed = false
 
  def setPushed(p: Boolean) {
    pushed = p
  }
 
  def getPushed = pushed
 
  override def paintComponent(g: Graphics) {
    super.paintComponent(g)
 
    // draw a button
  }
}
 
public class ProKitButton extends MyAbstractButton {
    // do something uniquely Apple-esque
}

Traits are Interfaces

This is probably the one interoperability note which is the least well-known.  Scala’s traits are vastly more powerful than Java’s interfaces, often leading developers to the erroneous conclusion that they are incompatible.  Specifically, traits allow method definitions, while interfaces must be purely-abstract.  Yet, despite this significant distinction, Scala is still able to compile traits into interfaces at the bytecode level…with some minor enhancements.

The simplest case is when the trait only contains abstract members.  For example:

trait Model {
  def value: Any
}

If we look at the bytecode generated by compiling this trait, we will see that it is actually equivalent to the following Java definition:

public interface Model {
    public Object value();
}

Thus, we can declare traits in Scala and implement them as interfaces in Java classes:

public class StringModel implements Model {
    public Object value() {
        return "Hello, World!";
    }
}

This is precisely equivalent to a Scala class which mixes-in the Model trait:

class StringModel extends Model {
  def value = "Hello, World!"
}

Things start to get a little sticky when we have method definitions within our traits.  For example, we could add a printValue() method to our Model trait:

trait Model {
  def value: Any
 
  def printValue() {
    println(value)
  }
}

Obviously, we can’t directly translate this into just an interface; something else will be required.  Scala solves this problem by introducing an ancillary class which contains all of the method definitions for a given trait.  Thus, when we look at the translation for our modified Model trait, the result looks something like this:

public interface Model extends ScalaObject {
    public Object value();
 
    public void printValue();
}
 
public class Model$class {
    public static void printValue(Model self) {
        System.out.println(self.value());
    }
}

Thus, we can get the effect of Scala’s powerful mixin inheritance within Java by implementing the Model trait and delegating from the printValue() method to the Model$class implementation:

public class StringModel implements Model {
    public Object value() {
        return "Hello, World!";
    }
 
    public void printValue() {
        Model$class.printValue(this);
    }
 
    // method missing here (see below)
}

It’s not perfect, but it allows us to use some of Scala’s more advanced trait-based functionality from within Java.  Incidentally, the above code does compile without a problem.  I wasn’t actually aware of this fact, but “$” is a legal character in Java identifiers, allowing interaction with some of Scala’s more interesting features.

There is, however, one little wrinkle that I’m conveniently side-stepping: the $tag method.  This is a method defined within the ScalaObject trait designed to help optimize pattern matching.  Unfortunately, it also means yet another abstract method which must be defined when implementing Scala traits which contain method definitions.  The correct version of the StringModel class from above actually looks like the following:

public class StringModel implements Model {
    public Object value() {
        return "Hello, World!";
    }
 
    public void printValue() {
        Model$class.printValue(this);
    }
 
    public int $tag() {
        return 0;
    }
}

To be honest, I’m not sure what is the “correct” value to return from $tag.  In this case, 0 is just a stub, and I’m guessing a safe one since StringModel is the only subtype of Model.  Can anyone who knows more about the Scala compiler shed some light on this issue?

Generics are, well…Generics

Generics are (I think) probably the coolest and most well-done part of Scala’s Java interop.  Anyone who has more than a passing familiarity with Scala will know that its type system is significantly more powerful than Java’s.  Some of this power comes in the form of its type parameterization, which is vastly superior to Java’s generics.  For example, type variance can be handled at declaration-site, rather than only call-site (as in Java):

abstract class List[+A] {
  ...
}

The + notation prefixing the A type parameter on the List class means that List will vary covariantly with its parameter.  In English, this means that List[String] is a subtype of List[Any] (because String is a subtype of Any).  This is a very intuitive relationship, but one which Java is incapable of expressing.
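
Because the standard scala.List is declared with exactly this annotation, the relationship is easy to demonstrate.  A quick sketch (printAll and names are just illustrative):

// List[+A] means a List[String] may stand in wherever a List[Any] is expected
def printAll(xs: List[Any]) = xs foreach println
 
val names: List[String] = List("Daniel", "Spiewak")
printAll(names)   // compiles (and works) precisely because List is covariant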

Fortunately, Scala is able to exploit one of the JVM’s most maligned features to support things like variance and higher-kinds without sacrificing perfect Java interop.  Thanks to type erasure, Scala generics can be compiled to Java generics without any loss of functionality on the Scala side.  Thus, the Java translation of the List definition above would be as follows:

public abstract class List<A> {
    ...
}

The variance annotation is gone, but Java wouldn’t be able to make anything of it anyway.  The huge advantage to this translation scheme is that Java’s generics and Scala’s generics are one and the same at the bytecode level.  Thus, Java can use generic Scala classes without a second thought:

import scala.Tuple2;
 
...
Tuple2<String, String> me = new Tuple2<String, String>("Daniel", "Spiewak");

Obviously, this is a lot more verbose than the Scala equivalent, “("Daniel", "Spiewak")”, but at least it works.

Operators are Methods

One of the most obvious differences between Java and Scala is that Scala supports operator overloading.  In fact, Scala supports a variant of operator overloading which is far stronger than anything offered by C++, C# or even Ruby.  With very few exceptions, any symbol may be used to define a custom operator.  This provides tremendous flexibility in DSLs and even your average, every-day API (such as List and Map).

Obviously, this particular language feature is not going to translate into Java quite so nicely.  Java doesn’t support operator overloading of any variety, much less the über-powerful form defined by Scala.  Thus, Scala operators must be compiled into an entirely non-symbolic form at the bytecode level, otherwise Java interop would be irreparably broken, and the JVM itself would be unable to swallow the result.

A good starting place for deciding on this translation is the way in which operators are declared in Scala: as methods.  Every Scala operator (including unary operators like !) is defined as a method within a class:

abstract class List[+A] {
  def ::[B >: A](e: B) = ...
 
  def +[B >: A](e: B) = ...
}

Since Scala classes become Java classes and Scala methods become Java methods, the most obvious translation would be to take each operator method and produce a corresponding Java method with a heavily-translated name.  In fact, this is exactly what Scala does.  The above class will compile into the equivalent of this Java code:

public abstract class List<A> {
    // (Java cannot express Scala's lower bound B >: A on a type
    // parameter, so the bound simply disappears in the translation)
    public <B> List<B> $colon$colon(B e) { ... }
 
    public <B> List<B> $plus(B e) { ... }
}

Every allowable symbol in Scala’s method syntax has a corresponding translation of the form “$trans”.  A list of supported translations is one of those pieces of documentation that you would expect to find on the Scala website.  However, alas, it is absent.  The following is a table of all of the translations of which I am aware:

Scala Operator    Compiles To
=                 $eq
>                 $greater
<                 $less
+                 $plus
-                 $minus
*                 $times
/                 $div
!                 $bang
@                 $at
#                 $hash
%                 $percent
^                 $up
&                 $amp
~                 $tilde
?                 $qmark
|                 $bar
\                 $bslash
:                 $colon

Using this table, you should be able to derive the “real name” of any Scala operator, allowing its use from within Java.  Of course, the ideal solution would be if Java actually supported operator overloading and could use Scala’s operators directly, but somehow I doubt that will happen any time soon.
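
For what it’s worth, modern versions of Scala also expose this encoding programmatically through scala.reflect.NameTransformer (assuming your version ships it), which saves some squinting at the table above:

import scala.reflect.NameTransformer
 
NameTransformer.encode("::")      // => "$colon$colon"
NameTransformer.decode("$plus")   // => "+"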

Odds and Ends

One final tidbit which might be useful: @BeanProperty.  This is a special annotation which is essentially read by the Scala compiler to mean “generate a getter and setter for this field”:

import scala.reflect.BeanProperty
 
class Person {
  @BeanProperty
  var name = "Daniel Spiewak"
}

The need for this annotation comes from the fact that Scala’s ever-convenient var and val declarations actually generate code which looks like the following (assuming no @BeanProperty annotation):

// *without* @BeanProperty
public class Person {
    private String name = "Daniel Spiewak";
 
    public String name() {
        return name;
    }
 
    public void name_$eq(String name) {
        this.name = name;
    }
}

This works well from Scala, but as you can see, Java-land is not quite paradise.  While it is certainly feasible to use the _$eq syntax instead of the familiar set/get/is triumvirate, it is not an ideal situation.

Adding the @BeanProperty annotation (as we have done in the earlier Scala snippet) solves this problem by causing the Scala compiler to auto-generate more than one pair of methods for that particular field.  Rather than just name and name_$eq, it will also generate the familiar getName and setName combination that all Java developers know and love.  Thus, the actual translation resulting from the Person class in Scala will be as follows:

public class Person {
    private String name = "Daniel Spiewak";
 
    public String name() {
        return name;
    }
 
    public String getName() {
        return name();
    }
 
    public void name_$eq(String name) {
        this.name = name;
    }
 
    public void setName(String name) {
        name_$eq(name);
    }
}

This merely provides a pair of delegates, but it does suffice to smooth out the mismatch between Java Bean-based frameworks and Scala’s elegant instance fields.

Conclusion

This has been a whirlwind, disjoint tour covering a fairly large slice of information on how to use Scala code from within Java.  For the most part, things are all roses and fairy tales.  Scala classes map precisely onto Java classes, generics work perfectly, and pure-abstract traits correspond directly to Java interfaces.  Other areas where Scala is decidedly more powerful than Java (like operators) do tend to be a bit sticky, but there is always a way to make things work.

If you’re considering mixing Scala and Java sources within your project, I hope that this article has smoothed over some of the doubts you may have had regarding its feasibility.  As David Pollack says, Scala is really “just another Java library”.  Just stick scala-library.jar on your classpath and all of your Scala classes should be readily available within your Java application.  And given how well Scala integrates with Java at the language level, what could be simpler?

Hacking Buildr: Interactive Shell Support

12 Jan 2009

Last week, we looked at the unfortunately-unexplored topic of Scala/Java joint compilation.  Specifically, we saw several different ways in which this functionality may be invoked covering a number of different tools.  Among these tools was Buildr, a fast Ruby-based drop-in replacement for Maven with a penchant for simple configuration.  In the article I mentioned that Buildr doesn’t actually have support for the Scala joint compiler out of the box.  In fact, this feature actually requires the use of a Buildr fork I’ve been using to experiment with different extensions.  Among these extensions is a feature I’ve been wanting from Buildr for a long time: the ability to launch a pre-configured interactive shell.

For those coming from a primarily-Java background, the concept of an interactive shell may seem a bit foreign.  Basically, an interactive shell — or REPL, as it is often called — is a line-by-line language interpreter which allows you to execute snippets of code with immediate result.  This has been a common tool in the hands of dynamic language enthusiasts since the days of LISP, but has only recently found its way into the world of mainstream static languages such as Scala.

[Image: interactive-shells.png]

One of the most useful applications of these tools is the testing of code (particularly frameworks) before the implementations are fully completed.  For example, when working on my port of Clojure’s PersistentVector, I would often spin up a Scala shell to quickly test one aspect or another of the class.  As a minor productivity plug, JavaRebel is a truly invaluable tool for development of this variety.

The problem with this pattern of work is it requires the interactive shell in question to be pre-configured to include the project’s output directory on the CLASSPATH.  While this isn’t usually so bad, things can get very sticky when you’re working with a project which includes a large number of dependencies.  It isn’t too unreasonable to imagine shell invocations stretching into the tens of lines, just to spin up a “quick and dirty” test tool.

Further complicating affairs is the fact that many projects do away with the notion of fixed dependency paths and simply allow tools like Maven or Buildr to manage the CLASSPATH entirely out of sight.  In order to fire up a Scala shell for a project with any external dependencies, I must first manually read my buildfile, parsing out all of the artifacts in use.  Then I have to grope about in my ~/.m2/repository directory until I find the JARs in question.  Needless to say, the productivity benefits of this technique become extremely suspect after the first or second expedition.

For this reason, I strongly believe that the launch of an interactive shell should be the responsibility of the tool managing the dependencies, rather than that of the developer.  Note that Maven already has some support for shells in conjunction with certain languages (Scala among them), but it is as crude and verbose as the tool itself.  What I really want is to be able to invoke the following command and have the appropriate shell launched with a pre-configured CLASSPATH.  I shouldn’t have to worry about the language of my project, the location of my repository or even if the shell requires extra configuration on my platform.  The idea is that everything should all work auto-magically:

$ buildr shell

This is exactly what the interactive-shell branch of my Buildr fork is designed to accomplish.  Whenever the shell task is invoked, Buildr looks through the current project and attempts to guess the language involved.  This guesswork is required for a number of other features, so Buildr is actually pretty accurate in this area.  If the language in question is Groovy or Scala, then the desired shell is obvious.  Java does not have an integrated shell, which means that the default behavior on a Java project would be to raise an error.

However, the benefits of interactive shells are not limited to just the latest-and-greatest languages.  I often use a Scala shell with Java projects, and for certain things a JRuby shell as well (jirb).  Thus, my interactive shell extension also provides a mechanism to allow users to override the default shell on a per-project basis:

define 'my-project' do
  shell.using :clj
end

With this configuration, regardless of the language used by the compiler for “my-project”, Buildr will launch the Clojure REPL whenever the shell task is invoked.  The currently supported shells and their corresponding Buildr identifiers:

  • Clojure’s REPL — :clj
  • Groovy’s Shell — :groovysh
  • JRuby’s IRB — :jirb
  • Scala’s Shell — :scala

It is also possible to explicitly launch a specific shell.  This is useful for situations where you might want to use the Scala shell for testing some things and the JRuby IRB for quickly prototyping other ideas (I do this a lot).  The command to launch the JIRB shell in the context of my-project would be as follows:

$ buildr my-project:shell:jirb

As a special value-added feature, all of these shells (except for Groovy’s, which is weird) will be automatically configured to use JavaRebel for the project compilation target classes if it can be automatically detected.  This detection is performed by examining REBEL_HOME, JAVA_REBEL, JAVAREBEL and JAVAREBEL_HOME environment variables in order.  If any one of these points to a directory which contains javarebel.jar or points directly to javarebel.jar itself, the configuration is assumed and the respective shell invocation is appropriately modified.

[Image: javarebel-integration.png]

Best of all, this support is implemented using a highly-extensible framework similar to Buildr’s own Compiler API.  It’s very easy for plugin implementors or even average developers to simply drop-in a new shell provider, perhaps for an internal language or even some unexpected application.  The core functionality of shell detection is integrated into Buildr itself, but this in no way hampers extensibility.  For example, I could easily create a third-party .rake plugin for Buildr which added support for a whole new language (e.g. Haskell).  In this plugin, I could also define a new shell provider which would be the default for projects using that language (e.g. GHCi).

Open Question

The good news is that this feature has been discussed extensively on the buildr-user mailing-list and the prevailing opinion seems to be that it should be folded into the main Buildr distribution.  Exactly what form this will take has yet to be decided.  The bad news is that there is still some dispute about a fundamental aspect of this feature’s operation.

The question revolves around what the exact behavior should be when the shell task is invoked.  Should Buildr detect the project (or sub-project) you are in and automatically configure the shell’s CLASSPATH accordingly?  This would give the interactive shell access to different classes depending on the current working directory.  Alternatively, should there be one all-powerful shell per-buildfile configured at the root level?  This would allow your shell to remain consistent throughout the project, regardless of your current directory.  However, it would also mean that some configuration would be required in order to enable the functionality.  (More details of this debate can be found on the mailing-list.)

Additionally, what should the exact syntax be for invoking a specific shell?  Rake 0.8 allows tasks to take parameters enclosed within square brackets.  Thus, the syntax would be something more like the following:

$ buildr collection:shell[jirb]

In some sense, this is more logical since it reflects the fact that a single task, shell, is taking care of the work of invoking stuff.  On the other hand, it’s a little less consistent with the rest of Buildr’s tasks, particularly things like “test:TestClass” and so on.  This too is a matter which has yet to be settled.

All in all, this is a pretty experimental branch which is very open to (and desirous of) outside input.  How would you use a feature like this?  Is there anything missing from what I have presented?  What design path should we take with regards to project-local vs global shell configurations?

If you feel like adding your voice to the chorus, feel free to leave a comment or (better yet) post a reply on the mailing-list thread.  You’re also perfectly free to fork my remote branch at GitHub to better experiment with things yourself.  The root of the whole plate of spaghetti is the lib/buildr/shell.rb file.  Bon appetit!

Gun for Hire (off topic)

7 Jan 2009

Just in case you thought Christmas was over, I have a late gift for the world: I’m available for hire!  Ok, so maybe this wasn’t exactly the stocking-stuffer you were expecting, but it’s the thought that counts.

I’m announcing my availability for employment as a part-time developer.  Those of you who follow this blog are probably already familiar with my areas of expertise, so I don’t think there is a need to bore you with a rehash.  Resume available on request!

Anyway, my preference would be a project where I get to use multiple different languages, particularly Scala and Clojure, but I’m flexible.  If you think my skills would make a positive addition to your team, shoot me an email!

Joint Compilation of Scala and Java Sources

5 Jan 2009

One of the features that the Groovy people like to flaunt is the joint compilation of .groovy and .java files.  This is a fantastically powerful concept which (among other things) allows for circular dependencies between Java, Groovy and back again.  Thus, you can have a Groovy class which extends a Java class which in turn extends another Groovy class.

All this is old news, but what you may not know is the fact that Scala is capable of the same thing.  The Scala/Java joint compilation mode is new in Scala 2.7.2, but despite the fact that this release has been out for more than two months, there is still a remarkable lack of tutorials and documentation regarding its usage.  Hence, this post…

Concepts

For starters, you need to know a little bit about how joint compilation works, both in Groovy and in Scala.  Our motivating example will be the following stimulating snippet:

// foo.scala
class Foo
 
class Baz extends Bar

…and the Java class:

// Bar.java
public class Bar extends Foo {}

If we try to compile foo.scala before Bar.java, the Scala compiler will issue a type error complaining that class Bar does not exist.  Similarly, if we attempt to compile Bar.java first, the Java compiler will whine about the lack of a Foo class.  Now, there is actually a way to resolve this particular case (by splitting foo.scala into two separate files), but it’s easy to imagine other examples where the circular dependency is impossible to linearize.  For the sake of example, let’s just assume that this circular dependency is a problem and cannot be handled piece-meal.

In order for this to work, either the Scala compiler will need to know about class Bar before its compilation, or vice versa.  This implies that one of the compilers will need to be able to analyze sources which target the other.  Since Scala is the language in question, it only makes sense that it be the accommodating one (rather than javac).

What scalac has to do is literally parse and analyze all of the Scala sources it is given in addition to any Java sources which may also be supplied.  It doesn’t need to be a full fledged Java compiler, but it does have to know enough about the Java language to be able to produce an annotated structural AST for any Java source file.  Once this AST is available, circular dependencies may be handled in exactly the same way as circular dependencies internal to Scala sources (because all Scala and all Java classes are available simultaneously to the compiler).

Once the analysis phase of scalac has blessed the Scala AST, all of the Java nodes may be discarded.  At this point, circular dependencies have been resolved and all type errors have been handled.  Thus, there is no need to carry around useless class information.  Once scalac is done, both Foo and Baz will have been compiled, producing Foo.class and Baz.class output files.

However, we’re still not quite done yet.  Compilation has successfully completed, but if we try to run the application, we will receive a NoClassDefFoundError due to the fact that the Bar class has not actually been compiled.  Remember, scalac only analyzed it for the sake of the type checker; no actual bytecode was produced.  Bar may even suffer from a compile error of some sort; as long as the error is within the method definitions, scalac isn’t going to catch it.

The final step is to invoke javac against the .java source files (the same ones we passed to scalac) adding scalac’s output directory to javac’s classpath.  Thus, javac will be able to find the Foo class that we just compiled so as to successfully (hopefully) compile the Bar class.  If all goes well, the final result will be three separate files: Foo.class, Bar.class and Baz.class.

Usage

Although the concepts are identical, Scala’s joint compilation works slightly differently from Groovy’s from a usage standpoint.  More specifically: scalac does not automatically invoke javac on the specified .java sources.  This means that you can perform “joint compilation” using scalac, but without invoking javac you will only receive the compiled Scala classes, the Java classes will be ignored (except by the type checker).  This design has some nice benefits, but it does mean that we usually need at least one extra command in our compilation process.

All of the following usage examples assume that you have defined the earlier example in the following hierarchy:

  • src
    • main
      • java
        • Bar.java
      • scala
        • foo.scala
  • target
    • classes

Command Line

# include both .scala AND .java files
scalac -d target/classes src/main/scala/*.scala src/main/java/*.java

javac -d target/classes \
      -classpath $SCALA_HOME/lib/scala-library.jar:target/classes \
       src/main/java/*.java

Ant

<target name="build">
    <scalac srcdir="src/main" destdir="target/classes">
        <include name="scala/**/*.scala"/>
        <include name="java/**/*.java"/>
    </scalac>
 
    <javac srcdir="src/main/java" destdir="target/classes"
           classpath="${scala.library}:target/classes"/>
</target>

Maven

One thing you gotta love about Maven: it’s fairly low on configuration for certain common tasks.  Given the above directory structure and the most recent version of the maven-scala-plugin, the following command should be sufficient for joint compilation:

mvn compile

Unfortunately, there have been some problems reported with the default configuration and complex inter-dependencies between Scala and Java (and back again).  I’m not a Maven…maven, so I can’t help too much, but as I understand things, this POM fragment seems to work well:

<plugin>
    <groupId>org.scala-tools</groupId>
    <artifactId>maven-scala-plugin</artifactId>
 
    <executions>
        <execution>
            <id>compile</id>
            <goals>
                <goal>compile</goal>
            </goals>
            <phase>compile</phase>
        </execution>
 
        <execution>
            <id>test-compile</id>
            <goals>
                <goal>testCompile</goal>
            </goals>
            <phase>test-compile</phase>
        </execution>
 
        <execution>
            <phase>process-resources</phase>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>
 
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <source>1.5</source>
        <target>1.5</target>
    </configuration>
</plugin>

You can find more information on the mailing-list thread.

Buildr

Joint compilation for mixed Scala / Java projects has been a long-standing request of mine in Buildr’s JIRA.  However, because it’s not a high priority issue, the developers were never able to address it themselves.  Of course, that doesn’t stop the rest of us from pitching in!

I had a little free time yesterday afternoon, so I decided to blow it by hacking out a quick implementation of joint Scala compilation in Buildr, based on its pre-existing support for joint compilation in Groovy projects.  All of my work is available in my Buildr fork on GitHub.  This also includes some other unfinished goodies, so if you want only the joint compilation, clone just the scala-joint-compilation branch.

Once you have Buildr’s full sources, cd into the directory and enter the following command:

rake setup install

You may need to gem install a few packages.  Further, the exact steps required may be slightly different on different platforms.  You can find more details on Buildr’s project page.

With this highly-unstable version of Buildr installed on your unsuspecting system, you should now be able to make the following addition to your buildfile (assuming the directory structure given earlier):

require 'buildr/scala'
 
# rest of the file...

Just like Buildr’s joint compilation for Groovy, you must explicitly require the language, otherwise important things will break.  With this slight modification, you should be able to build your project as per normal:

buildr

This support is so bleeding-edge, I don’t even think that it’s safe to call it “pre-alpha”.  If you run into any problems, feel free to shoot me an email or comment on the issue.

Conclusion

Joint compilation of Java and Scala sources is a profound addition to the Scala feature list, making it significantly easier to use Scala alongside Java in pre-existing or future projects.  With this support, it is finally possible to use Scala as a truly drop-in replacement for Java without modifying the existing infrastructure beyond the CLASSPATH.  Hopefully this article has served to bring slightly more exposure to this feature, as well as provide some much-needed documentation on its use.