What is Hindley-Milner? (and why is it cool?)


Anyone who has taken even a cursory glance at the vast menagerie of programming languages should have at least heard the phrase “Hindley-Milner”.  F#, one of the most promising languages ever to emerge from the forbidding depths of Microsoft Research, makes use of this mysterious algorithm, as do Haskell, OCaml and ML before it.  There is even some research being undertaken to find a way to apply the power of HM to optimize dynamic languages like Ruby, JavaScript and Clojure.

However, despite widespread application of the idea, I have yet to see a decent layman’s-explanation for what the heck everyone is talking about.  How does the magic actually work?  Can you always trust the algorithm to infer the right types?  Further, why is Hindley-Milner better than (say) Java?  So, while those of you who actually know what HM is are busy recovering from your recent aneurysm, the rest of us are going to try to figure this out.

Ground Zero

Functionally speaking, Hindley-Milner (or “Damas-Milner”) is an algorithm for inferring value types based on use.  It literally formalizes the intuition that a type can be deduced by the functionality it supports.  Consider the following bit of pseudo-Scala (not a flying toy):

def foo(s: String) = s.length
// note: no explicit types
def bar(x, y) = foo(x) + y

Just looking at the definition of bar, we can easily see that its type must be (String, Int)=>Int.  As humans, this is an easy thing for us to intuit.  We simply look at the body of the function and see the two uses of the x and y parameters.  x is being passed to foo, which expects a String.  Therefore, x must be of type String for this code to compile.  Furthermore, foo will return a value of type Int.  The + method on class Int expects an Int parameter; thus, y must be of type Int.  Finally, we know that + returns a new value of type Int, so there we have the return type of bar.

This process is almost exactly what Hindley-Milner does: it looks through the body of a function and computes a constraint set based on how each value is used.  This is what we were doing when we observed that foo expects a parameter of type String.  Once it has the constraint set, the algorithm completes the type reconstruction by unifying the constraints.  If the expression is well-typed, the constraints will yield an unambiguous type at the end of the line.  If the expression is not well-typed, then one (or more) constraints will be contradictory or merely unsatisfiable given the available types.

Informal Algorithm

The easiest way to see how this process works is to walk it through ourselves.  The first phase is to derive the constraint set.  We start by assigning each value (x and y) a fresh type, meaning one which does not exist anywhere else in the system.  If we were to annotate bar with these type variables, it would look something like this:

def bar(x: X, y: Y) = foo(x) + y

The type names are not significant; the important restriction is that they do not collide with any “real” type.  Their purpose is to allow the algorithm to unambiguously reference the yet-unknown type of each value.  Without this, the constraint set cannot be constructed.

Next, we drill down into the body of the function, looking specifically for operations which impose some sort of type constraint.  This is a depth-first traversal of the AST, which means that we look at operations with higher-precedence first.  Technically, it doesn’t matter in what order we look; I just find it easier to think about the process in this way.  The first operation we come across is the dispatch to the foo method.  We know that foo is of type String=>Int, and this allows us to add the following constraint to our set:

X ↦ String

The next operation we see is +, involving the y value.  Scala treats all operators as method dispatch, so this expression actually means “foo(x).+(y)”.  We already know that foo(x) is an expression of type Int (from the type of foo), so we know that + is defined as a method on class Int with type Int=>Int (I’m actually being a bit hand-wavy here with regards to what we do and do not know, but that’s an unfortunate consequence of Scala’s object-oriented nature).  This allows us to add another constraint to our set, resulting in the following:

X ↦ String

Y ↦ Int

The final phase of the type reconstruction is to unify all of these constraints to come up with real types to substitute for the X and Y type variables.  Unification is literally the process of looking at each of the constraints and trying to find a single type which satisfies them all.  Imagine I gave you the following facts:

  • Daniel is tall
  • Chris is tall
  • Daniel is red
  • Chris is blue

Now, consider the following constraints:

Person1 is tall

Person1 is red

Hmm, who do you suppose Person1 might be?  This process of combining a constraint set with some given facts can be mathematically formalized in the guise of unification.  In the case of type reconstruction, just substitute “types” for “facts” and you’re golden.

In our case, the unification of our set of type constraints is fairly trivial.  We have exactly one constraint per value (x and y), and both of these constraints map to concrete types.  All we have to do is substitute “String” for “X” and “Int” for “Y” and we’re done.
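To make the idea concrete, here is a minimal sketch in Python of resolving such a constraint set.  It assumes constraints are simple (variable, concrete-type) pairs, as in the bar example; it is an illustration only, not the full Hindley-Milner algorithm:

```python
def unify(constraints):
    """Fold a list of (variable, type) constraints into a substitution,
    failing when two constraints contradict one another."""
    subst = {}
    for var, ty in constraints:
        if var in subst and subst[var] != ty:
            raise TypeError(f"contradictory constraints for {var}")
        subst[var] = ty
    return subst

# The constraints derived for bar: X maps to String, Y maps to Int
print(unify([("X", "String"), ("Y", "Int")]))
# {'X': 'String', 'Y': 'Int'}
```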

To really see the power of unification, we need to look at a slightly more complex example.  Consider the following function:

def baz(a, b) = a(b) :: b

This snippet defines a function, baz, which takes a function and some other parameter, invoking this function passing the second parameter and then “cons-ing” the result onto the second parameter itself.  We can easily derive a constraint set for this function.  As before, we start by coming up with type variables for each value.  Note that in this case, we not only annotate the parameters but also the return type.  I sort of skipped over this part in the earlier example since it only sufficed to make things more verbose.  Technically, this type is always inferred in this way.

def baz(a: X, b: Y): Z = a(b) :: b

The first constraint we should derive is that a must be a function which takes a value of type Y and returns some fresh type Y' (pronounced like “why prime”).  Further, we know that :: is a function on class List[A] which takes a new element A and produces a new List[A].  Thus, we know that Y and Z must both be List[Y'].  Formalized in a constraint set, the result is as follows:

X ↦ (Y => Y')

Y ↦ List[Y']

Z ↦ List[Y']

Now the unification is not so trivial.  Critically, the X variable depends upon Y, which means that our unification will require at least one step:

X ↦ (List[Y'] => Y')

Y ↦ List[Y']

Z ↦ List[Y']

This is the same constraint set as before, except that we have substituted the known mapping for Y into the mapping for X.  This substitution allows us to eliminate X, Y and Z from our inferred types, resulting in the following typing for the baz function:

def baz(a: List[Y']=>Y', b: List[Y']): List[Y'] = a(b) :: b

Of course, this still isn’t valid.  Even assuming that Y' were valid Scala syntax, the type checker would complain that no such type can be found.  This situation actually arises surprisingly often when working with Hindley-Milner type reconstruction.  Somehow, at the end of all the constraint inference and unification, we have a type variable “left over” for which there are no known constraints.

The solution is to treat this unconstrained variable as a type parameter.  After all, if the parameter has no constraints, then we can just as easily substitute any type, including a generic.  Thus, the final revision of the baz function adds an unconstrained type parameter “A” and substitutes it for all instances of Y’ in the inferred types:

def baz[A](a: List[A]=>A, b: List[A]): List[A] = a(b) :: b
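The substitution step used in this unification can be sketched in a few lines of Python.  This is purely an illustration (the representation of types as strings and tuples is hypothetical, not how a real compiler does it):

```python
def substitute(ty, subst):
    """Replace type variables in ty using subst, recursing into compound
    types, which are modelled here as plain tuples."""
    if isinstance(ty, tuple):            # e.g. ("Fn", argument, result)
        return tuple(substitute(t, subst) for t in ty)
    return subst.get(ty, ty)

subst = {"Y": ("List", "Y'"), "Z": ("List", "Y'")}

# X maps to (Y => Y'); substituting the known mapping for Y yields
# (List[Y'] => Y'), which is exactly the unification step performed above.
print(substitute(("Fn", "Y", "Y'"), subst))
# ('Fn', ('List', "Y'"), "Y'")
```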


…and that’s all there is to it!  Hindley-Milner is really no more complicated than all of that.  One can easily imagine how such an algorithm could be used to perform far more complicated reconstructions than the trivial examples that we have shown.

Hopefully this article has given you a little more insight into how Hindley-Milner type reconstruction works under the surface.  This variety of type inference can be of immense benefit, reducing the amount of syntax required for type safety down to the barest minimum.  Our “bar” example actually started with (coincidentally) Ruby syntax and showed that it still had all the information we needed to verify type-safety.  Just a bit of information you might want to keep around for the next time someone suggests that all statically typed languages are overly-verbose.

The Joy of Concatenative Languages Part 3: Kindly Types


In parts one and two of this series, we dipped our toes into the fascinating world that is stack-based languages.  By this point, you should be fairly familiar with how to construct simple algorithms using Cat (the language we have been working with) as well as the core terminology of the paradigm.  In fact, with just the information given so far, you could probably go on to be productive with a real-world concatenative language like Factor.  However, the interest does not just stop there…

One of the interesting challenges in programming language design is the construction of a type system.  So as to clear up any possible misconception before it arises, this is how Pierce defines such a thing:

A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.

For Java, which has a comparatively weak type system, this usually means preventing you from accidentally using a String as if it were an int.  In other words, Java’s type system generally proves the absence of things like NoSuchMethodError and similar.  C#, which has a slightly more-powerful type system, can also prove the absence of most NullPointerException(s) when code is written in a correct and idiomatic fashion.  Scala goes even further with pattern matching…need I go on?  The point is that type systems do different things in different languages, so the definition needs to be flexible enough to reflect that.

In this article, we’re going to look at how we can define a type system for a functional (meaning that we have quotations) concatenative language.  In a comment on the first part of this series, it was suggested that the task of typing stack-based languages is a fairly trivial one.  This is true, but only to a certain point.  As we will see, there are dragons lurking in the conceptual shadows, waiting for us to disturb their sleep.

Simple Expressions

Let’s start out with typing something simple.  Consider the following program:


For those of you reading the RSS, what you see between the previous paragraph and this is exactly what I intended to write: nothing at all.  In a concatenative language, the empty program is usually considered to be valid.  After all, it takes a stack as input and returns the exact same stack.  We could replicate the semantics of this program by writing “dup pop“, but why bother?

The empty program has the following type:

('A -> 'A)

To the left of the -> we have what I like to call the “input constraints”: what types must be on the stack coming into the program (or phrase).  To the right of the arrow are the “output constraints”: what types will be on the stack when we’re done.  For reasons which will become clear later on, 'A in this case represents the whole input stack (regardless of what it contains).  Since we never change anything on the stack (the program is, after all, empty), the output stack has whatever type the input stack was given.  Another way of writing this type would be as follows:

* -> *

This literally symbolizes our intuition that the empty program has no input or output constraints.  However, this is somewhat less correct notationally since it implies that the input and output stacks are unrelated.  In fact, I would go so far as to say that this notation is wrong.  The only reason it is produced here is to serve as a memory aid.  For the remainder of the article, we will be using Cat’s notation for types.

Let’s look at something a little less trivial.  Consider the following program one word at a time:

1 2 +

Remember that an integer literal (or any literal for that matter) is just a function which pushes a specific constant onto the stack.  Let’s assign types based on what we expect the input/output constraints of these functions to be.  Note: I will be using the colon (:) notation to denote a type.  This isn’t conventional coming from C-land, but it is the gold standard of formal type theory:

1 : ('A -> 'A Int)
2 : ('A -> 'A Int)
+ : ('A Int Int -> 'A Int)

This is all very intuitive.  Integer literals work on any stack and just produce that stack with a new Int pushed onto the top.  Both 1 and 2 have the same type, which is a good sign that we’re on the right track.

The + word is a little more interesting.  Its runtime semantics are as follows: pop two integers off the stack, add them together and then push the result back on.  This word will not be able to execute without both integer values on the top of the stack.  Thus, it only makes sense that its input constraints be some stack with two values of type Int at the top.  Likewise, when we’re done, those two integers will be gone and a new Int will be pushed onto the remainder of the stack which was given to us.  Remember that 'A represents any stack, even if it is completely empty.

Coming back to our program, we can see that it is well typed by simply stringing together the types we have generated.  Starting from the top (using * to symbolize the empty stack):

Word   Input Stack   Output Stack
1      *             * Int
2      * Int         * Int Int
+      * Int Int     * Int

Do you see how the input stack of each word matches the output stack of the previous?  In this case, this sort of one-to-one matching indicates that the program is well-typed, producing a final stack with a single Int on it.  If we actually run this program, we would see that the evaluation matches the assigned types.
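This chaining of word types can be done mechanically.  The following Python fragment is a sketch (with a hypothetical representation of each word as a pair of pop/push type tuples) which symbolically evaluates a phrase against a type stack:

```python
def run(words, stack=()):
    """Symbolically evaluate a phrase: each word is a (pops, pushes) pair of
    type tuples; pops must match the top of the current type stack."""
    for pops, pushes in words:
        n = len(pops)
        if n > 0 and stack[len(stack) - n:] != pops:
            raise TypeError(f"expected {pops} on top of {stack}")
        stack = stack[:len(stack) - n] + pushes
    return stack

one  = ((), ("Int",))               # 1 : ('A -> 'A Int)
two  = ((), ("Int",))               # 2 : ('A -> 'A Int)
plus = (("Int", "Int"), ("Int",))   # + : ('A Int Int -> 'A Int)

print(run([one, two, plus]))   # ('Int',)
```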

First-Order Functions

This is fine for a simple addition program, but what if we throw functions into the mix?  Consider the same program we just analyzed wrapped up within a function:

define addSome {
  1 2 +
}

addSome
Here we define a function which has as a body the program we have already analyzed.  Down at the bottom of our new program, we actually call this function.  Here is the question: what type does the addSome word have?

To answer this question, look back at the table above and consider the Input Stack for the first word in concert with the Output Stack for the last.  Putting these two types together yields the following type for the aggregated whole:

1 2 + : ('A -> 'A Int)

These words (or “phrase”) takes any stack as input, and then through some manipulation produces a single Int on top of that stack as a result.  The stack may grow and shrink within the function, but at the end of the day, only the Int remains.  As we would expect, this matches the runtime semantics perfectly.

Given the fact that the phrase “1 2 +” has the type ('A -> 'A Int), it is reasonable to assign that same type to the function which contains it.  Thus, we can type-check the addSome program in a simple, one-row table:

Word      Input Stack   Output Stack
addSome   *             * Int

At the start of execution, the input stack to any program is *, or the empty stack.  However, this is fine with our type checker, since the program has 'A — or any stack — for its input parameters.

This is all so nice and intuitive, so let’s consider the case where we have a function which actually takes some parameters.  Specifically, let’s consider the following definition:

define addTwice {
  + +
}

At runtime, this function will take three values off the stack and then add them all together.  It is the Cat equivalent of the following in Scala:

def addTwice(a: Int, b: Int, c: Int) = a + b + c

The question is: how do we assign this (the Cat function) a type?  As we have done before, let’s look at the types of the individual words:

+ : ('A Int Int -> 'A Int)
+ : ('A Int Int -> 'A Int)

Not much help there.  Let’s try making a table:

Word   Input Stack   Output Stack
+      * Int Int     * Int
+      * Int Int     * Int

It’s tempting to look at this and just assign addTwice the type of ('A Int Int -> 'A Int).  However, this would be a mistake.  Notice the problem with our table above: the Input Stack type of the second word does not match the Output Stack of the first.  In other words, this program does not immediately type-check.

The problem is the second word is accessing more of the stack than the first.  We’re effectively “deferring” a parameter access until later in the function, rather than grabbing everything right away and threading the processing through from start to finish.  This is a perfectly reasonable pattern, but it plays havoc with our naive type system.

The solution is to merge the input constraints across both words.  The first word (+) requires two Int(s) to be on the top of the stack.  When it is done, those Int(s) are gone and a single Int has taken their place.  The second word (again +) also requires two Int(s) on the stack.  We only have one that we know of (the output Int from the first word), so we must unify the constraints and merge things back “up the chain” as it were.  In other words, our first word (+) will require not just two Int(s) on the stack but three: two for itself and one for the second word (+).  Our corrected table will look something like the following:

Word   Input Stack     Output Stack
+      * Int Int Int   * Int Int
+      * Int Int       * Int

With this new table, all of the Input and Output stacks match, which means that the type is valid and accurately describes the runtime evaluation.  Thus, based on this whole song and dance, we can assign the following type:

addTwice : ('A Int Int Int -> 'A Int)

As expected, this function takes not two, but three Int(s) on the stack and returns the remainder of that stack with a new Int on top.
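The merging of constraints “up the chain” can itself be sketched as an operation on stack effects.  The following Python fragment is a simplified illustration (effects as pop/push tuples, names hypothetical; it handles stack depth but not type variables):

```python
def compose(f, g):
    """Compose stack effects f then g, growing f's combined inputs when g
    needs more of the stack than f leaves behind (deferred access)."""
    f_pops, f_pushes = f
    g_pops, g_pushes = g
    n = min(len(f_pushes), len(g_pops))
    # The top n outputs of f must match the top n inputs of g.
    if n > 0 and f_pushes[len(f_pushes) - n:] != g_pops[len(g_pops) - n:]:
        raise TypeError("mismatched stack effects")
    # Whatever g still needs must come from below f's own inputs.
    deficit = g_pops[:len(g_pops) - n]
    return (deficit + f_pops, f_pushes[:len(f_pushes) - n] + g_pushes)

plus = (("Int", "Int"), ("Int",))
print(compose(plus, plus))
# (('Int', 'Int', 'Int'), ('Int',))  -- i.e. ('A Int Int Int -> 'A Int)
```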

Polymorphic Words

One mildly-annoying issue that we have just skated over is the problem of polymorphism.  Consider the following two programs:

42 pop

And this…

"fourty-two" pop

The question is: what type do we assign to pop?  We can easily make the following two assertions:

42           : ('A -> 'A Int)
"fourty-two" : ('A -> 'A String)

If we attempt to use this information to type-check the first program (assuming that it is sound), we will arrive at the following type for pop:

pop : ('A Int -> 'A)

That’s intuitive, right?  All that we’re doing here is taking the first value off of the stack (an Int, in the case of the first program) and throwing it away, returning the remainder of the stack.  However, if we use this type, we will run into some serious troubles type-checking the second program:

Word           Input Stack   Output Stack
"fourty-two"   *             * String
pop            * Int         *

Since pop has type ('A Int -> 'A) (as we asserted above), it is inapplicable to a stack with String on top.  Note that we can’t just push these constraints “up the chain”, since it is a case of direct type mismatch, rather than a stack of insufficient depth.  In short: we’re stuck.

The only way to solve this problem is to introduce the concept of parametric types.  Literally, we need to define a type which can be instantiated against a given stack, regardless of what type happens to match the parameters in question.  Java calls this concept “generics”.  Rather than giving pop the overly-restrictive type of ('A Int -> 'A), we will instead allow the value on top of the stack to be of any type (not just Int):

pop : ('A 'a -> 'A)

Note the fact that 'A and 'a are very separate type variables in this snippet.  'A represents the “rest of the stack”, while 'a represents a specific type which just happens to be on top of the input stack.  Using this new, more flexible type, we can produce tables for both of our earlier programs:

Word   Input Stack   Output Stack
42     *             * Int
pop    * Int         *


Word           Input Stack   Output Stack
"fourty-two"   *             * String
pop            * String      *

Everything matches and the world is once again very happy.  Note that we can also apply this parametric type concept to the slightly more interesting example of dup:

dup : ('A 'a -> 'A 'a 'a)

In other words, dup says that whatever type is on top of the stack when it starts, that type will be on top of the stack twice when it is finished.  Just like pop, this type can be instantiated against any stack with at least one type, regardless of whether that type is Int, String, or anything else for that matter.
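Instantiating a parametric type against the actual stack can be sketched as well.  In this Python illustration (again with a hypothetical representation), lowercase names in an effect stand for “any one type” and are bound on first use:

```python
def apply_word(effect, stack):
    """Match a word's input types against the top of the type stack,
    binding lowercase type variables on first use."""
    pops, pushes = effect
    if len(stack) < len(pops):
        raise TypeError("stack underflow")
    binding = {}
    for want, got in zip(reversed(pops), reversed(stack)):
        if want.islower():
            want = binding.setdefault(want, got)  # bind 'a to the actual type
        if want != got:
            raise TypeError(f"expected {want}, found {got}")
    rest = stack[:len(stack) - len(pops)]
    return rest + tuple(binding.get(t, t) for t in pushes)

pop = (("a",), ())          # pop : ('A 'a -> 'A)
dup = (("a",), ("a", "a"))  # dup : ('A 'a -> 'A 'a 'a)

print(apply_word(pop, ("String",)))   # ()
print(apply_word(dup, ("Int",)))      # ('Int', 'Int')
```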

Higher-Order Functions

We’ve seen how to type-check simple phrases, as well as first-order functions with deferred stack access and the occasional polymorphic word.  However, there is one particularly troublesome aspect of concatenative type systems which we have completely ignored: functions which take quotations off the stack.  In other words: what type do we assign to apply?  Consider the following function:

define trouble {
  apply
}

At runtime, trouble will pop a quotation and then evaluate it against the remainder of the stack.  Intuitively, we need to have some way of representing the type of a quotation, but that’s not even the most serious problem.  Somehow, we need to constrain the quotation to itself accept exactly the stack which remains after it is popped.  We also need to find some way of capturing its output type in order to compute the final output type of trouble.

More concretely, we can make a first attempt at assigning a type for trouble.  The underscores (_) illustrate an area where our type system is incapable of helping us:

trouble : (_ (_ -> _) -> _)

It’s very tempting to just throw an 'A in there and be done with it, but the truth is that for this type expression, there is no “unused stack”.  We don’t really know how much (or how little) of the stack will be used by the quotation; it could pop five elements, twenty or none at all.  It literally needs access to the remainder of the input stack in its entirety, otherwise the expression is useless.  Enter stack polymorphism…

Just as we needed a way to represent any single type in order to type-check pop and dup, we now need a way to represent any stack type in order to type-check apply.  Fortunately, the answer is already nestled within our pre-established notation.  Consider the type of +:

+ : ('A Int Int -> 'A Int)

We have been taking this to mean “any stack with two Int(s) on top resulting in that same stack with only one Int“.  This is true, but we’re being a little hand-wavy about the meaning of “any stack” and how it relates to 'A.  When we really get down to it, what’s happening here is 'A is being instantiated against a particular input stack, whatever that stack happens to be.  When we were type-checking + +, the first word instantiated 'A not to mean the empty stack (*), but rather a stack with at least one Int on it.  This was required to successfully type the second +.

We can very easily extend this notational convenience to represent generalized stack parameters.  Rather than being instantiated to specific types, stack parameters are instantiated to some stack in its entirety.  Just as with type parameters, wherever we see that instantiated stack parameter within a type expression, it will be replaced with whatever stack type it was assigned.  Thus, we can assign trouble the following type:

trouble : ('A ('A -> 'B) -> 'B)

In other words, trouble takes some stack A which has a quotation on top.  This quotation accepts stack A itself and returns some new stack B.  Note that we don’t really know anything about B.  It could be related to A, but it might not be.  The final result of the whole expression is this new stack B.

This concept is remarkably powerful.  In combination with the other types we have already examined, it lets us type check the entirety of Cat and be assured of the absence of type-mismatch and stack-underflow errors.  Considering the fact that Cat is almost exactly as powerful as Joy, that’s a pretty impressive feat.

From a theoretical standpoint, things get even more interesting when we consider the type of the following function:

define y {
  [dup papply] swap compose dup apply
}

This has the following type:

y : ('A ('A ('A -> 'B) -> 'B) -> 'B)

As you may have guessed by the name, this is the Y-combinator[1], one of the most well-known mechanisms for producing recursion in a nameless system.  Note that this definition looks a little different from the pure-untyped lambda calculus (call-by-name semantics):

λf . (λx . f (x x)) (λx . f (x x))
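As an aside, a direct transcription of this term into a strict language loops forever, since the argument x x is evaluated eagerly.  In Python, for instance, one would use the call-by-value variant (the Z combinator), with the self-application eta-expanded:

```python
# Z combinator: the call-by-value fixed-point combinator.  Eta-expanding
# x(x) into (lambda v: x(x)(v)) delays evaluation until it is applied.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion without a named self-reference: factorial via the fixed point.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120
```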

What I’m trying to point out here is the fact that Cat is able to leverage its type system to assign a type to the Y-combinator.  This is something which is literally impossible in System F, a typed form of lambda-calculus.  In fact, the only way to type-check this function in a lambda-calculus-derivative system would be to add recursive types.  Cat is able to get by with a very much non-recursive type definition, something which I find fascinating in the extreme.

Update: The above paragraph is somewhat misleading.  It turns out that Cat actually does use a recursive type under the surface to derive the non-recursive type for y.  Specifically:

dup papply : ('A ('B ('B self -> 'C) -> 'C) -> 'A ('B -> 'C))

On a further theoretical note, the device in Cat’s type system which allows this power is in fact the stack type variable (e.g. 'A).  These stack types are conceptually quite similar to the type parameters we used in typing pop (e.g. 'a), but still in a very separate domain.  In fact, stack types have a different kind than regular types.  This is not to say that Cat employs higher-kinds such as Scala’s (e.g. * => *), but it does have two very different type kinds: stacks and values.

And yet, it is not kinds in and of themselves which allows for the typing of the Y-combinator.  Fω is essentially System F with higher-kinds, and yet it is still incapable of handling this tiny little expression.  Most interesting indeed…


As you can see, type systems and concatenative languages do fit together nicely, but it takes a lot more effort than one would initially expect.  While typing simple expressions is easy enough, the waters are muddied as soon as higher-order functions and even deferred stack access enters the mix.  This is an extremely fertile area for research, where a lot of interesting ideas are being developed.  For example, John Nowak’s 5th attempts to apply a type system to the stack-based paradigm, but in a very different way than Cat.

I hope you enjoyed this mini-series of articles on concatenative languages.  While they are a bit of a backwater in the programming language menagerie, I think that studying them can be a very instructive experience.  Furthermore, there remain some problems that are very nicely expressed in languages like Cat while being extremely unwieldy in more conventional languages like Scala.  Despite the obscurity of concatenative languages, it never hurts to have an extra language on hand, ready for those times when it really is the best tool for the job.

[1] Technically, this is a little different from the Y-combinator used in conventional lambda-calculus (it executes the quotation rather than returning a fixed-point).  However, conceptually it is the same idea.

The Joy of Concatenative Languages Part 2: Innately Functional


In part one of this series, I introduced the concept of a stack-based language and in particular the syntax and rough ideas behind Cat.  However, to anyone coming into concatenative land for the first time, my examples likely seemed both odd and unconvincing.  After all, why would you ever use point-free programming when everyone else seems to be sold on the idea of name binding?  More importantly, where do these languages fit in with our established menagerie of language paradigms?

The answer to the first question really depends on the situation.  I personally think that the best motivation for concatenative languages is their syntax.  If you want to create an internal DSL, there will be no language better suited to it than one which is concatenative, Cat, Factor or otherwise.  This is because stack-oriented languages can get away with almost no syntax whatsoever.  They say that Lisp is a syntax-free language, but this holds even more strongly for languages like Cat.  Well, that and you don’t have to deal with all the parentheses…

The second question is (I think) the more interesting one: how do we classify these languages and what sort of methodologies should we apply?  At first glance, Cat (and other languages like it) seem to be quite imperative in nature.  After all, you have a single mutable stack that any function can modify.  However, if you turn your head sideways and blink twice, you begin to realize that concatenative languages are really much closer to the functional side of the oyster.

Consider the following Cat program:

define plus { + }
define minus { - }
7 2 3
plus minus

Trivial, but to the point.  This program first adds the numbers 2 and 3, then subtracts the result from 7.  Thus, the final result is a value of 2 on the stack.  The only twist is that we have defined functions plus and minus to do the dirty work for us.  This wasn’t strictly necessary, but I wanted to emphasize that + and - really are functions.  We could express the exact same program in Scala:

def plus(a: Int, b: Int) = a + b
def minus(a: Int, b: Int) = a - b
minus(7, plus(2, 3))
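The same correspondence can be sketched in Python by modelling each word as a function from stack to stack; running the program is then nothing more than composing the words left to right (the function names here are hypothetical):

```python
def push(n):
    """A literal is just a function which pushes a constant onto the stack."""
    return lambda stack: stack + [n]

def plus(stack):
    """Pop two values, push their sum."""
    return stack[:-2] + [stack[-2] + stack[-1]]

def minus(stack):
    """Pop two values, push their difference."""
    return stack[:-2] + [stack[-2] - stack[-1]]

# The program "7 2 3 plus minus" as a left-to-right composition of words
program = [push(7), push(2), push(3), plus, minus]

stack = []
for word in program:
    stack = word(stack)
print(stack)   # [2]
```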

Do you see how the consecutive invocations of plus and minus in Cat became composed invocations in Scala?  This is where the term “concatenative language” derives from: the whole program is just a series of function compositions.  Wikipedia’s article on Cat has a very nice, mathematical description:

Two adjacent terms in Cat imply the composition of functions that generate stacks, so the Cat program f g is equivalent to the mathematical expression g(f(x)), where x is the stack input to the expression.

Strictly speaking, a concatenative language could be implemented without a stack, but such an implementation would likely be a bit harder to use than the average stack-based language.

Coming back to my original premise: concatenative languages are functional in nature.  Absolutely everything in Cat is a function.  Operators, words, even numeric literals like “3” are actually functions at the conceptual level.  Additionally, Cat, Joy and Factor all offer a mechanism for treating functions as first-class values:

2 3
[ + ]

The square-bracket ([]) syntax is representative of a quotation.  Literally this means “create a function of the enclosed words and place it as a value on the stack”.  We can pop this function off the stack and invoke it by using the apply word.  Incidentally, you may have noticed that this syntax is remarkably close to that which is used in if conditionals:

5 0 <
[ "strange math" ]
[ "all is well" ]
if

This syntax works because if isn’t conceptually a language primitive: it’s just another function which happens to take a boolean and two quotations off the stack.  For the sake of efficiency, Cat does indeed implement if as a primitive, but this was a deliberate optimization rather than an implementation forced by the language design.  Untyped Cat (see Part 3) is equivalent in power to the pure-untyped lambda calculus, and as our friend Alonzo Church showed us, if-style conditionals are easily accomplished:

TRUE = λa . λb . a
FALSE = λa . λb . b

IF = λp . λt . λe . p t e

Yeah, maybe we’re drifting a bit off-point here…

Higher-Order Programming

So if Cat is just another functional programming language, then we should be able to implement all of those higher-order design patterns that we’ve come to know and love in languages like Scala and ML.  To see how, let’s look at implementing some simple list manipulation functions in Cat.  The easiest would be to start with append, which pops two lists off of the stack and pushes a new list which is the end-to-end concatenation of the two originals:

define append {
  empty
    [ pop ]
    [ uncons [append] dip cons ]
  if
}

This function starts by checking to see if the top list is empty.  If so, we just pop it off the stack and leave the other right where it is.  Appending an empty list should always yield the original list.  However, if the head list is not empty, then we need to work a bit.  First, we decompose it into its tail and head, which are pushed onto the stack in order by the uncons function.  Next, we need to recursively append the tail with our second list on the stack.  However, the head of the list from uncons is in the way on top of the stack.  We could use stack manipulation to move things around and get our lists up to the head of the stack, but dip provides us with a handy, higher-order shortcut.  We temporarily remove the top of the stack, invoke the quotation “[append]” against the remainder and then push the old top back on top of the result.

The dip operation is surprisingly powerful, making it possible to completely live without either variables or multiple stacks.  Any non-trivial Cat program will need to make use of this handy function at some level.
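To make dip’s behavior concrete, here is a minimal Python model I’ve sketched for illustration (a list plays the part of the operand stack, with the top at the end; the function names are mine, not Cat’s implementation):

```python
# A sketch of `dip` with a Python list standing in for the
# operand stack (top of stack = end of the list).

def dip(stack, quotation):
    # Temporarily remove the top value...
    saved = stack.pop()
    # ...run the quotation against the remaining stack...
    quotation(stack)
    # ...then push the old top back on.
    stack.append(saved)

def plus(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

stack = [2, 3, 1]
dip(stack, plus)   # equivalent to: 2 3 1 [ + ] dip
print(stack)       # [5, 1]
```

This is exactly the trick the append definition relies on: the head element saved by dip stays out of the way while “[append]” runs.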

Once we have the old head and the new appended-list on the stack, all we need to do is put them back together using cons.  This function leaves a new list on the stack in place of the old list and head element.  This Cat program is almost precisely analogous to the following ML:

fun append ls nil = ls
  | append ls (hd :: tail) = hd :: (append ls tail)

Personally, I find the ML a lot easier to read, but that’s just me.  Obviously it’s a lot shorter, but as it turns out, our Cat implementation, while intuitive, was sub-optimal.  Cat already implements append in the guise of the cat function, and it is far more concise than what I showed:

define cat {
  swap [cons] rfold
}

It’s almost frightening how short this is: only three words.  It’s not as if rfold is doing anything mysterious either; it’s just a simple right-fold function that takes a list, an initial value and a quotation, producing a result by traversing the entire list.  We can use something similar back in ML-land, achieving an implementation which is arguably equivalent in subjective elegance:

val append = foldr (op::)
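The same fold-based trick can be sketched in Python; `foldr` here is a hand-rolled right fold (Python’s standard library only provides a left fold), so treat this as a rough analogue rather than a literal translation:

```python
# A right fold, plus append defined in terms of it -- a rough
# analogue of the ML `val append = foldr (op::)` above.

def foldr(f, acc, xs):
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

def append(ls, other):
    # cons each element of `other` onto `ls`, right to left
    return foldr(lambda hd, tail: [hd] + tail, ls, other)

print(append([3, 4], [1, 2]))  # [1, 2, 3, 4]
```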

Moving on, we can also implement a length function in Cat, this time using fold to tighten things up:

define length {
  0 [ pop 1 + ] fold
}

You’ll notice that we have to mess around a bit in the quotation in order to avoid the first “parameter”, the current element of the list (which we do not need).  Expressing this in ML yields a very similar degree of cruft:

val length = foldl (fn (n, _) => n + 1) 0
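For comparison, the same fold-based length in Python, where the ignored element shows up as the underscore just as the Cat quotation pops the unused “parameter”:

```python
from functools import reduce

# length as a left fold: the accumulator counts, and each list
# element is deliberately ignored (the `_`).
def length(xs):
    return reduce(lambda n, _: n + 1, xs, 0)

print(length(["a", "b", "c"]))  # 3
```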


The important take-away from this tangled morass of an article is that Cat is a highly functional language, capable of easily keeping up with some of the stalwart champions of the paradigm.  More significantly, this is a trait shared by all concatenative languages.  Rather than throwing away all of the old wisdom learned in language design, stack-based languages build on it by providing an alternative view into the world of functions.

In the next (and final) article of the series, we will take a brief look at the challenges of applying a type system to a concatenative language and the fascinating techniques used by Cat to achieve just that.

The Joy of Concatenative Languages Part 1


Concatenative languages like Forth have been around for a long time.  Hewlett-Packard famously employed a stack-based language called “RPL” on their HP-28 and HP-48 calculators, bringing the concept of Reverse Polish Notation to the mainstream…or as close to the mainstream as a really geeky toy can get.  Surprisingly though, these languages have not seen serious adoption beyond the experimental and embedded device realms.  And by “adoption”, I mean real programmers writing real code, not this whole interpreted bytecode nonsense.

This is a shame, because stack-based languages have a remarkable number of things to teach us.  Their superficial distinction from conventional programming languages very quickly gives way to a deep connection, particularly with functional languages.  However, if we dig even deeper, we find that this similarity has its limits.  There are some truly profound nuggets of truth waiting to be uncovered within these murky depths.  Shall we?

Trivial aside: I’m going to use the terms “concatenative” and “stack-based” interchangeably throughout the article.  While these are most definitely related concepts, they are not exactly synonyms.  Bear that in mind if you read anything more in-depth on the subject.

The Basics

Before we look at some of those “deeper truths” of which I speak, it might be helpful to at least understand the fundamentals of stack-based programming.  From Wikipedia:

The concatenative or stack-based programming languages are ones in which the concatenation of two pieces of code expresses the composition of the functions they express. These languages use a stack to store the arguments and return values of operations.

Er, right.  I didn’t find that very helpful either.  Let’s try again…

Stack-based programming languages all share a common element: an operand stack.  Consider the following program:
2


Yes, this is a real program.  You can copy this code and run/compile it unmodified using most stack-based languages.  However, for reasons which will become clear later in this series, I will be using Cat for most of my examples.  Joy and Factor would both work well for the first two parts, but for part three we’re going to need some rather unique features.

Returning to our example: all this will do is take the numeric value of 2 and push it onto the operand stack.  Since there are no further words, the program will exit.  If we want, we can try something a little more interesting:

2 3 +

This program first pushes 2 onto the stack, then 3, and finally it pops the top two values off of the stack, adds them together and pushes the result.  Thus, when this program exits, the stack will only contain 5.
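The execution model described above is simple enough to sketch in a few lines of Python (a toy interpreter I’m inventing purely for illustration, not how Cat is actually implemented):

```python
# A toy evaluator for integer-and-plus programs: each word either
# pushes a number or pops two values and pushes their sum.

def run(program):
    stack = []
    for word in program.split():
        if word == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(int(word))
    return stack

print(run("2 3 +"))  # [5]
```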

We can mix and match these operations until we’re blue in the face, but it’s still not a terribly interesting language.  What we really need is some sort of flow control.  To do that, we need to understand quotations.  Consider the following Scala program:

val plus = { (x: Int, y: Int) => x + y }
plus(2, 3)

Notice how rather than directly adding 2 and 3, we first create a closure/lambda which encapsulates the operation.  We then invoke this closure, passing 2 and 3 as arguments.  We can emulate these exact semantics in Cat:

2 3
[ + ]
apply

The first line pushes 2 and 3 onto the stack.  The second line uses square brackets to define a quotation, which is Cat’s version of a lambda.  Note that it isn’t really a closure since there are no variables to enclose.  Joy and Factor also share this construct.  Within the quotation we have a single word: +.  The important thing is that the quotation itself is what is put on the stack; the + word is not immediately executed.  This is exactly how we declared plus in Scala.

The final line invokes the apply word.  When this executes, it pops one value off the stack (which must be a quotation).  It then executes this quotation, giving it access to the current stack.  Since the quotation on the head of the stack consists of a single word, +, executing it will result in the next two elements being popped off (2 and 3) and the result (5) being pushed on.  Exactly the same result as the earlier example and the exact same semantics as the Scala example, but a lot more concise.

Cat also provides a number of primitive operations which perform their dirty work directly on the stack.  These operations are what make it possible to reasonably perform tasks without variables.  The most important operations are as follows:

  • swap — exchanges the top two elements on the stack.  Thus, 2 3 swap results in a stack of “3 2” in that order.
  • pop — drops the first element of the stack.
  • dup — duplicates the first element and pushes the result onto the stack.  Thus, 2 dup results in a stack of “2 2“.
  • dip — pops a quotation off the stack, temporarily removes the next item, executes the quotation against the remaining stack and then pushes the old head back on.  Thus, 2 3 1 [ + ] dip results in a stack of “5 1“.
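Modeled in Python (again treating a list as the stack, top at the end), the first three primitives are one-liners.  These are hypothetical models of mine, not Cat’s actual implementation:

```python
# Sketches of swap, pop and dup over a Python list used as a
# stack (top of stack = end of the list).

def swap(s):
    s[-1], s[-2] = s[-2], s[-1]

def pop(s):
    s.pop()

def dup(s):
    s.append(s[-1])

s = [2, 3]
swap(s)
print(s)  # [3, 2]
dup(s)
print(s)  # [3, 2, 2]
pop(s)
print(s)  # [3, 2]
```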

There are other primitives, but these are the big four.  It is possible to emulate any control structure (such as if/then) just using the language shown so far.  However, to do so would be pretty ugly and not very useful.  Cat does provide some other operations to make life a little more interesting.  Most significantly: functions and conditionals.  A function is defined in the following way:

define plus { + }

Those coming from a programming background involving variables (that would be just about all of us) would probably look at this function and feel as if something is missing.  The odd part is that there is no need to declare parameters: all operands are on the stack anyway, so there’s no need to pass anything around explicitly.  This is part of why concatenative languages are so extraordinarily concise.

Conditionals also look quite weird at first glance, but under the surface they are profoundly elegant:

2 3 plus    // invoke the `plus` function
10 <
[ 0 ]
[ 42 ]
if

Naturally enough, this code pushes 0 onto the stack.  The conditional for an if is just a boolean value pushed onto the stack.  On top of that value, if will expect to find two quotations, one for the “then” branch and the other for the “else” branch.  Since 5 is less than 10, the boolean value will be True.  The if word (which could just as easily be an ordinary function) pops the quotations off of the stack as well as the boolean.  Since the value is True, it discards the second quotation and executes the first, producing 0 on the stack.
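The claim that if is conceptually just a function is easy to demonstrate in Python, with quotations modeled as callables that receive the stack (my sketch, not Cat’s implementation):

```python
# `if` as an ordinary stack function: pop the "else" quotation,
# the "then" quotation and the boolean, then run exactly one of
# the two quotations against the remaining stack.

def if_word(stack):
    else_q = stack.pop()
    then_q = stack.pop()
    cond = stack.pop()
    (then_q if cond else else_q)(stack)

# models:  5 10 <  [ 0 ]  [ 42 ]  if
stack = [5 < 10, lambda s: s.append(0), lambda s: s.append(42)]
if_word(stack)
print(stack)  # [0]
```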

I’ll leave you with the more complicated example of the factorial function:

define fac {
  dup 0 eq
  [ pop 1 ]
  [ dup 1 - fac * ]
  if
}

Note that this isn’t even the most concise way of writing this, but it does the job.  To see how, let’s look at how this will execute word-by-word (assuming an input of 4):

Word                  Stack
----                  -----
dup                   4 4
0                     4 4 0
eq                    4 False
[ pop 1 ]             4 False [pop 1]
[ dup 1 - fac * ]     4 False [pop 1] [dup 1 - fac *]
if                    4
dup                   4 4
1                     4 4 1
-                     4 3
fac                   4 6      (assume magic recursion)
*                     24


The final result is 24, a value which is left on the stack.  Pretty nifty, eh?
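Transliterated into Python, the stack shuffling of fac collapses into ordinary recursion, which makes the correspondence with the trace above easy to check:

```python
# fac, word for word: `dup 0 eq` becomes the zero test,
# `[ pop 1 ]` the base case, `[ dup 1 - fac * ]` the recursion.

def fac(n):
    if n == 0:
        return 1
    return n * fac(n - 1)

print(fac(4))  # 24
```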


You’ll notice this is a shorter post than I usually spew forth (no pun intended…this time).  The reason being that I want this to be fairly easy to digest.  Concatenative languages (and Cat in particular) are really not all that difficult to pick up.  They are a slightly different way of thinking about programming, but as we will see in the next part, not so different as it would seem.

Note: Cat is written in C# and is available under the MIT License.  Don’t fear the CLR though: Cat runs just fine under Mono.  If you really want to experiment with no risk to yourself, a JavaScript interpreter is available.

Introduction to Automated Proof Verification with SASyLF


Doesn’t that title just get the blood pumping?  Proof verification has a reputation for being an inordinately academic subject.  In fact, even within scholarly (otherwise known as “unrealistically intelligent”) circles, the automated verification of proofs is known mainly as a complex, ugly and difficult task often not worth the effort.  This is a shame really, because rigorous proofs are at the very core of both mathematics and computer science.  We are nothing without logic (paraphrased contrapositive from Descartes).  Believe it or not, understanding basic proof techniques will be of tremendous aid to your cognitive process, even when working on slightly less ethereal problems such as how to get the freakin’ login page to work properly.

Well, if you made it all the way to the second paragraph, then you either believe me when I say that this is legitimately useful (and cool!) stuff, or you’re just plain bored.  Either way, read on as we commence our exciting journey into the land of rigorous proofs!

SASyLF Crash Course

If you’re at all familiar with the somewhat-specialized field of proof verification, you probably know that SASyLF (pronounced “sassy elf”) is not the most widely used tool for the job.  In fact, it may very well be the least well-known.  More commonly, proofs that require automatic verification are written in Twelf or Coq.  Both of these are fine tools and capable of a lot more than SASyLF, but they can also be extremely difficult to use.  One of the primary motivations behind SASyLF was to produce a tool which was easier to learn, had a higher level syntax (easier to read) and which gave more helpful error messages than Twelf.  The main idea behind these convolutions was to produce a tool which was more suitable for use in the classroom.

The main design decision which sets SASyLF apart from Twelf is the way in which proofs are expressed.  As I understand it, Twelf exploits Curry-Howard correspondence to represent proofs implicitly in the types of a functional program (update: this is incorrect; see below).  While this can be very powerful, it’s not the most intuitive way to think about a proof.  Eschewing this approach, SASyLF expresses proofs using unification (very similar to Prolog) and defines inference rules explicitly in a natural-language style.

There are three main components to a SASyLF proof:

  • Syntax
  • Judgments
  • Theorems/Lemmas

Intuitively enough, the syntax section is where we express the grammar for the language used throughout our proof.  This grammar is expressed very naturally using BNF, just as if we were defining the language mathematically for a hand-written proof.  Left-recursion is allowed, as are right-recursion, arbitrary symbols, ambiguity and so on.  SASyLF’s parser is mind-bogglingly powerful, capable of chewing through just about any syntax you throw at it.  The main restriction is that you cannot use parentheses, square brackets ([]), pipes (|) or periods (.) in your syntax.  The pure-untyped lambda calculus defined in SASyLF would look something like this:

t ::= fn x => t[x]
    | t t
    | x

I said we couldn’t use brackets, but that’s only because SASyLF assigns some special magic to these operators.  In a nutshell, they allow the above definition of lambda calculus to ignore all of the issues associated with variable name freshness and context.  For simplicity’s sake, that’s about as far as I’m going to go into these mysterious little thingies.

The judgments section is where we define our inference rules.  Just as if we were defining these rules by hand, the syntax puts the premises above a line of hyphens with the conclusion below.  The label for the rule goes to the right of the “line”.  What could be more natural?

judgment eval: t -> t

t1 -> t1'
--------------- E-Beta1
t1 t2 -> t1' t2

The judgment syntax is what defines the syntax for the -> “operator”.  Once SASyLF sees this, it knows that we may define rules of the form t -> t, where t is defined by the syntax section.  Further on down, SASyLF sees our E-Beta1 rule.  Each of the tokens within this rule (aside from ->) begins with “t“.  From this, SASyLF is able to infer that we mean “a term as defined previously”.  Thus, this rule is syntactically valid according to our evaluation judgment and the syntax given above.

Of course, theorems are where you will find the real meat of any proof (I’m using the word “proof” very loosely to mean the collection of proven theorems and lemmas which indicates some fact(s) about a language).  SASyLF wouldn’t be a very complete proof verification system without support for some form of proving.  Once again, the syntax is extremely close to natural language, almost to the point of being overly-verbose.  A simple theorem given the rules above plus a little would be to show that values cannot evaluate:

theorem eval-value-implies-contradiction:
    forall e: t -> t'
    forall v: t value
    exists contradiction .

    _: contradiction by unproved
end theorem

Note that contradiction is not more SASyLF magic.  We can actually define what it means to have a contradiction by adding the following lines to our judgment section:

judgment absurd: contradiction

In other words, we can have a contradiction, but there are no rules which allow us to get it.  In fact, the only way to have a contradiction is to somehow get SASyLF to the point where it sees that there are no cases which satisfy some set of proven facts (given the forall assumptions).  If SASyLF cannot find any cases to satisfy some rules, it allows us to derive anything at all, including judgments which have no corresponding rules.

Readers who have yet to fall asleep will notice that I cleverly elided a portion of the “theorem” code snippet.  That’s because there wasn’t really a way to prove that contradiction given the drastically abbreviated rules given in earlier samples.  Instead of proving anything, I used a special SASyLF justification, unproved, which allows the derivation of any fact given no input (very useful for testing incomplete proofs).  Lambda calculus isn’t much more complicated than what I showed, but it does require more than just an application context rule in its evaluation semantics.  In order to get a taste for SASyLF’s proof syntax, we’re going to need to look at a much simpler language.

Case Study: Integer Comparison

For this case study, we’re going to be working with simple counting numbers which start with 0 and then proceed upwards, each value expressed as the successor of its previous value.  Thus, the logical number 3 would be s s s 0.  Not a very useful language in the real world, but much easier to deal with in the field of proof verification.  The syntax for our natural numbers looks like this:

n ::= 0
    | s n

With this humble definition for n, we can go on to define the mathematical greater-than comparison using two rules under a single judgment:

judgment gt: n > n

------- gt-one
s n > n

n1 > n2
--------- gt-more
s n1 > n2

Believe it or not, this is all we need to do in terms of definition.  The first rule says that the successor of any number is greater than that same number (3 > 2).  The second rule states that if we already have two numbers, one greater than the other (12 > 4), then the successor of the greater number will still be greater than the lesser (13 > 4).  All very intuitive, but the real question is whether or not we can prove anything with these definitions.
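To see that these two rules really do pin down greater-than, here is a small Python proof search over them (numerals encoded as plain ints, so 3 stands for s s s 0; this is my own sketch, nothing to do with how SASyLF works internally):

```python
# Search for a derivation of n1 > n2 using only gt-one and
# gt-more.  Returns the list of rules applied (innermost first),
# or None when no derivation exists.

def derive_gt(n1, n2):
    if n1 == n2 + 1:
        return ["gt-one"]            # s n > n
    if n1 > 0:
        sub = derive_gt(n1 - 1, n2)  # premise: n1-1 > n2
        if sub is not None:
            return sub + ["gt-more"]  # s (n1-1) > n2
    return None

print(derive_gt(3, 2))   # ['gt-one']
print(derive_gt(13, 4))  # gt-one followed by eight gt-more steps
print(derive_gt(2, 5))   # None -- no derivation, as expected
```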

An Easy Lemma

For openers, we can try something reasonably simple: prove that all non-zero numbers are greater than zero.  This is such a simple proof that we won’t even bother calling it a theorem; we will give it the lesser rank of “lemma”:

lemma all-gt-zero:
    forall n
    exists s n > 0 .

    _: s n > 0 by induction on n:
        case 0 is
            _: s 0 > 0 by rule gt-one
        end case

        case s n1 is
            g: s n1 > 0 by induction hypothesis on n1
            _: s s n1 > 0 by rule gt-more on g
        end case
    end induction
end lemma

In order to prove anything about n, we first need to “pull it apart” and find out what it’s made of.  To do that, we’re going to use induction.  We could also use case analysis, but that would only work if our proof didn’t require “recursion” (we’ll get to this in a minute).  There are two cases as given by the syntax for n: when n is “0“, and when n is “s n1“, where n1 is some other number.  We must prove that s n > 0 for both of these cases individually, otherwise our proof is not valid.

The first case is easy.  When n is 0, the proof is trivial using the rule gt-one.  Notice that within this case we are no longer proving s n > 0, but rather s 0 > 0.  This is the huge win brought by SASyLF’s unification: n is “0” within this case.  Anything we already know about n, we also know about 0.  When we apply the rule gt-one, SASyLF sees that we are attempting to prove s n > n where n is “0”.  This is valid by the rule, so the verification passes.

The second case is where things get interesting.  We have that n is actually s n1, but that doesn’t really get us too much closer to proving s s n1 > 0 (remember, unification).  Fortunately, we can prove that s n1 > 0 because we’re writing a lemma at this very moment which proves exactly that.  This is like writing a function to sum all the values in a list: when the list is empty, the result is trivial; but when the list has contents, we must take the head and then add it to the sum of the tail as computed by…ourself.  Induction is literally just recursion in logic.  Interestingly enough, SASyLF is smart enough to look at all of the inductive cases in your proof and verify that they are valid.  This is sort-of the equivalent of a compiler looking at your code and telling you whether or not it will lead to an infinite loop.
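That sum-of-a-list analogy looks like this in Python; the recursive call plays exactly the role of the induction hypothesis:

```python
# Summing a list recursively: the empty case is trivial, and the
# non-empty case appeals to "ourself" on the tail -- the same
# shape as the induction in the lemma above.

def total(xs):
    if not xs:
        return 0
    head, *tail = xs
    return head + total(tail)

print(total([1, 2, 3, 4]))  # 10
```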

To get that s n1 > 0, we use the induction hypothesis, passing n1 as the “parameter”.  However, we’re not quite done yet.  We need to prove that s s n1 > 0 in order to unify with our original target (s n > 0).  Fortunately, we already have a rule that allows us to prove the successor of a number retains its greater-than status: gt-more.

However, gt-more has a condition in our definition.  It requires that we already have some fact n1 > n2 in order to obtain s n1 > n2.  In our case, we already have this fact (s n1 > 0), but we need to “pass” it to the rule.  SASyLF allows us to do this by giving our facts labels.  In this case, we have labeled the s n1 > 0 fact as “g“.  We take this fact, pack it up and send it to gt-more and it gives us back our final goal.

A Slightly Harder Theorem

A slightly more difficult task would be to prove that taking the successor of two numbers preserves their greater-than relationship.  Thus, if we know that 4 > 3, we can prove that 5 > 4.  More formally:

theorem gt-implies-gt-succ:
    forall g: n1 > n2
    exists s n1 > s n2 .

    _: s n1 > s n2 by unproved
end theorem

At first glance, this looks impossible since we don’t really have a rule dealing with s n on the right-hand side of the >-sign.  We can try to prove this one step at a time to see whether or not this intuition is correct.

Almost any lemma of interest is going to require induction, so immediately we jump to inducting on the only fact we have available: g.  Note that this is different from what we had in the earlier example.  Instead of getting the different syntactic cases, we’re looking at the rules which would have allowed the input to be constructed.  After all, whoever “called” our theorem will have needed to somehow prove that n1 > n2; it would be helpful to know what facts they used to do that.  SASyLF allows this using the case rule syntax.  We start with the easy base case:

_: s n1 > s n2 by induction on g:
    case rule
        ------------ gt-one
        _: s n2 > n2
        _: s s n2 > s n2 by rule gt-one
    end case
end induction

In this case, the term _: s n2 > n2 is unified with n1 > n2.  Thus, n1 is actually “s n2“.  This means that by unification, we are actually trying to prove s s n2 > s n2.  Fortunately, we have a rule for that.  If we let “n” be “s n2“, we can easily apply the rule gt-one to produce the desired result.

The second case is a bit trickier.  We start out by defining the case rule according to the inference rules given in the judgment section.  The only case left is gt-more, so we mindlessly copy/paste and correct the variables to suit our needs:

case rule
    g1: n11 > n2
    ------------- gt-more
    _: s n11 > n2
    _: s s n11 > s n2 by unproved
end case

In this case, n1 actually unifies with “s n11“.  This is probably the most annoying aspect of SASyLF: all of the syntax is determined by token prefix, so every number has to start with n, occasionally making proofs a little difficult to follow.

At this point, we need to derive s s n11 > s n2.  Since the left and right side of the > “operator” do not share a common sub-term, the only rule which could possibly help us is gt-more.  In order to apply this rule, we will somehow need to derive s n11 > s n2 (remember, gt-more takes a known greater-than relationship and then tells us something about how the left-successor relates to the right).  We can reflect this “bottom-up” step towards a proof in the following way:

case rule
    g1: n11 > n2
    ------------- gt-more
    _: s n11 > n2
    g: s n11 > s n2 by unproved
    _: s s n11 > s n2 by rule gt-more on g
end case

At this point, SASyLF will warn us about the unproved, but it will happily pass the rest of our theorem.  This technique for proof development is extremely handy in more complicated theorems.  The ability to find out whether or not your logic is sound even before the proof is complete can be very reassuring (in this way you can avoid chasing down entirely the wrong logical path).

In order to make this whole thing work, we need to somehow prove s n11 > s n2.  Fortunately, we just so happen to be working on a theorem which could prove this if we could supply n11 > n2.  This fact is conveniently available with the label of “g1“.  We feed this into the induction hypothesis to achieve our goal.  The final theorem looks like this:

theorem gt-implies-gt-succ:
    forall g: n1 > n2
    exists s n1 > s n2 .

    _: s n1 > s n2 by induction on g:
        case rule
            ------------ gt-one
            _: s n2 > n2
            _: s s n2 > s n2 by rule gt-one
        end case

        case rule
            g1: n11 > n2
            ------------- gt-more
            _: s n11 > n2
            g2: s n11 > s n2 by induction hypothesis on g1
            _: s s n11 > s n2 by rule gt-more on g2
        end case
    end induction
end theorem


I realize this was a bit of a deviation from my normal semi-practical posts, but I think it was still a journey well worth taking.  If you’re working as a serious developer in this industry, I strongly suggest that you find yourself a good formal language and/or type theory textbook (might I recommend?) and follow it through as best you can.  The understanding of how languages are formally constructed and the mental circuits to create those proofs yourself will have a surprisingly powerful impact on your day-to-day programming.  Knowing how the properties of a language are proven provides tremendous illumination into why that language is the way it is and sometimes how it can be made better.

Credit: Examples in this post drawn rather unimaginatively from Dr. John Boyland’s excellent course in type theory.