
Universal Type Inference is a Bad Thing


Recently, Scala has been wowing developers with its concise syntax and powerful capabilities, but perhaps its most impressive feature is local type inference.  When the intended type for an element is obvious, the Scala compiler is able to make the inference and there is no need for any additional type annotations. 

While this is extremely useful, it falls quite a bit short of the type inference mechanisms available in other languages such as Haskell and ML.  Many people have pointed out that Scala doesn’t go as far as it could in its inference, forcing the use of type annotations when the type could easily be inferred by the compiler.  To understand this claim, it’s necessary to consider an example from a language which does have such universal type inference.  Consider the following function:

fun sum nil init = init 
 | sum (hd::tail) init = hd + (sum tail init)

For those of you not familiar with ML or its derivatives, this is a simple curried function which traverses a given list, adding the values together with an initial value given by init.  Obviously, it’s a contrived example since we could easily accomplish the above using a fold, but bear with me. 

The function has the following type signature: int list -> int -> int.  That is to say, the ML compiler is able to infer the only possible type which satisfies this function.  At no point do we actually annotate any specific nominal type.  Clarity aside, this seems like a pretty nifty language feature.  The ML compiler actually considers the whole function when inferring types, so its inferences can be that much more comprehensive.  Scala of course has type inference, but it is far less universal.  Consider the equivalent function to the above:

def sum(list:List[Int])(init:Int):Int = list match {
    case hd::tail => hd + sum(tail)(init)
    case Nil => init
}

Obviously, this is a lot more verbose than the ML example.  Scala’s type inference mechanism is local only, which means that it considers the inference on an expression-by-expression basis.  Thus, the compiler has no way of inferring the type of the list parameter since it doesn’t consider the full context of the method.  Note that the compiler really couldn’t say much about this method even if it did have universal type inference because (unlike ML) Scala allows overloaded methods and operators.
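As an aside, the fold mentioned earlier shows Scala's local inference at its best: the parameter and result types must still be annotated, but inside the expression the types of the anonymous function are inferred from the list's element type. A minimal sketch (the name sumFold is mine, not from the original example):

```scala
// Fold-based version of sum. The method signature is annotated as
// before, but within foldLeft the type of (_ + _) is inferred
// locally from List[Int] and the Int accumulator.
def sumFold(list: List[Int])(init: Int): Int =
  list.foldLeft(init)(_ + _)
```

For example, sumFold(List(1, 2, 3))(10) evaluates to 16.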

Because of the relative verbosity of the Scala example when compared to ML, it’s tempting to claim that ML’s type inference is superior.  But while ML’s type inference may be more powerful than Scala’s, I think it is simultaneously more dangerous and less useful.  Let’s assume that I was trying to write the sum function from above, but I accidentally swapped the + operator for a :: on the second line:

fun sum nil init = init 
 | sum (hd::tail) init = hd :: (sum tail init)

The ML compiler is perfectly happy with this function.  Such a small distinction between the two, but in the case of the second (incorrect) function, the type signature will be inferred as the following: 'a list -> 'a list -> 'a list (the 'a notation is a type variable, equivalent to the mathematical α).  We’ve gone from a curried function taking an int list and an int and returning an int value to a function taking two list values with arbitrary element types and returning a new list with the same type; and all we did was change two characters.

By contrast, the same mistake in Scala will lead to a compiler error:

def sum(list:List[Int])(init:Int):Int = list match {
    case hd::tail => hd :: sum(tail)(init)    // error
    case Nil => init
}

The error given will be due to the fact that the right-associative cons operator (::) is not defined for type Int.  In short, the compiler has figured out that we screwed up somewhere and throws an error.  This is very important, especially for more complicated functions.  I can’t tell you how many times I’ve sat and stared at a relatively short function in ML for literally hours before figuring out that the problem was in a simple typo, confusing the type inference.  Of course, ML does allow type annotations just like Scala, but it’s considered to be better practice to just allow the compiler to infer things for itself.

ML’s type inference ensures consistency, while Scala’s type inference ensures correctness.  Obviously, “correctness” is a word which gets bandied about with horrific flippancy, but I think in this case it’s merited.  The only thing that the ML type inference will guarantee is that all of your types match.  It will look through your function and ensure that everything is internally consistent.  Since both the correct and incorrect versions of our sum function are consistent, ML is fine with the result.  Scala on the other hand is more restrictive, which leads to better assurance at compile-time that what you just did was the “right thing”.  I would argue that it’s part of the responsibility of the type checker to catch typos such as the one in the example, but languages with universal type inference just can’t do this.
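The consistency-vs-correctness distinction can be seen inside Scala itself: when the result type is left to inference (possible here only for a non-recursive formulation), the same cons-for-plus typo compiles cleanly, and inference quietly assigns a list type rather than reporting an error. A hedged sketch with names of my own invention; note that since Scala still requires parameter annotations, init's type has to be adjusted by hand in the typo version, so only the result type is inferred without complaint:

```scala
// With the correct operator, the inferred result type is Int.
def sumOk(list: List[Int], init: Int) =
  list.foldRight(init)(_ + _)

// With the typo (:: instead of +), nothing fails to compile: the
// fold simply rebuilds the list, and the inferred result type
// silently becomes List[Int]. The error only surfaces later, at
// some call site that expected an Int.
def sumTypo(list: List[Int], init: List[Int]) =
  list.foldRight(init)(_ :: _)
```

Here sumOk(List(1, 2, 3), 0) yields 6, while sumTypo(List(1, 2, 3), Nil) happily yields List(1, 2, 3) — a perfectly consistent, but presumably unintended, result.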

Type inference can be an incredibly nice feature, and it is very tempting to just assume that “more is better” and jump whole-heartedly into languages like ML.  The problem is languages which have such universal type inference don’t provide the same safety that languages with local or no type inference can provide.  It’s just too easy to make a mistake and then not realize it until the compiler throws some (apparently) unrelated error in a completely different section of the code.  In some sense, type inference weakens the type system by no longer providing the same assurance about a block of code.  It’s important to realize where to draw the line as a language designer; to realize how far is too far, and when to step back.


  1. I don’t buy it. The first time you tried to use the result of a call to sum as an int, the ML compiler would complain just as loudly as the Scala compiler. Presumably, you wrote it to use it, so you’ll use it sooner rather than later. (And if you’re doing it right, you’ll probably notice instantly after you write it, since you’ll run it in the interpreter and notice that A List is Not An Int.)

    In sum, you claim that “languages with universal type inference just can’t [catch typos],” but I don’t find this to be true in my experiences with Haskell.

    Daniel Wagner Wednesday, April 9, 2008 at 1:11 am
  2. Could you write a type signature for the ML function if you wanted? It is possible in Haskell and it is common practice to write type signatures for top level functions. However it is nice not to have to write them for local function definitions.

    tibbe Wednesday, April 9, 2008 at 1:14 am
  3. I am not quite sure I buy your argument. I have also come across ML code that was illegible because the author picked single-letter identifier names and left out all type annotations. But the fact is that one can write type annotations in ML. So then it becomes a matter of policy vs enforcement and one can always argue endlessly about that.

    The reason Scala does not have Hindley/Milner type inference is that it is very difficult to combine with features such as overloading (the ad-hoc variant, not type classes), record selection, and subtyping. I’m not saying impossible — there exist a number of extensions that incorporate these features; in fact I have been guilty of some of them myself. I’m just saying it’s very difficult to make this work well in practice, where one needs to have small type expressions, and good error messages. It’s not a shut case either — many researchers are working on pushing the boundaries here (look for instance at Remy’s MLF). But right now it is a tradeoff of better type inferencing vs better support for these features. You can make the tradeoff both ways. The fact that we wanted to integrate with Java tipped the scales in favor of subtyping and away from Hindley/Milner.

    Martin Odersky Wednesday, April 9, 2008 at 2:08 am
  4. This is an unfair comparison. In Haskell or ML you can annotate your top level functions. And indeed, often this is done, mostly for documentation purposes as well as localizing type-errors more quickly.

    You annotated the top-level of Scala but did not do so with the ML version. You concretely gave more information to the Scala compiler than the ML compiler.

    In Haskell, since this is the language I’m currently most familiar with, I would annotate it as follows. Note that typically accumulator-based recursion is not used in Haskell, but I will do so now to keep the changes minimal. Note also that this function works for any number type, not just ints, thereby making it more flexible than the Scala version. I renamed the function as the function ’sum’ already exists.

    mysum :: (Num a) => [a] -> a -> a
    mysum [] init = init
    mysum (x:xs) init = x + (mysum xs init)

    If we now introduce the error you introduced:
    mysum :: (Num a) => [a] -> a -> a
    mysum [] init = init
    mysum (x:xs) init = x:(mysum xs init)

    You will now get a compiler error on the third line.

    Christophe Poucet Wednesday, April 9, 2008 at 3:09 am
  5. I think a big problem in Scala would be subtyping.

    Imagine you’d write a (stupid) function to concatenate two lists:

    def concat(xs, ys) = xs match {
    case Nil => ys
    case hd::tl => hd::concat(tl, ys)
    }

    What’s the type of the function? Is it

    def concat[T](xs:List[T], ys:List[T]):List[T] = xs match {
    case Nil => ys
    case hd::tl => hd::concat(tl, ys)
    }

    Without subtyping this might be the right type, but with subtyping you might prefer something like this:

    def concat[T, U >: T](xs:List[T], ys:List[U]):List[U] = xs match {
    case Nil => ys
    case hd::tl => hd::concat(tl, ys)
    }

    This might be the right type, but the type inferencer could as well infer this:

    def concat[T, U >: T, V >: U](xs:List[T], ys:List[U]):List[V] = xs match {
    case Nil => ys
    case hd::tl => hd::concat(tl, ys)
    }

    This is a valid type for the function, but IMO has no advantage over the previous one. But perhaps this is the type that will be inferred. And now imagine a more complicated function than this very simple one. Especially in combination with higher order functions you might get incomprehensibly complex types.

    matthiasB Wednesday, April 9, 2008 at 4:04 am
  6. Your example is terribly contrived. The job of the type system is to check that types are consistent. It’s exceedingly unlikely that your function would exist in a vacuum where its type would never be composed with some expression that uses it. A simple property like (sum [0] 0 == 0) would suffice. And besides, nothing stops you from putting a type declaration for the ML function.

    Careful about throwing out terms like “Bad Thing”. Bad for whom, and to what end? A perfectly valid reply to your subject line would be: “No it isn’t.” It’s actually quite useful.

    Runar Wednesday, April 9, 2008 at 6:39 am
  7. Stupid HTML that messed up all the <s

    matthiasB Wednesday, April 9, 2008 at 8:57 am
  8. The example is pretty contrived, but I’ve written ML code in the past which has had similar (but far more complex) issues caused by over-aggressive type inference. The problem is that such a universal type inference system often separates the error from the root cause, leading to confusion down the line.

    The point has been raised that ML (and Haskell) allows you to explicitly annotate types in the function declaration. So my example could be redeclared like this:

    fun sum nil init:int = init
    | sum (hd::tail) init = hd + (sum tail init)

    This simple change is sufficient to avoid all of the issues with cons vs plus in the typo. However, this is not really type inference anymore. Certainly the compiler is inferring a number of types in this expression, but it is no longer inferring *everything*. In both my ML and my Scala examples, I used the *least* type annotations possible to still allow things to compile. Thus, in both examples I relied as heavily as possible on type inference in the respective languages. By annotating a type, you’re effectively overriding the type inference and giving it explicit information to draw on.

    I suppose type annotations in ML really are a matter of policy enforcement as much as any other convention is. Hindley/Milner is a very powerful form of type inference which allows a lot of bad code (like my contrived example). It is a bit of a subjective thing, but I tend to believe that a language should not be *too* enabling. It’s like static vs dynamic typing. Dynamic type systems are inarguably more flexible due to the removal of static constraints, but are they really better in a real-world application? You can certainly make policies that stipulate documentation and “good design” as it relates to potential abuse of the dynamic type system (e.g. runaway duck typing), but the potential still remains for something very bad to happen. ML allows many dubious practices (such as not annotating function params etc), and when the compiler doesn’t enforce something, lazy programmers tend to take advantage of the loophole.

    As a matter of record, ML doesn’t require you to type everything into the interpreter. There are tools such as Compilation Manager which allow the compilation of such ML functions without actually spitting out the type signature at compile-time. It is quite conceivable that a developer could write a function like my example, compile it using a tool such as CM, and never realize the issue until some point later in the code when he attempted to use the function. Once again, the error is separated from the cause.


    I’m personally quite pleased with Scala’s type inference and really its type system in general. There are certainly times when I *wish* it had more universal type inferencing, but in general I prefer it just the way it is. The notable exception of course being recursive methods. :-)

    As to MLF, I was actually pointed that direction shortly after I wrote this article by a professor in type theory with whom I correspond. Remy’s premise seems to be very similar to the point I was trying to make in this article, except he did more than just point out the problem and actually developed a complete language extension to fix the “problem”.

    Daniel Spiewak Wednesday, April 9, 2008 at 12:28 pm
  9. I can’t agree with this at all. As soon as you try to _call_ your ML function, you’ll get an error message telling you that it doesn’t have the type you think it does. If you’ve messed up the calling function too, maybe your problem will be a bit harder, but it’s still just a matter of learning when to query the type of something while debugging. And learning where explicit type declarations make sense – at fixed interface boundaries. It’s more than worth it to avoid the duplication and change-friction of defining your internal types too early.

    Greg M Wednesday, April 9, 2008 at 8:47 pm
  10. I also think that the example is contrived, but mainly because you presuppose that the programmer never looks at the inferred type to see if it makes any sense.

    My larger objection is to your assertion that partial annotation somehow isn’t “real” type inference. This seems silly. The purpose of type inference isn’t to prevent you from writing types. The purpose is to help get the types cross-checked and confirmed. How much of the automation you choose to use is completely up to you. None of this absolves you from looking at the answers.

    Annotations are not a panacea either — and particularly not in a language having implicit coercions. I cannot tell you how many bugs in the SVR4 kernel implementation were due to disagreements between two structures concerning which integer size to use. All the annotations were there, but the errors were not caught automatically. Needless to say these were not easy to track down.

    I suppose the thing to say here is that all languages have shortcomings. The test of ML, in part, is that the types of errors you decry are rarely observed in the wild.

    Jonathan Shapiro Wednesday, November 5, 2008 at 11:56 pm
  11. I also disagree. This is contrived because you never use the function (which would catch the error). However, I agree that you should normally put on type annotations. The nice thing about making them optional is that it allows you to annotate application/library code for documentation and error checking, while still allowing you to quickly write small scripts without annotations (giving it a dynamic language like feel).

    Zifre Wednesday, July 15, 2009 at 4:28 pm
  12. @Zifre

    The real problem, which I didn’t directly address in the article, is the fact that Hindley-Milner-style type inference removes the type error from its origin, sometimes quite far from the origin. As you say, we would immediately see the problem with my contrived example the moment we tried to use the function. However, who’s to say that we’ll be able to easily track down what’s going on? For example, suppose that we used the function in the following way:

    fun contrived nil n = n + 42
      | contrived (_::_) n = sum [1, 2, 3] n

    This produces the following error:

    stdIn:1.5-5.41 Error: types of rules don't agree [literal]
      earlier rule(s): 'Z list * int -> int
      this rule: 'Z list * int list -> int list
      in rule:
        (_ :: _,n) => (sum (1 :: <exp> :: <exp>)) n

    Sure, we’re finding the problem eventually, but I dare you to track that one down without squinting hard, particularly if the definition of `sum` is in a different module.

    This example *is* contrived. However, I have run into very similar issues when trying to write real code, and that’s what bothers me. If people aren’t religious about annotating their types — and, unlike Haskell, ML doesn’t make a point of encouraging this — it is very easy to dig yourself into a very confusing and hard-to-trace error just by virtue of repeated false inferences.

    Daniel Spiewak Wednesday, July 15, 2009 at 5:57 pm
  13. OCaml has a much better type inference than ML, because, among others, it uses the operator +. -. *. and /. for floats as opposed to + – * and / for integers.

    And of course, you can add the types explicitly anywhere to make finding errors easier.

    obiwankenobi Saturday, December 12, 2009 at 2:01 pm
  14. Unfortunately, the realm of programming errors introduced by the problems inherent in subtyping in the Java class paradigm which Scala has adopted is much more intractable than the potential problems with ML type inference that you have raised.

    Nate Wednesday, August 31, 2011 at 12:11 pm
  15. I believe you mean the problem is with algorithms W and M, not the Hindley-Milner type system. There have been attempts to make the error messages closer to the origin. I believe an example is in Generalizing Hindley-Milner Type Inference Algorithms.

    Andy Thursday, December 8, 2011 at 3:18 pm
  16. @daniel

    One of the things that Swaroop Sridhar worked on pretty carefully in the BitC compiler was reporting the cause of type errors comprehensibly. The root problem isn’t W or M per se; it’s the impact of unification. When unification is used, you often get two types arriving from two different places that fail to unify in some third place. In order to understand what happened, the compiler needs to tell you a story about how the types got where they are and what happened along the way. Preserving the necessary information to do this isn’t at all hard. The problem, in my view, is that research compiler authors often can’t be bothered to deal with such mundane issues.

    Jonathan Shapiro Tuesday, December 27, 2011 at 2:00 am
