
Is Scala Not “Functional Enough”?


In one of Rich Hickey’s excellent presentations introducing Clojure, he mentions in passing that Scala “isn’t really a functional language”.  He says that Java and Scala are cut from the same mold, and that because Scala doesn’t force immutability, it really shouldn’t qualify.  This viewpoint is something I’ve been hearing a lot from various sources: people talking about how F# is really the only mainstream functional language, or how once Erlang takes off it will leave Scala in the dust.

When I first heard this sentiment voiced by Rich, I brushed it off as a little odd and only slightly self-serving (after all, if you don’t use Scala, there’s a better chance you will use Clojure).  Rich has his own opinions about a lot of things, but I have found with most of them that I can still understand his rationale, even if I don’t agree.  So, realizing that many of his other kooky ideas seemed to have some basis in reality, I decided to come back to his opinion on Scala and give it some deeper consideration.

The core of the argument made by Rich (and others) against Scala as a functional language goes something like this:

  • Mutable variables as first-class citizens of the language
  • Uncontrolled side-effects (ties in with the first point)
  • Mutable collections and other imperative libraries exist on equal footing
  • Object-oriented structures (class inheritance, overloading, etc)
  • Verbosity

Comparative Type Inference

If you’re coming from Java-land, the final point may have caught you a bit by surprise.  After all, Scala is vastly more concise than Java, so how could anyone possibly claim that it is “too verbose”?  Well, to answer that question, you have to compare Scala with the other side of the language jungle: the functional languages.  Here’s an explicitly-recursive function which sums a list of integers:

def sum(ls: List[Int]): Int = ls match {
  case hd :: tail => hd + sum(tail)
  case Nil => 0
}

That’s not too bad.  The use of pattern matching eliminates an entire class of runtime errors (selecting a non-existent element) and makes the code a lot cleaner than the equivalent Java.  However, compare this with the same function ported directly to SML (a functional language):

fun sum nil = 0
  | sum (hd :: tail) = hd + sum tail

One thing you’ll notice here is the complete lack of any type annotations.  Like most static functional languages, ML (and derivatives) has a form of type inference called “Hindley-Milner” (sometimes called “global type inference”).  Rather than just looking at a single expression to infer a type (like Scala), Hindley-Milner looks at the entire function and derives the most general (least restrictive) type which satisfies all expressions.  This means that everything can be statically type-checked with almost no need to declare types explicitly.
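To see the same limitation from Scala’s side, here is a small illustrative sketch of my own (not from the SML comparison): Scala infers types outward from individual expressions, so a function literal with no expected type needs its parameter annotated.

```scala
// A standalone function literal: Scala's local inference has no
// expected type to work from, so the parameter must be annotated.
val double = (x: Int) => x * 2

// With an expected type in scope, the annotation disappears: the
// element type Int flows from List[Int] into the function literal.
val doubled = List(1, 2, 3).map(_ * 2)
```

Hindley-Milner, by contrast, would unify the constraints across the whole definition and never need the annotation in either position.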

“Now, wait!” (you say), “You would never write a function just to sum a list; you should be using a fold.”  That’s true.  So let’s see how well these two languages do when the problem is solved in a more realistic fashion.  Once again, Scala first:

def sum(ls: List[Int]) = ls.foldLeft(0) { _ + _ }

Let’s see ML top that!

fun sum ls = foldl (op+) 0 ls

Then again, maybe we’ll just quit while we’re behind…

The fact is that Scala requires significantly more ceremony to accomplish some things which are trivial in pure-functional languages like ML and Haskell.  So while Scala may be a huge step up from Java and C++, it’s still a far cry from being the quickest and most readable way of expressing things in a functional style.

One obvious solution to this would be to just add Hindley-Milner type inference to Scala.  Well, this may be the “obvious” solution, but it doesn’t work.  Scala has an extremely powerful and complex type system, one with a number of properties which Hindley-Milner just can’t handle.  A full object-oriented inheritance hierarchy causes some serious problems with the “most general” inference of Hindley-Milner: just about everything becomes type Any (or close to it).  Also, method overloading can lead to ambiguities in the inferred types.  This is actually a problem even in the venerable Haskell, which imposes hard limitations on what functions can be in scope at any given point in time (so as to avoid two functions with the same name).

Simply put, Scala’s design forbids any type inference (that I know of) more sophisticated than local expression-level.  Don’t get me wrong, it’s still better than nothing, but a language with local type inference alone will never be as generally concise as a language with Hindley-Milner.

Side Effects

One big ticket item in the litany of complaints against Scala is the admission of uncontrolled side effects.  It’s not hard to find an example which demonstrates this property:

val name = readLine
println("Hello, " + name)

This example alone shows how fundamental side-effects are within the Scala language.  All we have done here is made two function calls, one of them passing a String and receiving nothing as a result.  From a mathematical standpoint, this code snippet is virtually a no-op.  However, we all know that the println function has an additional side effect which involves sending text to standard out.  Coming from Java, this makes perfect sense and it’s probably hard to see why this would be considered a problem.  However, coming from Haskell, what we just wrote was a complete abomination.

You see, Haskell says that no function should ever have side effects unless they are explicitly declared using a special type constructor.  In fact, this is one of the areas where monads have had a huge impact on Haskell’s design.  Consider the following Haskell equivalent:

main :: IO ()
main = do
         name <- getLine
         putStrLn ("Hello, " ++ name)

Even if you don’t know Haskell, the above should be pretty readable.  The first line is the type declaration for the main “function” (it’s actually a value, but why quibble).  Haskell does have Hindley-Milner type inference, but I wanted to be extra-explicit.  You’ll notice that main is not of type void or Unit or anything similar, it is actually of type IO parameterized with Haskell’s form of Unit: ().  This is an extremely important point: IO is a monad which represents an action with side-effects returning a value which matches its type parameter (in this case, ()).  The little dance we perform using do-notation is just a bit of syntax sugar allowing us to compose two other IO values together in a specific order.  The getLine “function” is of type IO String, meaning that it somehow reads a String value by using side effects (in this case, reading from standard in).  Similarly, putStrLn is a function of type String -> IO ().  This means that it takes a String as a parameter and uses it to perform some side effects, from which it obtains no result value.  The do-notation takes these two monadic values and composes them together, forming one big value of type IO ().

Now, this may seem horribly over-complicated, especially when compared to the nice clean side effects that we have in Scala, but it’s actually quite mathematically elegant.  You see, the IO monad is how we represent actions with side effects.  In fact, the only (safe) way to have side effects in Haskell is to wrap them up inside monad instantiations like these.  Haskell’s type system allows you to actually identify and control side effects so that they remain contained within discrete sections of your code base.

This may not sound so compelling, but remember that functional programming is all about eliminating side effects.  You compute your result, you don’t just accidentally find yourself with a magic value at the end of a long run.  The ability to work with side effects as packaged values just like any other constant is extremely powerful.  More importantly, it is far closer to the “true” definition of functional programming than what we have in Scala.
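For the curious, the same idea can be sketched in Scala with a toy IO type (my own illustration, not a standard library class): side effects become inert values which do nothing until someone explicitly runs them.

```scala
// A toy IO value: wraps a side-effecting computation without running it.
final case class IO[A](unsafeRun: () => A) {
  def map[B](f: A => B): IO[B] = IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
}

// Composing IO values merely *describes* the program; no console
// interaction happens while this val is being constructed.
val program: IO[Unit] =
  IO(() => scala.io.StdIn.readLine()).flatMap { name =>
    IO(() => println("Hello, " + name))
  }

// Only an explicit program.unsafeRun() would actually perform the effects.
```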


I hate to say it, but Rich Hickey and the others are quite right: Scala isn’t a terribly functional language.  Variables, mutable data structures, side effects and constant type declarations all seem to conspire to remove that crown from Scala’s proverbial brow.  But let’s not forget one thing: Scala wasn’t designed to be a functional language.

That may sound like heresy, but it’s true.  Scala was created primarily as an experiment in language design, specifically focusing on type systems.  This is the one area where I think Scala excels far beyond the rest of the field.  Scala makes it possible to model many problems in an abstract way and then leverage the type system to prove correctness at compile time.  This approach is both revolutionary and an extremely natural way to solve problems.  The experience of using the type system in this fashion is a little difficult to describe (I’m still on the lookout for good examples), but trust me, you’ll like it when you see it.

Scala’s not really a functional language, and as Cedric Beust has pointed out, it’s not really the best object-oriented language either; so what is it good for?  Scala sits in a strange middle ground between the two worlds of functional and object-oriented programming.  While this does have some disadvantages like being forced to take second place in terms of type inference, it also lets you do some really interesting stuff like build a mutable ListBuffer with constant time conversion to an immutable List, or sometimes recognize the fact that fold is not the universal solution.  It’s an experiment to be sure, but one which I think has yielded some very powerful, very useful results…just not many of a purely functional nature.
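To make the ListBuffer point concrete, here is a quick sketch of the hand-off between the mutable and immutable worlds:

```scala
import scala.collection.mutable.ListBuffer

// Build the list imperatively...
val buf = new ListBuffer[Int]
for (i <- 1 to 4) buf += i

// ...then freeze it.  toList hands over the buffer's internal cons
// cells rather than copying, so the conversion is effectively
// constant time; mutating the buffer afterward forces a copy.
val xs: List[Int] = buf.toList
```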


  1. Calling Scala not a functional language and F# a functional language is a bit weird.

    In particular, F# supports every single feature of Scala that could make it not that functional – object oriented features, mutable variables, mutable collections, side effects…

    Verbosity is a problem with Scala, but irrelevant to whether you consider it to be a functional language.

    The big thing Scala is lacking to be a decent functional language is full blown tail call elimination. How’s clojure doing on that? ;-)

    Of course, given that no two programmers can agree on what “Functional language” means, the question is a bit silly. But even if they could, ultimately this isn’t about ticking boxes. “Does language X support paradigm Y?” is less interesting than “Is language X good for programming in?”

    David R. MacIver Monday, October 20, 2008 at 12:58 am
  2. I agree with David. Type inference is hardly a requirement for a functional language (it would for example rule out all dynamically typed functional languages), more like a nice thing to have. Mutable state and side effects are not controlled in most functional programming languages (Haskell is an exception). Functional programming comes down to two things: higher-order functions and tail call elimination (or at least a way to emulate it). IMO Scala fulfills the requirements reasonably well.

    Jesper Nordenberg Monday, October 20, 2008 at 2:05 am
“and as Cedric Beust has pointed out, it’s not really the best object-oriented language either”

    You might want to take a look at the comments that article generated. Seems that a lot of knowledgeable people disagree strongly with him.

    Quintesse Monday, October 20, 2008 at 2:13 am
Clojure has some problems Haskell hasn’t. Clojure has STM as its big selling point. But you can perform side effects inside a transaction. If the transaction has to be restarted, the side effect will be performed many times. Haskell prevents you from doing such things.

    Matthias B Monday, October 20, 2008 at 2:22 am
  5. ML isn’t purely functional. Haskell is, though.

    Sebastian Monday, October 20, 2008 at 3:35 am
Most functional programmers probably wouldn’t bother naming the last argument to foldl:

    sum = foldl (+) 0

    While this is nice, I don’t think that conciseness is a very compelling argument for the ML-style languages. IMO the expressive and rigorous type system and clean semantics are more important.

    bdg Monday, October 20, 2008 at 4:03 am
  7. A few notes.

    SML is not a “pure-functional language”. SML is a strict impure language with mutable ref cells, mutable arrays, exception handling, and unrestricted side-effects. However, SML supports functional programming via higher-order functions, immutable bindings, and immutable data types (lists, strings, vectors, tuples, records, algebraic datatypes, …).

    There is no need to eta-expand the sum function in SML. You can just write:

    val sum = foldl op + 0

Saying that Hindley-Milner style inference would lead just about everything to become “Any” in OO-style programming is somewhat misleading. First of all, there is no exact analog of the “Any” or “Object” type that you see in many OO languages (what you get in H-M is type schemes with type variables). Also, while it is quite true that basic Hindley-Milner doesn’t directly support subtyping or inheritance, there are ways to emulate those within a Hindley-Milner style type system. MLton’s wiki contains some pointers to relevant papers describing techniques for achieving dynamic dispatch, subtyping, and/or inheritance in SML.

    Vesa Karvonen Monday, October 20, 2008 at 4:16 am
  8. In my mind, Scala was not so much an experiment in type systems design, as an attempt to come up with a decent statically typed language that is both functional and object-oriented and that interoperates on standard VMs. It was very much to my surprise that it turned out that existing type systems were not good enough for this task and that something new was needed. As to combining Hindley/Milner and subtyping: In principle it’s possible, but in practice it does not work so well. The problem is not so much that you end up too often with Any, but that inferred types become impractically large. That said, there might be other approaches than local type inference to it. F#’s approach looks interesting, but I have not yet seen a precise description of how it works.

    If you try to achieve a fusion of object-oriented and functional, you sure get some objections from both camps, that’s inevitable. Interestingly, both sets of objections criticise the presence of some language feature. Cedric Beust does not like pattern matching, and Jason Hickey does not like side-effects.

    We have been arguing in our ECOOP 2007 paper “Matching Objects with Patterns” how Scala’s pattern matching satisfies the criteria of good OO programming. As to side-effects, I agree that they are a double-edged sword. I believe the right way to tame them is through an effect system. Let’s see what we can come up with.

    Martin Odersky Monday, October 20, 2008 at 4:56 am
  9. Martin,

    Thanks for your comments as always. I have been a fan of your work since the early days of pizza, and as always you express your comments in such a gentlemanly way. Is work underway on an effects system for Scala? If so, could you share a bit about it?

    uman Monday, October 20, 2008 at 6:10 am
  10. Clojure allows just as much side effecting as Scala with mutable references and uncontrolled I/O. The main thing Clojure brings to the table is that all its various mutable reference types have some notion of thread awareness. But the core truth is this: the Clojure language is just as deeply impure as the Scala language.

    Now, he’s right that Clojure’s library is more functional. Its data structures are mostly immutable while Scala has a fair number of imperative structures. But there’s nothing (other than taste) stopping me from writing mutable structures in Clojure.

Same deal with most of the ML family (including F#) and most of the Lisp family, and in fact the vast majority of languages that people have ever called functional.

    While I’m at it, Erlang is another impure functional language with uncontrolled I/O and a mutable “process dictionary” which allows functions to depend on mutable state.

    Even GHC Haskell has unsafePerformIO and friends plus a fast-and-loose attitude towards FFI (for good reason).

    Can we declare a truce? Impure functional languages bashing each other about who is more pure is stupid when 99% of the programming world doesn’t even know it’s possible to program without uncontrolled side effects or why you might want to. Scala is not Clojure’s enemy, nor vice versa. Not yet.

    James Iry Monday, October 20, 2008 at 8:35 am
  11. I wrote:

    > Cedric Beust does not like pattern matching, and Jason Hickey does not like
    > side-effects.

    Sorry, I meant Rich Hickey, not Jason Hickey. Maybe Jason Hickey does not like side-effects too much either; that would make three of us ;-)

    Martin Odersky Monday, October 20, 2008 at 9:45 am
  12. Good comments, everyone. The general theme seems to be calling me on the carpet for not giving a reasonable definition for “functional programming”. In principle, I agree that this is a very nebulous attribute. The most generic definition would be some support for functions as values, but then languages like C/C++ and even Assembly would qualify. Personally, I tend to agree more with Erik Meijer when he says that the only functional programming languages are either Haskell or very close to it. Without the ability to control side effects using the type system, there’s almost no point in pretending to restrict them at all. :-) And yes, I know that this does rule out any dynamically typed functional languages.

    While it is true that you can create mutable data structures in Clojure, it’s not a terribly easy thing to do. You would either have to build your data structure using Java, or you would need to abuse the concept of thread-safe mutable refs. ML falls into the same camp. Generically, Erlang, ML and Clojure restrict mutable references as second-class citizens of the language. You have to go significantly out of your way to do things imperatively, and I suppose that makes them “more functional”.

    By contrast, Scala has both vals and vars with no syntactic distinction between their usage. In other words, it’s just as “easy” to dereference a variable as it is to dereference a value. While we may *conventionally* encourage the use of val over var, it is just that, an artificial convention. The language itself doesn’t restrict anything in this regard, making it really no more functional than C#. To be clear, I don’t really see this as a problem so much as an interesting property worthy of notice.
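    Concretely (a trivial sketch of my own):

    ```scala
    val a = 1
    var b = 1
    b += 1            // only the declaration reveals that b is mutable
    val total = a + b // at the use site, a and b are indistinguishable
    ```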

    The points about type inference were targeted at folks like Tony Morris who are *constantly* drawing negative comparisons between the type systems and syntax of Scala and Haskell. Nothing against Tony (or anyone else who has made this comparison), but it did seem worthy of address. It’s a valid point, saying that Scala is more verbose than languages like Haskell and ML. Does that verbosity *in isolation* make Scala any less of a functional language? Not really, but somehow the arguments are perpetually raised in conjunction to one another.

    As to Cedric’s (now infamous) post, I don’t particularly agree with it either. :-) I love the way that pattern matching in Scala manages to implement deconstruction without breaking encapsulation. However, if you think about the concept from the perspective of one of those wild eyed object-oriented nuts who use no more than two data members per class and no more than one level of nesting per method, it’s a little impure. Most people just use pattern matching through case classes, in which case it *is* breaking encapsulation in some respect. No more than getters and setters, but it’s still worth thinking about.


A truce it is. :-) As I have said on many occasions, the blending of functional and object-oriented paradigms has yielded some surprisingly powerful results. Scala may not be as pure as Haskell, but that’s not a detraction in my eyes.

    Daniel Spiewak Monday, October 20, 2008 at 1:10 pm
  13. Hi Daniel,
    Scala, F# and Clojure all have uncontrolled side-effects. I object to calling these languages “functional” for that reason, but I sometimes adopt the degenerate definition. In this case, all three languages above are as functional as each other. So I’m not sure why you single out Scala. Indeed, the implications of controlling side-effects are quite vast and mind-boggling, so I expect that Rich Hickey has made a mistake.

    With respect to your pattern matching and fold, your pattern matching is a right-fold running in O(n) stack. Indeed, if you look at the source code for foldRight and your source for sum, you will observe repetition. The fact that you can resort to a higher-order abstraction is exactly the point – the syntax comes a far away second place to the importance here. So while Scala’s syntax can be painful (believe me, I’ve pushed the language to its limits), this fact is not as important as that which occurs without side-effects – since otherwise, you wouldn’t have been able to perform expression substitution as you did (knowingly or not).

    The cited Cedric Beust article is nonsense and has been debunked over and over, so I’m not sure why you referenced it unless you want to debunk it again. I don’t intend to be so harsh on a particular individual, but these are the facts – Cedric is way out of his depth on that article.

    Finally, I’d like to see you support the claim that “Tony Morris who are *constantly* drawing negative comparisons between the type systems and syntax of Scala and Haskell”, since I submit that it is wildly false. Indeed, I write more Scala than Haskell in my job and I accept its limitations – I simply don’t trick myself into believing they don’t exist. Where do I constantly make these claims? I think you are exaggerating.

    Haskell, Scala, F#, Clojure, Whatever. These are *not* religions with members of one apostatising from cult to another. It is OK to say that one (or a subset of one) is more or less useful than another without attracting the wrath of offending the culture. Why not observe such a claim for its potential truth value instead?

    If I make a mistake in my logic, then it is my problem. If I have offended what you like or dislike, then that is your problem.

    Have a nice day :)

    Tony Morris Monday, October 20, 2008 at 3:22 pm
  14. @Tony

    :-) Perhaps “constantly drawing negative comparisons” was too strong of a phrase. I was referring to the many times on your blog where you have expressed frustration at having to annotate types under certain conditions in Scala (often mentioning that Haskell would not require it), or the times where you have complained about some restrictions in the expressiveness of Scala’s type system (again comparing to Haskell). I have yet to find a case where your complaints were *wrong* or even offensive, I was merely pointing out the fact that they happened. If anything, I’m *glad* that you raised these objections on your blog since it led me to question my own opinions about the expressiveness of Scala’s type system and just how far it could be pushed. You have neither offended me nor have you employed faulty logic.

    I am aware of the differences between right- and left-fold. I chose to ignore them in the article for the sake of cleanliness. While it is not at all difficult to write a left-fold by hand, it does require a bit more ceremony. Rather than obscure the point through endless syntax, I chose to silently switch between the two operations. The use of foldLeft/foldl in the reduced examples was part of my ongoing silent campaign to encourage the use of these functions over their non-tail-recursive analogues. There are comparatively few real-world algorithms which require foldRight over foldLeft, and the latter is far more efficient than the former.
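    To illustrate the distinction, here is a sketch of my own of hand-written left and right folds over a List:

    ```scala
    import scala.annotation.tailrec

    // Left fold: the recursive call is in tail position, so the
    // compiler turns it into a loop -- constant stack.
    @tailrec
    def foldl[A, B](z: B)(f: (B, A) => B)(ls: List[A]): B = ls match {
      case Nil      => z
      case hd :: tl => foldl(f(z, hd))(f)(tl)
    }

    // Right fold: the pending f(hd, ...) keeps one stack frame alive
    // per element, just like the pattern-matching sum in the article.
    def foldr[A, B](z: B)(f: (A, B) => B)(ls: List[A]): B = ls match {
      case Nil      => z
      case hd :: tl => f(hd, foldr(z)(f)(tl))
    }

    val left  = foldl(0)((acc: Int, x: Int) => acc + x)(List(1, 2, 3, 4))
    val right = foldr(0)((x: Int, acc: Int) => x + acc)(List(1, 2, 3, 4))
    ```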

    I cited Cedric merely for ironic contrast. The fact that some people are claiming that Scala isn’t functional enough while others are claiming that it isn’t object-oriented enough makes for an amusing dichotomy. Whether or not Cedric’s central premise was correct is beside the point.

    Daniel Spiewak Monday, October 20, 2008 at 3:35 pm
  15. If it’s a question of definition then the phrase “functional programming language” was first applied to Lisp, which has been impure from the beginning. If you want to redefine the phrase, then just say so at the top “I don’t think any languages are functional except Haskell and Clean and such – so neither Scala nor Clojure is functional.” I have yet to hear Erik try to redefine the phrase that way, although he might have. All I’ve been hearing is Erik trying to use the phrase “fundamentalist functional programming” to talk about Haskell style FP. Everybody else uses “pure functional programming.”

Also, you’re abusing the phrase “first class.” First class-ness has to do with what you can and can’t do with something at runtime – it’s a semantic concept, not a syntactic concept. Haskell functions are first class, but type classes are not. The former can be used as values, the latter are definitely not values. Nonetheless, type classes are really easy to do in Haskell. In Scala traits, classes and methods are not first class, but objects are. By that common meaning, ML’s mutable reference cells are more first class than Scala’s. You can’t pass Scala mutable reference cells as arguments, for instance, but you can in ML. Same deal with Clojure’s refs (though vars are different).

Now, I know what you’re trying to say, but it’s still wrong. For purposes of analyzing code, shared mutable variables are shared mutable variables. It doesn’t matter if you write “val x = ref 3” (ML) or “var x = 3” (Scala) or “(def x 3)/(let [x (ref 3)]” (Clojure). It doesn’t matter if you then write “x := 2” (ML), “x = 2” (Scala) or “(set! x 2)/(ref-set x 2)” (Clojure). Nor does it matter if you finish by writing “let y = !x in …” (ML), “val y = x” (Scala) or “(let [y x] …)/(let [y (deref x)] ..” (Clojure). Of course, with all the Clojure ref stuff, be sure to wrap in a dosync :-) . Also, I could do the same thing with Clojure agents, but I think the point isn’t changed too much.

It’s very easy to write mutable data structures in ML. Here’s a paper on a graph library with both imperative and immutable variants. ML has a rich tradition of preferring immutable structures but using imperative ones where needed. By the same token, I have no reason to believe that mutable data structures in Clojure would be significantly harder than they are in Scheme. The difference is that in Scheme all your data structures are mutable whether you want them to be or not.

    Finally, C does not have functions as values. C does have function pointers as values, but that’s a significantly weaker concept. It means you can reference existing functions, but you can’t create new functions at runtime (at least not without mallocing a block, punching machine code into it, and returning a pointer to that block – but that kind of behavior is undefined by ANSI C). To prove to yourself that functions aren’t values in C just try to write “compose” in C. Please no macro-fu, because if you do that then I’ll ask you to write a fix point combinator.

    James Iry Monday, October 20, 2008 at 3:36 pm
  16. @James

    I think your definition of “first-class” is too narrow. :-) While I would agree that it is a primarily semantic attribute, there is no reason not to overload the term with respect to syntax. For example: Scala has first-class list comprehensions.

I will certainly agree that from a high-level, language analysis standpoint, mutable refs are mutable refs; the syntax doesn’t particularly matter. However, as people have pointed out here (and elsewhere), sometimes the “feel” of a language is more important in defining it than its actual semantics. In both ML and Clojure, there is an obvious syntactic distinction of usage between mutable and immutable references. You can’t just switch between them willy-nilly; it takes some consideration and planning. This contrasts with Scala, which allows you to just `sed s/val/var/g` without a second thought.

    I would be interested to see an example of a simple mutable data structure in Clojure (say: a doubly-linked list). I’m almost positive that it will be quite different than the equivalent structure in Scheme. More importantly, it will be quite obvious at both declaration-point *and* use-point for the references that they are indeed mutable. In Scala, just looking at the code `42 + a` doesn’t tell you anything about `a` in terms of val vs var. I don’t think this is a bad thing, but it is worth noticing.

    Daniel Spiewak Monday, October 20, 2008 at 3:55 pm
  17. It’s not my definition:

    James Iry Monday, October 20, 2008 at 4:06 pm
  18. Well, I don’t like it, but I’ll accept it. :-) You know what I meant though when I said it, so I guess communication was accomplished.

    Daniel Spiewak Monday, October 20, 2008 at 4:20 pm
  19. For me, the real question would be: “Who cares whether Scala is a true functional language or not? Can you get the job done with it? In a better -easier to code and maintain- way than other FL or OOL?”

    I personally don’t care that Scala is not fully functional. It seems an interesting language with which I can work quite easily (disclaimer: I’m still learning it now). Its concepts are interesting and, coming from Java, I found it much easier to approach (and read existing sample code) than other FL.

    I think it was not really necessary to bring up such a debate (sorry to say so Daniel, I generally appreciate your blog but I find this post quite useless).

    Jean-Francois Poilpret Monday, October 20, 2008 at 5:15 pm
  20. Daniel, I didn’t build a complete answer to your challenge yet. I’m too lazy. Instead I thought I’d write a basis for Scheme style structures. As you know, the old school Lisp way of building data structures is to base everything on the cons pair.

    (defn rcons [car cdr]
      (list (ref car) (ref cdr)))

    Now, you need to be able to read back out of that pair and the classic lispy way to do that is with car and cdr

    (defn rcar [cons]
      (deref (first cons)))

    (defn rcdr [cons]
      (deref (first (pop cons))))

    But of course you need to be able to mutate a cons pair to make mutable structures, so you need set-car! and set-cdr!

    (defn rset-car! [cons new-car]
      (dosync (ref-set (first cons) new-car)))

    (defn rset-cdr! [cons new-cdr]
      (dosync (ref-set (first (pop cons)) new-cdr)))

    Let’s make sure this actually works
    user=> (def x (rcons 1 2))
    #'user/x
    user=> (rcar x)
    1
    user=> (rcdr x)
    2
    user=> (rset-car! x 42)
    42
    user=> (rset-cdr! x 76)
    76
    user=> (rcar x)
    42
    user=> (rcdr x)
    76

Ta-da. You can take some Scheme mutable data structure code, put an ‘r’ in front of each cons, car, cdr, set-car!, set-cdr! and go to town. The fact that it’s “transactional” is invisible to the client code and in fact rather useless as each transaction protects only a single mutable reference.

    Have fun.

    James Iry Monday, October 20, 2008 at 10:15 pm
  21. Hmm. That is pretty reasonable. My knowledge of Clojure is limited, but it looks at least in the ballpark of what I expected (from a Clojure implementation; obviously this is Scheme and thus, a little different). Is it possible that you were actually right all along? ;-)

    Out of curiosity, how often is this sort of thing done in Scheme? Uncontrolled mutable state is discouraged both in primitive syntax and by convention. Obviously you can side-step that a little bit when creating mutable data structures, but how often do people actually fight the language in this way?

    Daniel Spiewak Monday, October 20, 2008 at 10:50 pm
  22. @James:
Not that it’s important for this discussion, but it’s quite trivial to implement a compose function in C. Simply pass a context object as a void* together with your function pointer. This is a “function object” which combines functionality and data; basically it’s the same mechanism used in any functional programming language, where values in the context are automatically combined with the function code. There is no code created at runtime, only data. Function values and objects are the same concept :)

    Of course the C type system is not powerful enough to describe polymorphic functions, but that’s another issue.
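    A minimal sketch of the encoding I mean (all the names here are just for illustration): a “function object” is a code pointer paired with its context, and compose simply builds a new one whose context holds the two functions being composed.

    ```c
    #include <stdio.h>

    /* A "function object": a code pointer paired with a context pointer. */
    typedef struct closure {
        int (*code)(struct closure *self, int x);
        void *env;
    } closure;

    static int apply(closure *c, int x) {
        return c->code(c, x);
    }

    /* The context of a composition is just the two closures it wraps. */
    typedef struct { closure *f; closure *g; } compose_env;

    static int compose_code(closure *self, int x) {
        compose_env *env = self->env;
        return apply(env->f, apply(env->g, x));
    }

    /* No code is created at runtime, only this bit of data. */
    static closure compose(closure *f, closure *g, compose_env *env) {
        env->f = f;
        env->g = g;
        return (closure){ compose_code, env };
    }

    /* Two plain functions lifted into closures (they ignore their context). */
    static int dbl(closure *self, int x) { (void)self; return 2 * x; }
    static int inc(closure *self, int x) { (void)self; return x + 1; }

    int main(void) {
        closure d = { dbl, NULL };
        closure i = { inc, NULL };
        compose_env env;
        closure di = compose(&d, &i, &env);  /* dbl after inc */
        printf("%d\n", apply(&di, 20));      /* 2 * (20 + 1) = 42 */
        return 0;
    }
    ```

    The caveat is that compose here returns a closure value rather than a bare function pointer; making it hand back a plain funcPtr is a different (and much harder) problem.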

    Jesper Nordenberg Tuesday, October 21, 2008 at 1:16 am
  23. @Daniel,

    set!, set-car!, and set-cdr! are part of the official Scheme standard, so you’re not fighting the language to write mutable structures. In fact, all your structures are mutable whether you want them to be or not. However, by convention Schemers prefer to avoid mutation and will mostly treat structures immutably. Some in the Scheme community are aware of the challenges that immutability-by-convention brings, and the PLT Scheme guys have an experiment running where cons pairs are immutable.


    What you describe is the standard encoding of closures when compiling to C. I should have been more precise on my requirements on compose. Write a function that takes two function pointers and returns a pointer to the function that is the composition of the two. For simplicity we can assume that all the functions are int to int.

    We want

    typedef int (*funcPtr)(int);

    funcPtr compose(funcPtr f, funcPtr g) {
      /* ... */
    }

    Such that

    compose(f, g)(x) == f(g(x))

    And, for bonus points, it should be re-entrant.

    James Iry Tuesday, October 21, 2008 at 5:31 am
  24. James:

    Of course you can define such a thing.

    static funcPtr theFirstFunction;
    static funcPtr theSecondFunction;

    int theComposedFunction(int x) {
      return theFirstFunction(theSecondFunction(x));
    }

    funcPtr compose(funcPtr f, funcPtr g) {
      theFirstFunction = f;
      theSecondFunction = g;
      return theComposedFunction;
    }

    (Or something like that. I always forget the details of C)

    What do you mean “re-entrant”?

    David R. MacIver Tuesday, October 21, 2008 at 1:36 pm
  25. Good point! Re-entrancy is for academics, real programmers use global variables.
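    More seriously, here’s a quick sketch (with hypothetical dbl and inc helpers) of exactly what the globals cost you: the second call to compose silently rewires every composition handed out earlier.

    ```c
    #include <stdio.h>

    typedef int (*funcPtr)(int);

    /* The global-variable scheme: the composed pair lives in statics. */
    static funcPtr theFirstFunction;
    static funcPtr theSecondFunction;

    static int theComposedFunction(int x) {
        return theFirstFunction(theSecondFunction(x));
    }

    static funcPtr compose(funcPtr f, funcPtr g) {
        theFirstFunction = f;
        theSecondFunction = g;
        return theComposedFunction;
    }

    static int dbl(int x) { return 2 * x; }
    static int inc(int x) { return x + 1; }

    int main(void) {
        funcPtr a = compose(dbl, inc);  /* meant to be dbl(inc(x)) */
        printf("%d\n", a(10));          /* 22: dbl(inc(10)), as intended */

        funcPtr b = compose(inc, dbl);  /* clobbers the globals... */
        printf("%d\n", a(10));          /* 21: a now computes inc(dbl(10)) */
        (void)b;
        return 0;
    }
    ```

    Every funcPtr this compose returns is the same theComposedFunction, so only the most recent composition ever works, and nested or concurrent use is hopeless.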

    James Iry Tuesday, October 21, 2008 at 4:40 pm
  26. *LOL*
    Sorry, I couldn’t resist, but the last comment seemed subtly funny to me and also reminds me of the COBOL code I have to read from time to time…

    JavaNovice Wednesday, October 22, 2008 at 9:13 am
  27. @James

    I found this thread only recently, so sorry to chime in late.

    >The fact that it’s “transactional” is invisible to the client code and in fact rather useless as each transaction protects only a single mutable reference.

    The ‘transactionality’ of these atomic functions is not useless at all. If some other thread were using the same ref x, it could make transactional read-modify-write decisions without being corrupted by the actions of these functions. Also, while it’s not idiomatic to write such fine grained transactions, a set of calls to these functions could be placed in an enclosing transaction and would join it, thus allowing the composition of more coarse-grained transactions.

    Rich Hickey Friday, November 28, 2008 at 7:20 am
  28. Rich, I get the point you’re making about wrapping in a larger transaction. Fair enough.

    But I don’t see the point you’re making about the utility of transactionally protecting a single read or single write to a ref as compared to not using transactions at all. The JVM already guarantees atomicity of reads and writes to references.

    I could write the following in Java

    class Ref<T> {
      private T x;
      public Ref(T x) { this.x = x; }
      public T get() { return x; }
      public void set(T newX) { x = newX; }
    }

    If multiple threads bang against one instance of this I should get the same guarantee as multiple threads doing ultra-fine grained transactions in Clojure. The pointer would never be corrupted, but a thread that does two quick reads in succession or a write and then an immediate read would have no promise of seeing the same value twice.

    That’s not to bash Clojure or STM in general. It’s just the nature of transactions: they only give you as much consistency as you allow them to.

    James Iry Friday, November 28, 2008 at 7:56 am
  29. @James

    I understand the atomic consistency of references. The value isn’t for the threads doing the fine-grained reads and writes, it’s for the concurrent coarser-grained transactions manipulating those same refs, which are able to see a snapshot of the world for their duration strictly because the fine-grained activity also participates in the transaction system.

    Obviously, unconditional atomic overwrites of shared refs have limited utility in a concurrent program, but the above guarantee matters, especially when additional concurrent use is added later, as it so often is.

    Rich Hickey Friday, November 28, 2008 at 8:50 am
  30. Okay, I see. We’re talking about two different things. Let me amend. Ultra-fine grained transactions as shown in the rcar, rcdr, rset-car! and rset-cdr! examples are useless if that’s all you’re using – the system becomes in effect transactionless. But, as you point out, if you use more reasonable transaction granularity as well as the ultra-fine stuff then there is some utility.

    James Iry Friday, November 28, 2008 at 10:27 am
    From a practical perspective, Haskell isn’t a functional language: lazy evaluation (thunks) produces undesired side effects. Every value in standard Haskell is a *mutable* thunk.

    ArtemGr Friday, January 22, 2010 at 6:42 am
    ArtemGr, Haskell’s non-strict semantics are not a source of impurity. Barring unsafePerformIO and friends (which are a source of impurity) and divergence (which is arguably a source of impurity, depending on how you look at it), you cannot write a Haskell program that behaves differently depending on whether an expression has been evaluated or not.

    James Iry Friday, January 22, 2010 at 10:43 am
  33. James- I think you can indeed write a Haskell program that behaves differently depending on whether or not an expression has been evaluated. This is a problem with hGetContents: its behavior depends on whether the result is evaluated before or after the corresponding file handle is closed. So

    printsAll :: FilePath -> IO String
    printsAll f = do
      h <- openFile f ReadMode
      s <- hGetContents h
      putStr s   -- forces s while the handle is still open
      hClose h
      return s

    will print the entire contents of a file, but

    printsNothing :: FilePath -> IO String
    printsNothing f = do
      h <- openFile f ReadMode
      s <- hGetContents h
      hClose h   -- the handle is closed before s is forced
      putStr s   -- so this prints nothing
      return s

    doesn’t print anything. This is entirely because of the interaction between Haskell’s non-strict semantics and evaluation order. I think if you force the evaluation of s with deepSeq before closing the file, it does print the contents.

    John Wednesday, April 14, 2010 at 1:43 pm
    As a mathematician recently hooked on functional languages, I feel the real question is whether Scala has ACCIDENTAL side effects. I don’t see the point of complaints about programs merely being allowed to use features with side effects. One has the full power of functional programming if it is possible to write code (except perhaps IO) using only constructs without side effects.

    Siddhartha Gadgil Saturday, July 23, 2011 at 8:18 pm
