
When I see a language specification like this, I run for the hills. I am a firm believer in specifications and multiple implementations. Here's a case in point. I put together an example of implicit conversions for my upcoming “Scala for the Impatient” book.

object FractionConversions {
  implicit def int2Fraction(n: Int) = new Fraction(n, 1)
  implicit def fraction2Double(f: Fraction) = f.num * 1.0 / f.den
}

class Fraction(n: Int, d: Int) {
  val num: Int = if (d == 0) 1 else n / gcd(n, d)
  val den: Int = if (d == 0) 0 else d / gcd(n, d)
  private def gcd(a: Int, b: Int): Int = if (b == 0) a else gcd(b, a % b)
  override def toString = num + "/" + den
  def *(other: Fraction) = new Fraction(num * other.num, den * other.den)
  // other operators...
}

One always worries when there are too many conversions in and out of a particular type. If you translate the code above to C++, with the two conversions

Fraction::Fraction(int n);
Fraction::operator double() const;

you run into ambiguities that make the class essentially impossible to use.

Of course, in Scala, you can turn off unhelpful conversions, by only importing the ones you want:

import FractionConversions.int2Fraction

or excluding the ones you don't want:

import FractionConversions.{fraction2Double => _, _}
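Here is a small sketch of the hiding form at work (a stripped-down Fraction, just enough to be self-contained): with fraction2Double masked out, int2Fraction is still available, so f * 5 still works, while an expression that needs the hidden conversion no longer compiles.

```scala
import scala.language.implicitConversions

class Fraction(val num: Int, val den: Int) {
  def *(other: Fraction) = new Fraction(num * other.num, den * other.den)
  override def toString = num + "/" + den
}
object FractionConversions {
  implicit def int2Fraction(n: Int): Fraction = new Fraction(n, 1)
  implicit def fraction2Double(f: Fraction): Double = f.num * 1.0 / f.den
}

// Import everything except fraction2Double:
import FractionConversions.{fraction2Double => _, _}

val f = new Fraction(3, 4)
println(f * 5)       // still works, via int2Fraction
// val d: Double = f // would no longer compile: fraction2Double is hidden
```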

I also found reassurance in the following quote from the Odersky/Spoon/Venners book:

An implicit conversion is only inserted if there is no other possible conversion to insert. If the compiler has two options to fix x * y, say using either convert1(x) * y or convert2(x) * y, then it will report an error and refuse to choose between them. It would be possible to define some kind of “best match” rule that prefers some conversions over others. However, such choices lead to really obscure code. Imagine the compiler chooses convert2, but you are new to the file and are only aware of convert1—you could spend a lot of time thinking a different conversion had been applied!

That's great. And it is technically, literally true. But that doesn't mean you won't ever spend a lot of time wondering which conversion has been applied. Look at this:

val f = new Fraction(3, 4)
f * 5

What is it?

f * int2Fraction(5)

or

fraction2Double(f) * 5

It could be either, right? So it's ambiguous. So it won't compile, right? Except it does.

The compiler sees a * method applied to a Fraction. The parameter type is wrong, but it can be patched up, so that's what it does: f * int2Fraction(5).

Now look at the opposite:

5 * f

The compiler sees a * method applied to an Int. The parameter type is wrong, but it can be patched up, so that's what it does: 5 * fraction2Double(f).
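For the record, here is the whole example assembled into one runnable Scala 2 script (Scala 3 spells implicit conversions differently), with the two calls at the end:

```scala
import scala.language.implicitConversions

class Fraction(n: Int, d: Int) {
  val num: Int = if (d == 0) 1 else n / gcd(n, d)
  val den: Int = if (d == 0) 0 else d / gcd(n, d)
  private def gcd(a: Int, b: Int): Int = if (b == 0) a else gcd(b, a % b)
  override def toString = num + "/" + den
  def *(other: Fraction) = new Fraction(num * other.num, den * other.den)
}

object FractionConversions {
  implicit def int2Fraction(n: Int): Fraction = new Fraction(n, 1)
  implicit def fraction2Double(f: Fraction): Double = f.num * 1.0 / f.den
}
import FractionConversions._

val f = new Fraction(3, 4)
println(f * 5) // f * int2Fraction(5): prints 15/4
println(5 * f) // 5 * fraction2Double(f): prints 3.75
```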

How can I make it ambiguous? Like this, surely:

def mul(a: Double, b: Double) = a * b
def mul(a: Fraction, b: Fraction) = a * b

Nope. mul(f, 5) yields a Fraction(15, 4) without a murmur. Huh? Aren't there two possible conversions, mul(fraction2Double(f), 5.toDouble) and mul(f, int2Fraction(5))? Shouldn't it be ambiguous? I had a hard time reading the spec (the infamous section 6.26), so I started an email thread. People had various conflicting theories. None of them could account for the fact that the seemingly identical

def mul(a: Double, b: Float) = a * b
def mul(a: Fraction, b: Fraction) = a * b

is ambiguous.

Daniel Sobral figured it out, not by reasoning from experience or common sense, but by reading the spec. When choosing among overloaded methods, Scala prefers the most specific one.

Why is mul(Fraction, Fraction) more specific than mul(Double, Double)? Consider a call mul(0.5, 0.5). You can't use mul(Fraction, Fraction): there is no Double-to-Fraction conversion. But with mul(f, f), either overload works. So mul(Double, Double) is more general: it works in strictly more cases. mul(Fraction, Fraction) is more specific, and more specific is considered better.
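Here is that resolution rule in a compact, runnable form (again a stripped-down Fraction, to keep the sketch self-contained):

```scala
import scala.language.implicitConversions

// Stripped-down Fraction: just enough to exercise overload resolution.
class Fraction(val num: Int, val den: Int) {
  def *(other: Fraction) = new Fraction(num * other.num, den * other.den)
  override def toString = num + "/" + den
}
implicit def int2Fraction(n: Int): Fraction = new Fraction(n, 1)
implicit def fraction2Double(f: Fraction): Double = f.num * 1.0 / f.den

def mul(a: Double, b: Double) = a * b
def mul(a: Fraction, b: Fraction) = a * b

val f = new Fraction(3, 4)
// Both overloads are applicable after conversions, but
// mul(Fraction, Fraction) is more specific, so it wins:
println(mul(f, 5)) // 15/4, not 3.75
```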

That's perhaps more intuitive in the case of inheritance. You'd want to prefer a fun(Person) over a fun(Object) when the argument is a Person or a Student. Specific is better.

If you think this is yet another proof that Scala is more complex than Java, click here and weep. The Java spec is strictly more complex in this regard.

What's the point? In a language that isn't formally specified, these rules can and do change on a whim, as implementors fine-tune the compiler to achieve this or that pretty effect. You have no recourse if your code breaks as a result.

In Scala, there is a language specification, and the behavior isn't likely to change, except by a conscious effort. If something doesn't work according to spec, I can file a bug, and there is no discussion whether it is a bug or not.

Some time ago, there was a discussion in the Java Champions mailing list whether there was any value to having multiple implementations of the JDK. Some people thought it was fine that the open-source implementors had only one choice—take OpenJDK and tweak it. Me, not so much. I am a huge fan of multiple implementations. It puts pressure on the spec authors to separate essential and ephemeral complexity, and, of course, it contributes to specs that are comprehensible and implementable. Nobody would dream of having just one implementation of HTML or C++, so why be satisfied with one implementation of the Java platform?

I finished my “modern programming languages” course at the Ho Chi Minh City University of Technology. We covered metaprogramming (with Ruby and Rails), continuations (with Racket and Scala), concurrency (with Scala and Clojure), and finished off with a dose of Haskell to see type inference and type classes in action. Here are the hardy souls who stuck it out until the end. (All but one, that is--one is behind the camera.)

What's the fun of teaching if you don't also get to learn something new? It was about time for me to understand the fuss about monads. The blogosphere is awash in articles proclaiming that monads are elephants, monads are burritos, monads are space suits, and so on. Naturally, these analogies left me unenlightened.

Along the way, I made the mistake of looking for tutorials that tried to explain monads using Scala (such as this one). I should have known better. A few years ago, I had been baffled by blog posts that tried to teach continuations with C++ or Java, with a lot of pointless machinery and little to show for it. Finally, I took a deep breath and figured out how continuations work in Scheme. It is always best to learn these things in their natural habitat.

Ok, time to figure out monads in Haskell. This article is very good, but I kept scratching my head at the monad laws

    return x >>= f ≡ f x
    c >>= return ≡ c
    c >>= (\x -> f x >>= g) ≡ (c >>= f) >>= g
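The laws become concrete if you try them on a familiar monad. Here is a quick Scala check, with Option playing the monad, flatMap playing >>=, and Some playing return (the sample functions f, g, c are mine):

```scala
// Checking the three monad laws on Option, for sample values.
// flatMap plays the role of >>=, Some the role of return.
val f: Int => Option[Int] = x => Some(x + 1)
val g: Int => Option[Int] = x => if (x != 0) Some(100 / x) else None
val c: Option[Int] = Some(6)

// return x >>= f  ≡  f x
assert(Some(6).flatMap(f) == f(6))
// c >>= return  ≡  c
assert(c.flatMap(Some(_)) == c)
// c >>= (\x -> f x >>= g)  ≡  (c >>= f) >>= g
assert(c.flatMap(x => f(x).flatMap(g)) == c.flatMap(f).flatMap(g))

println("laws hold for these samples")
```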

I could verify the laws for a bunch of monads, but they seemed arbitrary and ugly. Then I ran across this page, with a casual remark that the laws look much better when formulated in terms of the “Kleisli composition operator” >=>:

    f >=> return ≡ f
    return >=> g ≡ g
    (f >=> g) >=> h ≡ f >=> (g >=> h)

My ah-ha moment came when I saw the signature of that mysterious operator:

(a -> m b) -> (b -> m c) -> (a -> m c)

Here, we have two functions that take ordinary values and turn them into values in some alternate universe (Maybe t or IO t or whatever). And now we want to compose two of these functions to get another. Of course we do.
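Seen from Scala, >=> is just a one-liner over flatMap. Here is a sketch for Option (the name kleisli and the sample functions are mine):

```scala
// Kleisli composition for Option: feed f's result into g via flatMap.
def kleisli[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
  a => f(a).flatMap(g)

// Two functions from ordinary values into the Option "alternate universe"
val parse: String => Option[Int] = s => scala.util.Try(s.toInt).toOption
val recip: Int => Option[Double] = n => if (n != 0) Some(1.0 / n) else None

val parseThenRecip: String => Option[Double] = kleisli(parse, recip)
println(parseThenRecip("4"))   // Some(0.25)
println(parseThenRecip("0"))   // None
println(parseThenRecip("abc")) // None
```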

I instantly realized that, like so many other people, I had invented monads before. It was in a cell phone library that was shared between Android and BlackBerry apps. I had to deal with multi-step user interface actions. In each step, the user was asked some question. The answer only emerged when the user did some platform-specific input. I needed a mechanism for doing multiple steps in a row. I built a mini-framework of actions, each of which had an event handler for delivering the answer. Actions could be composed. The first handler kicked off the second action and the second handler returned the final answer. Yes, composition was associative :-) Unbeknownst to me, that was my first monad.
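The library code isn't shown here, but the shape is easy to reconstruct. Here is a hypothetical sketch (all names are mine) of such composable actions; it is, in fact, the continuation monad in disguise:

```scala
// Hypothetical reconstruction of a composable-actions mini-framework.
// An Action eventually delivers an answer of type A to a handler;
// andThen chains a follow-up action that depends on that answer.
trait Action[A] { self =>
  def run(handler: A => Unit): Unit
  // Monadic bind: the first handler kicks off the next action,
  // whose handler delivers the final answer.
  def andThen[B](next: A => Action[B]): Action[B] = new Action[B] {
    def run(handler: B => Unit): Unit = self.run(a => next(a).run(handler))
  }
}
// Monadic unit: an action that immediately delivers a fixed answer.
def done[A](a: A): Action[A] = new Action[A] {
  def run(handler: A => Unit): Unit = handler(a)
}

// Two fake "ask the user" steps, answered synchronously for the demo:
val askName: Action[String] = done("Cay")
def askAge(name: String): Action[Int] = done(name.length + 40)

var result = ""
askName.andThen(askAge).run(age => result = s"age $age")
println(result)
```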

What's the point? Had I known what monads are, I would have arrived at my design more quickly. I would have picked off-the-shelf convenience functions instead of inventing my own as the need became apparent. Monads aren't burritos, they are a design pattern.

I could go on and explain, but I won't, for the reason that is so ably expressed here.