Typeclasses are typically taught by drawing parallels with method overloading (another form of ad hoc polymorphism),
and Java-like interfaces (a form of subtyping polymorphism).
Even though typeclasses are a concept in their own right, it’s only natural to want to compare them to other familiar constructs.
This can lead to a lot of confusion, since the line separating these constructs can be blurry.
So in this post I’ll try to focus on their differences and show that their similarities are only superficial.
Null

Ever since its debut in Algol W back in 1965, most programming languages have adopted this concept of nullability by default.
That is, a variable of (pretty much) every type can be assigned this special null value that represents the absence of an actual value.
Since then, its creator, Tony Hoare, has called it his billion-dollar mistake.
Nowadays, it’s common knowledge that null is a source of headaches, due to:
Being able to travel silently through the code before it explodes in your face,
Well, amazingly enough, it turns out null pointers don’t just cause bugs in programs, they cause bugs in type systems too!
[…] But unlike most null-pointer bugs, this one took 12 years to discover.
- Ross Tate
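As a minimal sketch of that first point (the class and names here are invented for illustration), a null produced deep inside one function can flow through intermediate code unchecked, and only blow up far from where it originated:

```java
import java.util.HashMap;
import java.util.Map;

public class SilentNull {
    static final Map<String, String> EMAILS = new HashMap<>();

    // Returns null when the user is unknown -- nothing in the
    // signature warns the caller about that possibility.
    static String emailOf(String user) {
        return EMAILS.get(user);
    }

    static String domainOf(String email) {
        // The null produced above travels here unharmed...
        return email.substring(email.indexOf('@') + 1);
    }

    public static void main(String[] args) {
        String email = emailOf("alice");  // null, silently
        String domain = domainOf(email);  // ...and only explodes here, with an NPE
        System.out.println(domain);
    }
}
```

The stack trace points at `domainOf`, but the actual mistake (the missing user) happened one call earlier, and in real code the distance between the two can be arbitrarily large.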
Today, I won’t go over those issues, which have already been thoroughly debated.
Instead, I’ll take a step back and discuss why null inhibits correctness and hinders one’s ability to reason about code.
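As one hypothetical illustration of that reasoning cost (the names below are made up), a plain `String` return type in Java silently admits null, so the signature tells the caller nothing about whether absence is possible; wrapping the result in `java.util.Optional` moves that possibility into the type itself:

```java
import java.util.Optional;

public class HonestTypes {
    // The type says "String" but means "String or nothing":
    // the caller cannot tell that from the signature alone.
    static String nicknameOrNull(String user) {
        return user.equals("bob") ? "bobby" : null;
    }

    // Optional records the possibility of absence in the type,
    // so every caller is forced to acknowledge it.
    static Optional<String> nickname(String user) {
        return Optional.ofNullable(nicknameOrNull(user));
    }

    public static void main(String[] args) {
        // The absent case must be handled explicitly to get a String back.
        String greeting = nickname("alice")
                .map(n -> "hi " + n)
                .orElse("hi stranger");
        System.out.println(greeting); // prints "hi stranger"
    }
}
```

With the nullable version, correctness depends on every caller remembering an invariant the compiler never checks; with the `Optional` version, forgetting the absent case is a type error rather than a runtime surprise.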