Content
Type inference versus type checking
Typing rules and typing derivations
Type checking/inference in Go (and some design choices in Go)
Programming language semantics: Dynamic versus static semantics
Simply-typed lambda calculus
Proofs are programs
Polymorphism
Type inference versus type checking
package main

import "fmt"

func main() {
    var x int
    x = 1
    y := x + 1
    fmt.Printf("y = %d", y)
}
The type of y is inferred from the right-hand side. Pretty convenient!
Languages like Haskell support full type inference where the types of functions can be inferred.
Type inference refers to the process of automatically inferring the type of program text (variables, expressions, functions, ...).
Type checking refers to the process of automatically checking if the provided (user-annotated) type information matches the program text.
Sometimes, the term type inference is used in a context where we only carry out type checking. Unless a clear distinction between inference and checking is necessary, we often use the (more general) term type inference instead of type checking.
Both methods can either be carried out statically (at compile-time) or dynamically (at run-time).
Both methods rely on typing rules that specify the conditions under which some program text is type correct. For example, for an assignment statement we demand that the types of the expressions on the left-hand and right-hand side must be identical. There is an entire branch of computer science devoted to the formalization of such typing rules and how to implement them effectively via type checking/inference.
Typing rules and typing derivations
Why do we need formal typing rules?
Consider the informal typing rules for arithmetic expressions.
The issue is that informal descriptions are imprecise and cannot be checked mechanically.
We seek a formal system in which type correctness can be stated precisely and verified systematically.
We use a formal system represented by some typing rules. A typing rule makes a statement about the well-typing of expressions. Expressions may contain free variables, so we need to know the types of these variables.
We assume an environment G of the following form. G denotes a set of type/variable declarations of the form

{x1 : int, ..., xn : int}

where x1, ..., xn are distinct. We refer to xi : int as a type assignment.
We write G |- e : int to denote that expression e has type int under the environment G. We refer to G |- e : int as a typing judgment.
We characterize all valid typing judgments in terms of typing rules.
x : int is an element in G
(Var) -------------
G |- x : int
i is a natural number
(Int) ---------------
G |- i : int
G |- e1 : int G |- e2 : int
(Add) ---------------------------------
G |- e1 + e2 : int
The rules above are to be read as "if-then" rules. The part above the --- is the precondition (premise) and the part below is the postcondition (conclusion).
We verify that { x:int, y:int } |- x + (y + 1) : int is valid (derivable) by building a typing derivation tree. The important observation is that all typing rules are syntax-directed. So, based on the syntax of expressions we can decide which typing rule to apply.
      y : int in { x:int, y:int }              1 is a natural number
(Var) ----------------------------       (Int) ----------------------------
      { x:int, y:int } |- y : int              { x:int, y:int } |- 1 : int
(Add) ---------------------------------------------------------------------
      { x:int, y:int } |- y + 1 : int

      x : int in { x:int, y:int }
(Var) ----------------------------
      { x:int, y:int } |- x : int        { x:int, y:int } |- y + 1 : int   (from above)
(Add) ---------------------------------------------------------------------
      { x:int, y:int } |- x + (y + 1) : int
It is rather straightforward here to build the typing derivation tree. Assuming the expression is represented by some AST (= Abstract Syntax Tree), the type checker/inferencer simply needs to recurse over the AST (by structural induction), where for each kind of expression we have a checker/inference function that applies the respective typing rule.
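To make the syntax-directed recursion concrete, here is a minimal sketch in Haskell (the data type and function names are assumed for illustration and are not part of the project material):

import qualified Data.Map as Map

data Exp = Var String | IntLit Int | Add Exp Exp

data Ty = TInt deriving (Eq, Show)

type Env = Map.Map String Ty

-- Syntax-directed: each expression form selects exactly one typing rule.
infer :: Env -> Exp -> Maybe Ty
infer g (Var x)     = Map.lookup x g   -- rule (Var)
infer _ (IntLit _)  = Just TInt        -- rule (Int)
infer g (Add e1 e2) = do               -- rule (Add)
  TInt <- infer g e1
  TInt <- infer g e2
  Just TInt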
Further (implementation) details follow as part of the project.
Type checking/inference in Go (and some design choices in Go)
We take a brief look at type checking/inference in Go. Consider the following example.
package main

import "fmt"

func f(x int) int {
    var y int
    y = x + 1
    return y
}

func f2(x int) int {
    y := x + 1
    return y
}

func main() {
    var x int
    x = 1
    fmt.Printf("%d \n", f(x))
    fmt.Printf("%d \n", f2(x))
    g := f
    y := 1
    fmt.Printf("%d \n", g(y))
}
Via x := e we indicate that on the left-hand side we introduce (declare) a fresh variable x whose value is determined by the right-hand side e. No explicit type annotation is provided, as the type of x can be determined by inferring the type of e.
How does this (simple) form of type inference work? We can assume that the types of all variables on the right-hand side are known. So, by following the typing rules, we check that the right-hand side expression is type correct. As part of this type checking process, we also infer the expression's type.
For example, consider the expression from above.
x + 1
where x is of type int
We type check x + 1 as follows.
We observe the structure of the expression.
In our case, we find addition with x and 1 as operands.
So rule (Add) from above applies.
We can conclude that x + 1 is of type int if
x is of type int and 1 is of type int.
This holds!
Notice something? Along the way, we infer the type of x + 1!
We traverse the structure of the expression (structural induction)
by going from the outer level (addition) to the inner level (operands).
What is full type inference? The ability to omit all type annotations. In Go we still need to provide the types of function definitions. In other languages, e.g. Haskell, type annotations for function definitions are optional. We say that such languages support full type inference.
Let's consider our running example where all type annotations are omitted.
func f3(x) {
    y = x + 1
    return y
}
Let's attempt to reconstruct the missing type annotations by inspection of the program text.
1. Consider x + 1
Based on the above typing rules we deduce that
x + 1 is of type int and therefore
x is of type int as well.
2. Consider y = x + 1
The types of left-hand and right-hand side must match.
Hence, we deduce that y is of type int.
3. return y
implies that f3's return type is int
Putting the pieces together, we deduce the following.
func f3(x int) int {
    var y int
    y = x + 1
    return y
}
Is that the only type we can give to this program text? Well, Go also supports float32. So, another possible annotation would be the following.
func f3(x float32) float32 {
    var y float32
    y = x + 1
    return y
}
The program text x + 1 typechecks under the assumption that the integer constant 1 can be coerced into a floating-point number.
The above issue, several types that can be inferred, highlights one of the challenges of type inference. The quest is to design a programming language that enjoys principal type inference. By principal we mean that type inference yields a most general type that subsumes all other types. For more details on this subject, please refer to the research literature.
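As an aside (not about Go): Haskell sidesteps this particular dilemma via type classes. For the unannotated definition below, GHC infers the principal type Num a => a -> a, which subsumes both the integer and the floating-point instantiations.

-- No annotation required; GHC infers the principal type
--   f3 :: Num a => a -> a
f3 x = x + 1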
Coming back to Go. Neither of the above types inferred for f3 is better than the other. Hence, depending on which type we choose (infer), we might reject some programs. For a concrete example see below.
package main

import "fmt"

/*
Suppose Go supports full type inference

func f3(x) {
    y = x + 1
    return y
}
*/

// The following types can be inferred for f3.
// Both types are incomparable: depending on which type we choose
// we might reject some program. See below.

func f3a(x int) int {
    var y int
    y = x + 1
    return y
}

func f3b(x float32) float32 {
    var y float32
    y = x + 1
    return y
}

func main() {
    var x, y int
    var f, g float32
    x = 1
    y = 1
    f = 1.0
    g = 1.0
    x = f3a(y) // OK
    // g = f3a(f) // NOT OK
    g = f3b(f) // OK
    // x = f3b(y) // NOT OK
    fmt.Printf("%d %d %f %f", x, y, f, g)
}
Programming language semantics: Dynamic versus static semantics
Motivation
We need a machine-independent description of the meaning of programs. We distinguish between syntax and semantics.
The valid syntax of programs is described in terms of EBNF. Not every syntactically correct program is meaningful.
We distinguish between the static and dynamic semantics of programs.
Static = Compile-time.
Dynamic = Run-time.
Static semantics = Describe the meaning of programs without actually evaluating the program (at compile-time).
Dynamic semantics = Describe the run-time meaning of programs. Ignore low-level details (register allocation, etc). Sort of a reference implementation.
Static type systems
Static type systems predict the run-time behavior of a program.
Types classify values.
There are many applications.
Simply-Typed Lambda Calculus
e ::= x | \x->e | e e AST of Lambda expressions
Implicitly typed.
Dynamic semantics
We describe the dynamic semantics in terms of a big-step operational semantics.
We write e1 => e2 to denote that the lambda term e1 evaluates to e2.
x => x
\x -> e => \x -> e
e1 => \x -> e e2 => e3 [x |-> e3]e => e4
--------------------------------------------------
e1 e2 => e4
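As a sketch, the big-step rules translate directly into an evaluator in Haskell (assuming, as we do for the typing rules below, that all bound variables are distinct, so naive substitution suffices):

data Term = V String | Lam String Term | App Term Term

-- Naive substitution [x |-> s]e; safe under the distinct-variables convention.
subst :: String -> Term -> Term -> Term
subst x s (V y)       = if x == y then s else V y
subst x s (Lam y e)   = if x == y then Lam y e else Lam y (subst x s e)
subst x s (App e1 e2) = App (subst x s e1) (subst x s e2)

-- Big-step evaluation following the three rules above.
eval :: Term -> Term
eval (V x)       = V x
eval (Lam x e)   = Lam x e
eval (App e1 e2) =
  case eval e1 of
    Lam x e -> eval (subst x (eval e2) e)
    e1'     -> App e1' (eval e2)   -- stuck: e1 does not evaluate to a lambda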
Typing rules (Static semantics)
e ::= x | \x->e | e e AST of Lambda expressions
t ::= t -> t Function types
| Int Primitives
| Bool
G ::= G cup G | { x:t } Variable environment
(Var) (x:t) in G
------------
G |- x : t
(App) G |- e1 : t1 -> t2 G |- e2 : t1
--------------------------------
G |- e1 e2 : t2
(Abs) G cup { x: t1 } |- e : t2
-------------------------
G |- \x->e : t1 -> t2
G |- e : t states that under the environment G, the expression e has type t.
The dashed line separates the premise from the conclusion. If we can establish the premise, the conclusion follows.
The operator cup stands for set union.
- Technicality: we assume that each lambda abstraction introduces a distinct variable.
- Thus, we avoid name clashes in the environment.
- The alternative would be to "overwrite" earlier definitions but it's simpler to assume distinct lambda variables and use a simple set for the environment.
G may contain primitives such as plus : Int -> Int -> Int etc.
Well-typed expressions are guaranteed not to go wrong. That is, there is no exceptional behavior besides cases such as div 1 0.
Such a guarantee does not apply to, for example, C. Why? Well, in C and other languages we can easily bend the typing rules via type casts (either explicitly or implicitly).
Typing example \y -> \x -> x y
Here is the typing tree. It looks like an (upside-down) tree with the conclusion as the root at the bottom. Nodes are premises of typing rules. Leaves correspond to the (Var) base case.
      (x : Int->Bool) in {y:Int,x:Int->Bool}          (y : Int) in {y:Int,x:Int->Bool}
(Var) --------------------------------------    (Var) --------------------------------
      {y:Int,x:Int->Bool} |- x : Int->Bool            {y:Int,x:Int->Bool} |- y : Int
(App) ---------------------------------------------------------------------------------
      {y:Int,x:Int->Bool} |- x y : Bool
(Abs) ---------------------------------------------------------------------------------
      {y:Int} |- \x -> x y : (Int->Bool)->Bool
(Abs) ---------------------------------------------------------------------------------
      {} |- \y -> \x -> x y : Int->((Int->Bool)->Bool)
Explicitly typed variant
e ::= x | \x:t->e | e e
(Abs-Exp) G cup { x: t1 } |- e : t2
-------------------------
G |- \x:t1->e : t1 -> t2
Notation
Recall that function application is left associative. However, when writing down type specifications, we assume that function types are right associative. Consider the Haskell example where we omit any parentheses.
apply :: (a->b) -> a -> b
apply f x = f x
inc x = x + 1
useCase = apply inc 1
Here's the version with parentheses.
apply :: (a->b) -> (a -> b)
apply f x = f x
inc x = x + 1
useCase = (apply inc) 1
Results
Type preservation (subject reduction): if {} |- e : t and e => e' then {} |- e' : t.
Types are not unique for implicitly typed expressions. For example, {} |- \x -> x : t -> t
where t = Int
      t = Bool
      t = Int -> Int
      ...
Let e be an explicitly typed lambda expression. Suppose we have that G |- e : t1 and G |- e : t2. Then, t1 = t2. That is, explicitly typed expressions have unique types.
Proofs are programs
Logic
(Deduction) G, t |- t
(Modus Ponens) G |- t1 -> t2 G |- t1
--------------------------
G |- t2
t stands for a proposition, like "It rains today".
t1 -> t2 stands for implication, like "If it rains today then we need an umbrella".
Deduction: We may make use of any of our assumptions.
Modus Ponens: If we have t1 -> t2 and can establish t1, then we can deduce t2.
Does the following statement hold: (a->b)->((c->a)->(c->b))?
Proofs
If you look closely, the logical deduction rules almost match our typing rules. The difference: the typing rules additionally carry lambda terms. These lambda terms serve as proofs.
Let's prove (a->b)->((c->a)->(c->b))
We find that
{} |- \x -> \y -> \z-> x (y z) : (a->b)->((c->a)->(c->b))
Informally,
x is of type a->b
y is of type c->a
z is of type c
Hence, y z is of type a
Hence, x (y z) is of type b
Hence, \x -> \y -> \z-> x (y z) has type (a->b)->((c->a)->(c->b))
We can view \x -> \y -> \z-> x (y z) as a proof for the proposition (a->b)->((c->a)->(c->b)).
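As a quick check, a typechecker can play the role of a proof checker. In Haskell, the term is accepted at exactly this type:

-- The proposition as a type, the lambda term as its proof;
-- GHC's typechecker acts as the proof checker.
proof :: (a -> b) -> ((c -> a) -> (c -> b))
proof = \x -> \y -> \z -> x (y z)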
Curry-Howard Isomorphism
Curry and Howard discovered the connection between proofs in logic and lambda terms.
Lots of cool applications. For example, simply state the type/propositions and automatically infer the program!
Polymorphism
Polymorphism ("greek)" = "many forms".
Polymorphism in programming languages: The ability that a function can be used on values of different types.
Various forms of polymorphism.
Subtype polymorphism
Parametric polymorphism
Ad-hoc polymorphism
Subtype polymorphism (aka inclusion polymorphism)
In general, if an expression e is of type t1 and t1 is a subtype of t2, written t1 <= t2, then e can be used in any context where its expected type is t2.
(Sub) G |- e : t1 t1 <= t2
---------------------
G |- e : t2
Let's take a look at some specific examples of this (subtyping) principle.
Coercive subtyping
int func(float x) {
    if (x < 1.0) return 1;
    return 0;
}

int main() {
    int arg = 1;
    float res;
    res = func(arg);
}
The point is that in the situation described above, the function of type Float -> Int behaves like a function of type Int -> Float.
This is safe, because we can safely coerce the integer argument into floating-point value. The same applies to the return value.
Instrumentation
Coercing a value of type Int into a value of type Float implies a change in (data) representation. The actual value remains unchanged, however.
A standard method is to instrument the program based on the program's typing. Assuming we have a function coerce : Int -> Float, instrumentation of the above program yields the following.
int main() {
    int arg = 1;
    float res;
    res = coerce(func(coerce(arg)));
}
Insertion of the coercion is guided by the subtyping rule.
We write G |- e : t --> E to denote a source program e with type t where E is the instrumented program.
Similar to typing rules, we require rules that tell us how to build coercions.
We write |- t1 <= t2 --> c to denote that t1 is a coercive subtype of t2 where c is the coercion function.
(ItoF) |- Int <= Float --> coerce1
(FtoD) |- Float <= Double --> coerce2
|- t1 <= t2 --> c1 |- t2 <= t3 --> c2
(Trans) ------------------------------------------
|- t1 <= t3 --> c2 . c1
where c2 . c1 denotes function composition and is equivalent to \x -> c2(c1(x)).
The coercive subtyping rule is as follows.
G |- e : t1 --> E |- t1 <= t2 --> c
(CoerceSub) --------------------------------------
G |- e : t2 --> c(E)
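To make this concrete, here is a minimal sketch in Haskell (the names and the universal value representation are assumptions made for illustration) of building coercions following (ItoF), (FtoD) and (Trans):

data Ty = TInt | TFloat | TDouble deriving (Eq, Show)

-- A universal value representation so that coercions are ordinary functions.
data Val = VInt Int | VFloat Float | VDouble Double deriving Show

coerce1 :: Val -> Val                       -- witnesses Int <= Float   (ItoF)
coerce1 (VInt i) = VFloat (fromIntegral i)
coerce1 v        = v

coerce2 :: Val -> Val                       -- witnesses Float <= Double (FtoD)
coerce2 (VFloat f) = VDouble (realToFrac f)
coerce2 v          = v

-- |- t1 <= t2 --> c : return the coercion, if one exists.
subCoerce :: Ty -> Ty -> Maybe (Val -> Val)
subCoerce t1 t2 | t1 == t2 = Just id
subCoerce TInt   TFloat    = Just coerce1
subCoerce TFloat TDouble   = Just coerce2
subCoerce TInt   TDouble   = Just (coerce2 . coerce1)   -- via (Trans)
subCoerce _      _         = Nothing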
Nominal subtyping
Types in a subtype relation must be declared explicitly. See for example (sub)class declarations in Java and C++.
class Point {
    int x,y;
}

class ColoredPoint : Point {
    int colorCode;
}
We have that ColoredPoint <= Point. So, anywhere a Point object is expected, we can also use an object of type ColoredPoint.
Structural subtyping
Consider
class Point {
    int x,y;
}

class ColoredPoint : Point {
    int colorCode;
}

class CPoint {
    int x, y, colorCode;
}
Objects of type ColoredPoint and CPoint are structurally equivalent. Every entry in CPoint is also found in ColoredPoint and vice versa. So, CPoint is a structural subtype of Point.
Behavioral subtyping (aka Liskov substitution principle = LSP)
If an expression e is of type t1 and t1 is a behavioral subtype of t2, written t1 <= t2, then e can be used in any context where its expected type is t2 without changing the behavior of the program.
Inheritance
Reuse of an existing implementation (method) and possibly override an existing implementation in a derived class. For example, consider
class A {
    int m1() { return 1; }
    bool m2() { return false; }
}

class B : A {
    int m1() { return 2; }
}
There are various forms of inheritance.
In general, LSP is no longer satisfied. Consider
class Bird {
    void fly() { ... }
}

class Eagle : Bird {
    void fly() { ... }
}

class Penguin : Bird {
    void fly() { ... } // can NOT fly !!!
}

func(Bird b) {
    b.fly();
}

main() {
    Bird b;
    Eagle e;
    Penguin p;
    func(b);
    func(e);
    func(p); // behavior changes!
}
To guarantee LSP, need to impose stronger conditions (often formalized via "contracts"), or we can restrict the language. For example, in C++, if we only use static methods, LSP is guaranteed.
Strictly speaking, inheritance and subtyping are different concepts (but often they coincide).
Subtyping = subtype compatibility: an expression of a subtype can be used instead.
Inheritance = reuse of implementations.
In languages such as C++ and Java, subtyping can be expressed in terms of inheritance.
Function subtyping
Recall
int func(float x) {
    if (x < 1.0) return 1;
    return 0;
}

int main() {
    int arg = 1;
    float res;
    res = func(arg); // P
}
We find func : float -> int. At program point P, func is used in the context int -> float. So, we can argue that float -> int <= int -> float.
Subtyping among functions in general: t1 -> t2 <= t1' -> t2' iff t1' <= t1 and t2 <= t2'.
We say that functions are contra-variant in the argument and co-variant in the result.
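In the coercion reading from above, this variance shows up directly: to coerce a function we pre-compose a coercion on the argument (contra-variant) and post-compose a coercion on the result (co-variant). A sketch in Haskell:

-- Given coercions cIn  : t1' -> t1  (argument, contra-variant)
--             and cOut : t2  -> t2' (result, co-variant),
-- a function of type t1 -> t2 can be used at type t1' -> t2'.
coerceFun :: (t1' -> t1) -> (t2 -> t2') -> (t1 -> t2) -> (t1' -> t2')
coerceFun cIn cOut f = cOut . f . cIn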
Array subtyping
Co-variant? Unsafe as soon as we write into the array (see the Java example below).
Contra-variant? Unsafe as soon as we read from the array.
Conclusion: Arrays (which support both reading and writing) ought to be invariant.
Array subtyping in Java
Array subtyping in Java is co-variant!
Hence, the following program is well-typed and will crash!
void main() {
    String[] s;
    s = new String[10];
    test(s);
}

void test (Object [] a) {
    a[1] = new Point(10,20); // Crash!
}
OK, the crash leads to an exception, still, didn't we just learn that array subtyping ought to be invariant?
Well, thanks to co-variant subtyping we can supply the programmer with "generic" array operations. We are using Object to mimic genericity here.
These operations are safe as long as we only read but not write (recall the earlier reasoning for invariance of array subtyping).
Of course, even co-variant array subtyping is safe due to run-time checks (we might encounter an exception).
Recall function subtyping
Earlier we argued that the following ought to hold:
class Point {
    int x,y;
}

class ColoredPoint : Point {
    int colorCode;
}
We have that ColoredPoint <= Point. Assuming strict co-variance, we would find that ColoredPoint -> Int <= Point -> Int.
Let's define the following function
int extract(ColoredPoint cp) {
    return cp.colorCode;
}
extract has type ColoredPoint -> Int. Based on the subtyping principle and the fact that ColoredPoint -> Int <= Point -> Int, we can use extract in a context Point -> Int. That is, pass a value of type Point to extract.
Will this work? No! If we pass a value of type Point to extract, in the function body we attempt to access the field colorCode, which does not exist. Hence, we obtain some run-time failure. This is why function subtyping must be contra-variant (not co-variant) in the argument.
Parametric polymorphism
A form of polymorphism where a function is implemented without mention of any specific type, i.e. the implementation is parametric. The function behaves the same for all specific instances.
Parametric polymorphic type system
We take a look at the typing theory behind 'generics'.
Formally, this is known as parametric polymorphism. The term polymorphism comes from Greek and roughly stands for "many forms/shapes". We have already seen subtype polymorphism.
Let's start with a classic example. What type should we give a sorting function?
For example, in OCaml we find
let apply f x = f x;;
val apply : ('a -> 'b) -> 'a -> 'b = <fun>
The first argument is a function of type ('a -> 'b) which will be applied to the second argument of type 'a. The argument type of the function matches the type of the second argument. The definition of apply does not constrain 'a and 'b in any way. We say that 'a and 'b are parametric variables.
Java and C++
There are plenty of Generics Java examples. Please check the literature.
What about templates in C++?
template<typename T>
void swap(T& x, T& y) {
    T tmp;
    tmp=x; x=y; y=tmp;
}
Templates are a syntactic macro language which is expanded at compile-time. This is vaguely connected to parametric polymorphism, but it's something different.
If you're interested, check out the "Concepts" discussion where some brave people try to enrich C++ with some "real" polymorphism.
Parametric polymorphism typing rules
e ::= x | \x->e | e e | let x = e in e
t ::= t -> t Function types
| Int Primitives
| Bool
| a Type variable
sigma ::= t Simple type
| forall a1 ... an. t Quantified type
G ::= G cup G | { x:sigma } Variable environment
(Var) (x:sigma) in G
------------
G |- x : sigma
(App) G |- e1 : t1 -> t2 G |- e2 : t1
--------------------------------
G |- e1 e2 : t2
(Abs) G cup { x: t1 } |- e : t2
-------------------------
G |- \x->e : t1 -> t2
(Forall-Intro) G |- e : t
{a1, ..., an} in fv(t)
{a1, ..., an} not in fv(G)
----------------------------
G |- e : forall a1 ... an. t
(Forall-Elim) G |- e : forall a1 ... an. t
----------------------------
G |- e : [t1/a1, ..., tn/an] t
(Let) G |- e1 : sigma
G cup { x:sigma } |- e2 : t
-------------------------
G |- let x = e1 in e2 : t
As before, each lambda abstraction introduces a distinct variable. The same applies to let definitions.
Subtle point: we only quantify over simple types. Higher-rank polymorphism will not be discussed here.
The above system is also known as the Hindley/Milner system aka rank-1 parametric polymorphic lambda calculus with a let construct.
The rank indicates the nesting of quantifiers. In our case, rank 1 states that quantifiers only appear at the outermost level.
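For example, in Haskell (whose core type system is the Hindley/Milner system), let-bound variables receive quantified types while lambda-bound variables receive only simple types:

-- id' is let-bound, hence generalized to forall a. a -> a
-- and can be instantiated at Int and at Bool in the same body.
ok :: (Int, Bool)
ok = let id' x = x
     in (id' 1, id' True)

-- f is lambda-bound, hence it only receives a simple type;
-- using it at two different instances is rejected:
-- bad = (\f -> (f 1, f True)) (\x -> x)   -- type error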
Auxiliary definitions
Free variables
fv(t) denotes the set of free variables in t.
We define fv(t) by structural induction:
fv(Int) = {}
fv(Bool) = {}
fv(a) = {a}
fv(t1->t2) = fv(t1) cup fv(t2)
In case of a type scheme, the free variables equal all free type variables except those which are quantified.
fv(forall a1 ... an.t) = fv(t) - {a1,...,an}
The free variables in environment G equal the free variables of the types of variables.
fv({}) = {}
fv({ x : sigma}) = fv(sigma)
fv(G1 cup G2) = fv(G1) cup fv(G2)
Type instantiation
We say t is an instance of sigma if t can be derived from sigma by instantiation of the quantified type variables.
We write sigma <= t in such a case.
The formal definition of sigma <= t is as follows.
sigma <= sigma a type is an instance of itself
forall a1 ... an.t <= t' iff there exists t1 ... tn such that [a1 |-> t1,...,an |-> tn]t <= t'
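For example, forall a. a -> a <= Int -> Int by choosing [a |-> Int], and forall a b. a -> b <= Int -> Bool by choosing [a |-> Int, b |-> Bool].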
Typing examples
Consider \x-> x y:
      {y : a1, x : a1->a2 } |- x : a1->a2   (Var)
      {y : a1, x : a1->a2 } |- y : a1       (Var)
(App) ----------------------------------------------------
      {y : a1, x : a1->a2 } |- x y : a2
(Abs) ----------------------------------------------------
      {y : a1} |- \x-> x y : (a1->a2) -> a2
(Forall-Intro) ---------------------------------------------
      {y : a1} |- \x-> x y : forall a2. (a1->a2) -> a2
We can quantify over a2 but not a1, because a1 occurs free in the environment.
Explicitly versus implicitly typed expression
Suppose we have a well-typed expression e, that is, we have that G |- e : t for some environment G and type t.
In Java/C++, expressions (programs) are explicitly typed. That is, the type of each variable must be provided and, in case of a generic expression applied in a specific context, the specialized type must be given.
template<typename T>
void swap(T& x, T& y) {
    T tmp;
    tmp=x; x=y; y=tmp;
}

int main() {
    int i=1, j=2;
    swap<int>(i,j);
    char c='a', d='b';
    swap<char>(c,d);
}
Let's extend our expression language and make type abstraction and type application explicit. In addition, we allow for the explicit annotation of types of lambda-bound variables.
We write /\ a -> for type abstraction, e.g. written in C++ as template<typename a>. We write e[t] for type application, e.g. written in C++ as e<t>.
e ::= x | \x->e | e e | let x = e in e
| \x:t-> e Type-annotated abstraction
| /\ a -> e Type abstraction
| e[t] Type application
t ::= t -> t Function types
| Int Primitives
| Bool
| a Type variable
sigma ::= t Simple type
| forall a1 ... an. t Quantified type
G ::= G cup G | { x:sigma } Variable environment
(Var) (x:sigma) in G
------------
G |- x : sigma
(App) G |- e1 : t1 -> t2 G |- e2 : t1
--------------------------------
G |- e1 e2 : t2
(Abs) G cup { x: t1 } |- e : t2
-------------------------
G |- \x->e : t1 -> t2
(Abs-Annot) G cup { x: t1 } |- e : t2
-------------------------
G |- \x:t1->e : t1 -> t2
(Forall-Intro) G |- e : t
{a1, ..., an} in fv(t)
{a1, ..., an} not in fv(G)
----------------------------
G |- e : forall a1 ... an. t
(Forall-Elim) G |- e : forall a1 ... an. t
----------------------------
G |- e : [t1/a1, ..., tn/an] t
(Let) G |- e1 : sigma
G cup { x:sigma } |- e2 : t
-------------------------
G |- let x = e1 in e2 : t
(Ty-Abs) G |- e : t a in fv(t) a not in fv(G)
-------------------------------------------
G |- /\ a -> e : forall a.t
(Ty-App) G |- e : forall a.t1
------------------------------------
G |- e[t2] : [t2/a] t1
Recall our earlier (implicitly) typed example \x-> x y:
      {y : a1, x : a1->a2 } |- x : a1->a2   (Var)
      {y : a1, x : a1->a2 } |- y : a1       (Var)
(App) ----------------------------------------------------
      {y : a1, x : a1->a2 } |- x y : a2
(Abs) ----------------------------------------------------
      {y : a1} |- \x-> x y : (a1->a2) -> a2
(Forall-Intro) ---------------------------------------------
      {y : a1} |- \x-> x y : forall a2. (a1->a2) -> a2
Here's the explicit variant /\ a2 -> \x:(a1->a2)-> x y:
      {y : a1, x : a1->a2 } |- x : a1->a2   (Var)
      {y : a1, x : a1->a2 } |- y : a1       (Var)
(App) ------------------------------------------------------
      {y : a1, x : a1->a2 } |- x y : a2
(Abs-Annot) --------------------------------------------------
      {y : a1} |- \x:(a1->a2)-> x y : (a1->a2) -> a2
      a2 in fv((a1->a2) -> a2)   a2 not in fv({y : a1})
(Ty-Abs) -----------------------------------------------------
      {y : a1} |- /\ a2 -> \x:(a1->a2)-> x y : forall a2. (a1->a2) -> a2
Point to note. There is no more guessing involved once we strictly use explicitly typed expressions.
Fact. Every implicitly typed expression can be converted into an explicitly typed expression.
Extended example making use of the let construct:

let f = /\ a2 -> \x:(a1->a2)-> x y
in f[Int] z

where we assume G = {y : a1, z : a1->Int}.
Let's consider the (explicit) typing derivation:
(Ty-App) G' |- f : forall a2. (a1->a2) -> a2
[Int/a2] ((a1->a2) -> a2) = (a1->Int)->Int (z : a1->Int) in G'
------------------------------ (Var) -------------------
(App) G' |- f[Int] : (a1->Int)->Int G' |- z : a1->Int
G' = G cup { f : forall a2. (a1->a2) -> a2 }
-----------------------------------------------------------
(Let) G cup { f : forall a2. (a1->a2) -> a2 } |- f[Int] z : Int
G |- /\ a2 -> \x:(a1->a2)-> x y : forall a2. (a1->a2) -> a2 (see above)
-------------------------------------------------------
G |- let f = /\ a2 -> \x:(a1->a2)-> x y
     in f[Int] z : Int
Some results
Type inference for the Hindley/Milner system is decidable and yields a most general (principal) type. This result is due to Damas/Milner (W algorithm, published in 1985).
The time complexity can be exponential (for mostly contrived examples).
Type checking for the explicitly typed variant is rather straightforward.
Any implicitly typed program that is well-typed can be translated into an explicitly typed program.
Compilation of parametric polymorphism
Roughly, there are two possible approaches to compile programs which make use of parametric polymorphism.
Duplicate code for specific instantiations.
Avoid code duplication by using a universal representation of types.
The code duplication approach is fairly simple to implement via a type-directed translation scheme. An obvious issue is the potential code blow-up.
In case of a universal representation of types, we may require reflection to cast the universal type into a more specific type. This may imply some (possibly costly) coercions.
There are improvements to this compilation scheme. See for example
- Compiling Polymorphism Using Intensional Type Analysis by Harper and Morrisett
Java
Java generics use a universal representation based on the Object type. That is, after compile-time type checking, polymorphic types are erased and can therefore not be inspected at run-time. This leads to a few restrictions. For example,
<T> T instantiateElementType(List<T> arg) {
    return new T(); //causes a compile error
}
yields a compile-time error because the type T necessary to build the object is unknown at run-time.
Aside: The "costly" coercion issue not a problem in the Java setting. We can cast any object into Object
. For the other direction, we apply a type cast which is safe due to compile-time checking.
But polymorphic array creation is not possible in case of a homogeneous translation of polymorphic code.
static Object[] foo(Object a) {
    Object[] ob = new Object[1];
    ob[0] = a;
    return ob;
}
...
Integer i = new Integer(1);
Integer[] iArr = (Integer[]) foo(i); // not possible
Ad-hoc polymorphism
A form of polymorphism where a function can be applied to different types where the behavior may depend on the specific type instances, i.e. the implementation is ad-hoc.
Type classes (in Haskell)
data List a = Null | Cons a (List a)
member x Null = False
member x (Cons y ys)
  | x == y    = True
  | otherwise = member x ys
What's the type? Here's what type inference tells us (to be explained in more detail below).
*Main> :t member
member :: Eq a => a -> List a -> Bool
member works for any type a, provided we supply an instance of (type) class Eq.
== is an overloaded function (operator) which belongs to type class Eq, which is part of the Haskell Prelude (i.e. already defined).
class Eq a where
  (==) :: a -> a -> Bool
A type class specifies the set of methods which shall be defined for each instance. Eq t expresses that type t is a member of type class Eq, and hence we can assume that the overloaded method (==) on type t -> t -> Bool is available.
Aside: We may associate more than one method per type class.
There are a number of pre-defined type class instances such as Eq Int and so forth.
We can extend the set of types for which Eq is defined.
data WeekDay = Mo
             | Tue
             | Wed
             | Thur
             | Friday
             | Sat
             | Sun
eqWD Mo Mo = True
eqWD Tue Tue = True
eqWD Wed Wed = True
eqWD Thur Thur = True
eqWD Friday Friday = True
eqWD Sat Sat = True
eqWD Sun Sun = True
eqWD _ _ = False
instance Eq WeekDay where
  (==) = eqWD
Hence, the following application of member is possible.
testMember = member Mo (Cons Tue (Cons Mo Null))
We can define complex type (class) relations.
instance Eq a => Eq (List a) where
  Null == Null = True
  (Cons x xs) == (Cons y ys) = x == y && xs == ys
  _ == _ = False
The above defines equality for lists assuming we provide for equality among the elements.
Type-directed dictionary-translation
The (static) typing of programs is explained in terms of a variant of the Hindley/Milner system where we keep track of type class constraints resulting from the program text.
Recall
member x Null = False
member x (Cons y ys)
  | x == y    = True
  | otherwise = member x ys
From the program text we derive that the first argument x is of type a and the second parameter is of type List a, and therefore y is of type a. The call x == y implies the type class (constraint) Eq a. Hence, member has type Eq a => a -> List a -> Bool where a is universally quantified.
Let's consider another (typing) example.
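The function in question, isNull, is not shown here; from the discussion below, its definition is presumably:

isNull xs = xs == Null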
Null is of type List a. Hence, the call xs == Null implies the type class constraint Eq (List a). In Haskell, we try to reduce type class constraints as much as possible based on the given instances. From above, we find

instance Eq a => Eq (List a)

Therefore, Eq (List a) is reduced to Eq a. Recall that the instance declaration represents a recipe for how to build an instance of Eq on type List a, provided we are given an instance on type a. Hence, isNull has type Eq a => List a -> Bool where a is universally quantified.
The compiler exploits the typing information and instruments the program code with calls to the expected (type-specific) instances of the overloaded methods. For each constraint Eq a we need to supply a dictionary value which represents an instance of (==) on type a -> a -> Bool. The process of compiling type class programs is called the dictionary-passing translation.
In Haskell, we represent the dictionary for Eq a as a data type. The translation of member is then as follows.
data DictEq a = DictEqVal (a->a->Bool)

unDict (DictEqVal f) = f

memberDict :: DictEq a -> a -> List a -> Bool
memberDict d x Null = False
memberDict (DictEqVal eq) x (Cons y ys)
  | eq x y    = True
  | otherwise = memberDict (DictEqVal eq) x ys
Short summary:
- Think of a value of type DictEq a as the vtable
- Called dictionaries in Haskell speak
- The "dictionary-passing translation" inserts the appropriate dictionaries.
- This is based on the typing derivation where we effectively turn type class constraints into additional arguments (dictionaries).
Let's consider the dictionary-passing translation of the other examples.
testMember = member Mo (Cons Tue (Cons Mo Null))
translates to
testMemberDict = memberDict dictEqWD Mo (Cons Tue (Cons Mo Null))
where
instance Eq WeekDay where ...
instance Eq a => Eq (List a) where ...
translates to
dictEqWD :: DictEq WeekDay
dictEqWD = DictEqVal eqWD

dictEqList :: DictEq t -> DictEq (List t)
dictEqList (DictEqVal eq) =
  DictEqVal (\x -> \y ->
    case (x,y) of
      (Null, Null) -> True
      (Cons x xs, Cons y ys) ->
        eq x y &&
        unDict (dictEqList (DictEqVal eq)) xs ys
      _ -> False)
Coherence Issues
The meaning of type class programs is tied to the typing of programs. If possible, we seek a coherent translation scheme where different typings lead to the (semantically speaking) same translation. Unfortunately, coherence is lost in general. See the following example.
class ShowRead a where
  show :: a -> String
  read :: String -> a
Via show we convert a value into a string (think of "printf") whereas read performs the opposite transformation (think of "scanf").
The interplay of both methods leads to incoherence. Consider
readShow s = show (read s)
A string is transformed into a type-specific format and then transformed back into a string. The issue is that the type-specific format is not exactly specified. Function readShow has type

readShow :: ShowRead a => String -> String

Notice the type class constraint ShowRead a where the parameter a is not constrained by the function type. Hence, when translating readShow we can possibly use any ShowRead instance (there are predefined instances for Int, Float, ...). Hence, different typings lead to different semantic meanings, and coherence is lost.
Unambiguous types to guarantee coherence
In Haskell, the problem is fixed by disallowing programs such as readShow based on an additional static type check. The type ShowRead a => String -> String is considered ambiguous because of the unconstrained a. Ambiguous programs are rejected. For unambiguous programs, we can guarantee that the dictionary-passing translation is coherent.
A consequence of rejecting ambiguous programs is the loss of principal types. We say a type is principal for a program if all other types can be derived (as instances) from this type. As we can see in the above example, the principal type is ambiguous; all unambiguous types are not principal.
A second look at overloading
In "A second look at overloading", Odersky, Wadler and Wehr suggest to restrict type classes to methods where only arguments may be overloaded. Technically, each method m
belonging to type class TC a
must be of the type a -> t
for some type t
. Notice that show
satisfies this condition whereas read
does not. Under this restriction, ambiguities cannot arise. That is, coherence and principal types hold in general for such restricted type class programs.
Extensions (Type class design space)
There exist many variants and extensions such as superclasses, multi-parameter type classes etc. Some extensions such as functional dependencies require extensions of the dictionary-passing translation (the target language needs to be enriched).
We briefly discuss superclasses. Consider the declaration

class Eq a => Ord a where ...

It states that for each instance Ord t there must also be an instance Eq t. In general, this requirement can be checked statically (based on the set of available instances).
In terms of the dictionary-passing translation, each dictionary for Ord also contains the respective dictionary for Eq.
data DictOrd a = DictOrdVal (DictEq a) (a->a->Bool)
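For example (a sketch, with assumed names for the instance dictionaries), the dictionary for Ord Int embeds the dictionary for Eq Int:

dictEqInt :: DictEq Int
dictEqInt = DictEqVal (==)

dictOrdInt :: DictOrd Int
dictOrdInt = DictOrdVal dictEqInt (<)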
Method overloading and interfaces in Go
Parametric polymorphism is restricted to a few built-ins (channels).
Support for ad-hoc polymorphism comes via method overloading, where method dispatch is based on the "receiver" type. The receiver type must be a user-defined type (defined via "type").
type Int struct{ val int }

type MyFloat32 float32

func (i Int) addInt(x int) float32 {
    return float32(i.val + x)
}

func (f MyFloat32) addInt(x int) float32 {
    return float32(f) + float32(x)
}

func (i Int) multInt(x int) float32 {
    return float32(i.val * x)
}
Overloaded methods can be grouped together in interfaces. Interfaces can be nested but must be acyclic. Method names in an interface hierarchy must be unique but method names may appear in distinct interface hierarchies.
type ADDInt interface {
    addInt(int) float32
}

type PLUS interface {
    addInt(int) float32
}

type FOO interface {
    addInt(int) int
}

type MULTInt interface {
    multInt(int) float32
    ADDInt
}
Typing of interfaces and method invocation is based on structural subtyping.
func useAdd(a ADDInt, x int) float32 {
    return a.addInt(x)
}

func useAddMult(a MULTInt, x int) float32 {
    return a.addInt(x) + a.multInt(x)
}

func use1(x PLUS) float32 {
    return useAdd(x, 1)
}

func use2(x ADDInt) float32 {
    return use1(x)
}

func test() {
    i := Int{1}
    useAdd(i, 1)
    use2(i)
    f := MyFloat32(1.0)
    useAddMult(f, 1)
}
The form of ad-hoc polymorphism and structural subtyping in Go is in essence a poor man's version of "SystemO" style overloading.
Go uses a type-directed translation scheme which effectively resembles the dictionary-passing translation scheme.
A dictionary consists of the receiver type and the list of methods defined on this receiver type. At the call site of a method, we simply look up the respective instance in the dictionary provided. To reduce the lookup cost, Go performs the following optimizations.
Methods in interfaces are sorted (alphanumerically).
In case of structural subtyping among interfaces, the given dictionary is coerced into the expected dictionary by simply dropping no longer needed methods.
For example, consider
MULTInt <= ADDInt
which implies the coercion function
coerce (t,[addInt,multInt]) = (t,[addInt])
The upshot of this optimization is that at the call site no lookup is required as the dictionary is of the precise type.