(Dynamic) Multiple Dispatch in Julia

Multiple dispatch can be thought of as a generalization of object-oriented (OO) programming.

In a typical OO language like Python, an object type (class) owns certain methods (functions), which are typically called via

object.method(arg1, arg2)

Depending on the type of object, the runtime system will dispatch to different method definitions.

In Julia, the same call would be "spelled" differently:

method(object, arg1, arg2)

Spelled this way, you should notice something odd about OO programming: why is the first argument so special?

Traditional OO programming corresponds to single dispatch: the runtime chooses the method based on the type of the first argument only. Julia implements multiple dispatch: the runtime chooses the method based on the types of all the arguments.
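As a minimal illustration (the `describe` function below is invented for this example, not part of Julia), the method chosen depends on the types of both arguments:

```julia
# A toy generic function whose methods dispatch on the types of
# *both* arguments, not just the first:
describe(a::Int, b::Int) = "two integers"
describe(a::Int, b::String) = "an integer and a string"
describe(a::String, b::Int) = "a string and an integer"

describe(3, 4)        # "two integers"
describe(3, "four")   # "an integer and a string"
describe("three", 4)  # "a string and an integer"
```

No single argument "owns" these methods; all three belong to the one generic function `describe`.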

Example: Binary mathematical operators

A classic example of the need for multiple dispatch is the case of binary math operators. If you compute x * y, the definition of the * function depends on both arguments, not just on x.

Julia defines many versions of the * function:


In [1]:
methods(*)


Out[1]:
115 methods for generic function *:
  • *(x::Bool,y::Bool) at bool.jl:41
  • *{T<:Unsigned}(x::Bool,y::T<:Unsigned) at bool.jl:56
  • *(x::Bool,z::Complex{T<:Real}) at complex.jl:116
  • *{T<:Number}(x::Bool,y::T<:Number) at bool.jl:52
  • *(z::Complex{T<:Real},x::Bool) at complex.jl:117
  • *(y::Number,x::Bool) at bool.jl:58
  • *{T,S}(A::Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2}),B::Union(DenseArray{S,2},SubArray{S,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)})) at linalg/matmul.jl:116
  • *(A::Union(Hermitian{T},Symmetric{T}),B::Union(Hermitian{T},Symmetric{T})) at linalg/symmetric.jl:35
  • *(A::Union(Hermitian{T},Symmetric{T}),B::Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2})) at linalg/symmetric.jl:36
  • *(A::Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2}),B::Union(Hermitian{T},Symmetric{T})) at linalg/symmetric.jl:37
  • *{T<:Union(Float64,Complex{Float32},Float32,Complex{Float64}),S<:Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2}),UpLo,IsUnit}(A::Triangular{T<:Union(Float64,Complex{Float32},Float32,Complex{Float64}),S<:Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2}),UpLo,IsUnit},B::Triangular{T<:Union(Float64,Complex{Float32},Float32,Complex{Float64}),S<:Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2}),UpLo,IsUnit}) at linalg/triangular.jl:23
  • *(A::Tridiagonal{T},B::Triangular{T,S<:AbstractArray{T,2},UpLo,IsUnit}) at linalg/triangular.jl:206
  • *{TA,TB,SA<:AbstractArray{T,2},SB<:AbstractArray{T,2},UpLoA,UpLoB,IsUnitA,IsUnitB}(A::Triangular{TA,SA<:AbstractArray{T,2},UpLoA,IsUnitA},B::Triangular{TB,SB<:AbstractArray{T,2},UpLoB,IsUnitB}) at linalg/triangular.jl:209
  • *(Da::Diagonal{T},Db::Diagonal{T}) at linalg/diagonal.jl:53
  • *(A::Union(SymTridiagonal{T},Tridiagonal{T},Diagonal{T},Bidiagonal{T},Triangular{T,S<:AbstractArray{T,2},UpLo,IsUnit}),B::Union(SymTridiagonal{T},Tridiagonal{T},Diagonal{T},Bidiagonal{T},Triangular{T,S<:AbstractArray{T,2},UpLo,IsUnit})) at linalg/bidiag.jl:114
  • *{T<:Union(Int16,Int8,Int32)}(x::T<:Union(Int16,Int8,Int32),y::T<:Union(Int16,Int8,Int32)) at int.jl:18
  • *{T<:Union(Uint8,Uint16,Uint32)}(x::T<:Union(Uint8,Uint16,Uint32),y::T<:Union(Uint8,Uint16,Uint32)) at int.jl:22
  • *(x::Int64,y::Int64) at int.jl:47
  • *(x::Uint64,y::Uint64) at int.jl:48
  • *(x::Int128,y::Int128) at int.jl:586
  • *(x::Uint128,y::Uint128) at int.jl:587
  • *(x::Float32,y::Float32) at float.jl:123
  • *(x::Float64,y::Float64) at float.jl:124
  • *(z::Complex{T<:Real},w::Complex{T<:Real}) at complex.jl:112
  • *(x::Real,z::Complex{T<:Real}) at complex.jl:118
  • *(z::Complex{T<:Real},x::Real) at complex.jl:119
  • *(x::Rational{T<:Integer},y::Rational{T<:Integer}) at rational.jl:118
  • *(a::Float16,b::Float16) at float16.jl:132
  • *(x::BigInt,y::BigInt) at gmp.jl:195
  • *(a::BigInt,b::BigInt,c::BigInt) at gmp.jl:218
  • *(a::BigInt,b::BigInt,c::BigInt,d::BigInt) at gmp.jl:224
  • *(a::BigInt,b::BigInt,c::BigInt,d::BigInt,e::BigInt) at gmp.jl:231
  • *(x::BigInt,c::Union(Uint8,Uint16,Uint64,Uint32)) at gmp.jl:265
  • *(c::Union(Uint8,Uint16,Uint64,Uint32),x::BigInt) at gmp.jl:269
  • *(x::BigInt,c::Union(Int16,Int8,Int32,Int64)) at gmp.jl:271
  • *(c::Union(Int16,Int8,Int32,Int64),x::BigInt) at gmp.jl:275
  • *(x::BigFloat,y::BigFloat) at mpfr.jl:149
  • *(x::BigFloat,c::Union(Uint8,Uint16,Uint64,Uint32)) at mpfr.jl:156
  • *(c::Union(Uint8,Uint16,Uint64,Uint32),x::BigFloat) at mpfr.jl:160
  • *(x::BigFloat,c::Union(Int16,Int8,Int32,Int64)) at mpfr.jl:164
  • *(c::Union(Int16,Int8,Int32,Int64),x::BigFloat) at mpfr.jl:168
  • *(x::BigFloat,c::Union(Float64,Float16,Float32)) at mpfr.jl:172
  • *(c::Union(Float64,Float16,Float32),x::BigFloat) at mpfr.jl:176
  • *(x::BigFloat,c::BigInt) at mpfr.jl:180
  • *(c::BigInt,x::BigFloat) at mpfr.jl:184
  • *(a::BigFloat,b::BigFloat,c::BigFloat) at mpfr.jl:255
  • *(a::BigFloat,b::BigFloat,c::BigFloat,d::BigFloat) at mpfr.jl:261
  • *(a::BigFloat,b::BigFloat,c::BigFloat,d::BigFloat,e::BigFloat) at mpfr.jl:268
  • *(x::MathConst{sym},y::MathConst{sym}) at constants.jl:23
  • *{T<:Number}(x::T<:Number,y::T<:Number) at promotion.jl:189
  • *(x::Number,y::Number) at promotion.jl:159
  • *{T<:Number}(x::T<:Number,D::Diagonal{T}) at linalg/diagonal.jl:50
  • *(s::Real,p::Vec2) at graphics.jl:64
  • *(s::Real,bb::BoundingBox) at graphics.jl:151
  • *(x::Number) at operators.jl:72
  • *{T<:Union(Float64,Complex{Float32},Float32,Complex{Float64}),S}(A::Union(SubArray{T<:Union(Float64,Complex{Float32},Float32,Complex{Float64}),2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T<:Union(Float64,Complex{Float32},Float32,Complex{Float64}),2}),x::Union(SubArray{S,1,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{S,1})) at linalg/matmul.jl:67
  • *(A::SymTridiagonal{T},B::Number) at linalg/tridiag.jl:59
  • *(A::Tridiagonal{T},B::Number) at linalg/tridiag.jl:249
  • *{T<:Union(Float64,Complex{Float32},Float32,Complex{Float64}),S<:Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2}),UpLo,IsUnit}(A::Triangular{T<:Union(Float64,Complex{Float32},Float32,Complex{Float64}),S<:Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2}),UpLo,IsUnit},B::Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},SubArray{T,1,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2},DenseArray{T,1})) at linalg/triangular.jl:24
  • *{T,S,UpLo,IsUnit}(A::Triangular{T,S,UpLo,IsUnit},x::Number) at linalg/triangular.jl:157
  • *{TvA,TiA}(X::Triangular{T,S<:AbstractArray{T,2},UpLo,IsUnit},A::SparseMatrixCSC{TvA,TiA}) at linalg/sparse.jl:129
  • *{T,S<:AbstractArray{T,2},UpLo,IsUnit}(A::Triangular{T,S<:AbstractArray{T,2},UpLo,IsUnit},B::Union(AbstractArray{T,2},AbstractArray{T,1})) at linalg/triangular.jl:210
  • *{TA,Tb}(A::Union(QRPackedQ{TA},QRCompactWYQ{TA}),b::Union(DenseArray{Tb,1},SubArray{Tb,1,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)})) at linalg/factorization.jl:277
  • *{TA,TB}(A::Union(QRPackedQ{TA},QRCompactWYQ{TA}),B::Union(DenseArray{TB,2},SubArray{TB,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)})) at linalg/factorization.jl:283
  • *{TA,TQ,N}(A::Union(SubArray{TA,N,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{TA,N}),Q::Union(QRCompactWYQ{TQ},QRPackedQ{TQ})) at linalg/factorization.jl:345
  • *(W::Woodbury{T},B::Union(SubArray{T,2,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},SubArray{T,1,A<:DenseArray{T,N},I<:(Union(Range{Int64},Int64)...,)},DenseArray{T,2},DenseArray{T,1})) at linalg/woodbury.jl:70
  • *{T<:Number}(D::Diagonal{T},x::T<:Number) at linalg/diagonal.jl:51
  • *(D::Diagonal{T},V::Array{T,1}) at linalg/diagonal.jl:54
  • *(A::Array{T,2},D::Diagonal{T}) at linalg/diagonal.jl:55
  • *(D::Diagonal{T},A::Array{T,2}) at linalg/diagonal.jl:56
  • *(A::Bidiagonal{T},B::Number) at linalg/bidiag.jl:108
  • *{T}(A::Bidiagonal{T},B::AbstractArray{T,1}) at linalg/bidiag.jl:119
  • *(B::BitArray{2},J::UniformScaling{T<:Number}) at linalg/uniformscaling.jl:68
  • *(S::SparseMatrixCSC{Tv,Ti<:Integer},J::UniformScaling{T<:Number}) at linalg/uniformscaling.jl:70
  • *{T}(G1::Givens{T},G2::Givens{T}) at linalg/givens.jl:197
  • *(G::Givens{T},B::BitArray{2}) at linalg/givens.jl:198
  • *{TBf,TBi}(G::Givens{T},B::SparseMatrixCSC{TBf,TBi}) at linalg/givens.jl:199
  • *(R::Union(Rotation{T},Givens{T}),A::AbstractArray{T,2}) at linalg/givens.jl:200
  • *{Tv,Ti}(A::SparseMatrixCSC{Tv,Ti},B::SparseMatrixCSC{Tv,Ti}) at linalg/sparse.jl:143
  • *{TvA,TiA,TvB,TiB}(A::SparseMatrixCSC{TvA,TiA},B::SparseMatrixCSC{TvB,TiB}) at linalg/sparse.jl:4
  • *{TvA,TiA}(A::SparseMatrixCSC{TvA,TiA},X::BitArray{1}) at linalg/sparse.jl:11
  • *{TvA,TiA}(A::SparseMatrixCSC{TvA,TiA},X::BitArray{2}) at linalg/sparse.jl:109
  • *{TA,S,Tx}(A::SparseMatrixCSC{TA,S},x::AbstractArray{Tx,1}) at linalg/sparse.jl:32
  • *(X::BitArray{1},A::SparseMatrixCSC{Tv,Ti<:Integer}) at linalg/sparse.jl:95
  • *{TvA,TiA,TX}(A::SparseMatrixCSC{TvA,TiA},X::AbstractArray{TX,2}) at linalg/sparse.jl:111
  • *{TvA,TiA}(X::BitArray{2},A::SparseMatrixCSC{TvA,TiA}) at linalg/sparse.jl:126
  • *{TX,TvA,TiA}(X::Tridiagonal{TX},A::SparseMatrixCSC{TvA,TiA}) at linalg/sparse.jl:128
  • *{T<:Number}(x::AbstractArray{T<:Number,N}) at abstractarray.jl:363
  • *(B::Number,A::SymTridiagonal{T}) at linalg/tridiag.jl:60
  • *(B::Number,A::Tridiagonal{T}) at linalg/tridiag.jl:250
  • *{T,S,UpLo,IsUnit}(x::Number,A::Triangular{T,S,UpLo,IsUnit}) at linalg/triangular.jl:167
  • *(B::Number,A::Bidiagonal{T}) at linalg/bidiag.jl:109
  • *(A::Number,B::AbstractArray{T,N}) at abstractarray.jl:367
  • *(A::AbstractArray{T,N},B::Number) at abstractarray.jl:368
  • *(s::String...) at string.jl:76
  • *{T,S<:AbstractArray{T,2},UpLo,IsUnit}(A::AbstractArray{T,2},B::Triangular{T,S<:AbstractArray{T,2},UpLo,IsUnit}) at linalg/triangular.jl:211
  • *{TX,TvA,TiA}(X::AbstractArray{TX,2},A::SparseMatrixCSC{TvA,TiA}) at linalg/sparse.jl:131
  • *{T,S}(A::AbstractArray{T,2},x::AbstractArray{S,1}) at linalg/matmul.jl:71
  • *{T1,T2}(X::AbstractArray{T1,1},A::SparseMatrixCSC{T2,Ti<:Integer}) at linalg/sparse.jl:99
  • *(A::AbstractArray{T,1},B::AbstractArray{T,2}) at linalg/matmul.jl:74
  • *(J1::UniformScaling{T<:Number},J2::UniformScaling{T<:Number}) at linalg/uniformscaling.jl:67
  • *(J::UniformScaling{T<:Number},B::BitArray{2}) at linalg/uniformscaling.jl:69
  • *{Tv,Ti}(J::UniformScaling{T<:Number},S::SparseMatrixCSC{Tv,Ti}) at linalg/uniformscaling.jl:71
  • *(A::AbstractArray{T,2},J::UniformScaling{T<:Number}) at linalg/uniformscaling.jl:72
  • *(J::UniformScaling{T<:Number},A::Union(AbstractArray{T,2},AbstractArray{T,1})) at linalg/uniformscaling.jl:73
  • *(x::Number,J::UniformScaling{T<:Number}) at linalg/uniformscaling.jl:75
  • *(J::UniformScaling{T<:Number},x::Number) at linalg/uniformscaling.jl:76
  • *{Tv<:Float64}(A::CholmodSparse{Tv<:Float64,Int32},B::CholmodSparse{Tv<:Float64,Int32}) at linalg/cholmod.jl:496
  • *{Tv<:Float64}(A::CholmodSparse{Tv<:Float64,Int64},B::CholmodSparse{Tv<:Float64,Int64}) at linalg/cholmod.jl:496
  • *{Tv<:Union(Float64,Complex{Float64})}(A::CholmodSparse{Tv<:Union(Float64,Complex{Float64}),Ti<:Union(Int64,Int32)},B::CholmodDense{Tv<:Union(Float64,Complex{Float64})}) at linalg/cholmod.jl:870
  • *{Tv<:Union(Float64,Complex{Float64})}(A::CholmodSparse{Tv<:Union(Float64,Complex{Float64}),Ti<:Union(Int64,Int32)},B::Union(Array{Tv<:Union(Float64,Complex{Float64}),1},Array{Tv<:Union(Float64,Complex{Float64}),2})) at linalg/cholmod.jl:871
  • *(p::Vec2,s::Real) at graphics.jl:62
  • *(bb::BoundingBox,s::Real) at graphics.jl:147
  • *(a,b,c) at operators.jl:82
  • *(a,b,c,xs...) at operators.jl:83

We can add new methods to a given function at any time. The methods don't "belong" to a particular type, and aren't part of the type's definition.

For example, string concatenation in Julia is done via *:


In [2]:
"hello" * "world"


Out[2]:
"helloworld"

In [3]:
"hello" + "world"


`+` has no method matching +(::ASCIIString, ::ASCIIString)
while loading In[3], in expression starting on line 1

But we can easily extend + to support concatenation of strings, if we want:


In [4]:
import Base.+ # we must import a function in order to extend it with new methods (rather than replace it)
+(x::String, y::String) = x * " " * y


Out[4]:
+ (generic function with 120 methods)

In [5]:
"hello" + "world"


Out[5]:
"hello world"

Not the same as C++ overloading: Dynamic vs. static

This may look a lot like function overloading in languages like C++. The difference is that C++'s overloading is static (= dispatch at compile-time), whereas Julia's overloading is dynamic (= dispatch at run-time), like OO polymorphism.
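A small sketch of what "dynamic" means here (the `kind` function is made up for illustration): the element types below are not known until run time, yet each call still dispatches to the right method:

```julia
kind(x::Number) = "a number"
kind(x::String) = "a string"

# A heterogeneous array: the type of each element is only known at run time,
# so the dispatch cannot be resolved statically as in C++ overloading.
items = Any[1, "two", 3.0]
[kind(v) for v in items]  # ["a number", "a string", "a number"]
```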

For example, now that we've defined +, we can use strings with any previously defined function that requires a + operation, like sum (summation):


In [6]:
sum(["The", "quick", "brown", "fox", "jumped", "over", "the", "lazy", "dog."])


Out[6]:
"The quick brown fox jumped over the lazy dog."

Type declarations are "function filters"

Type declarations are not required for performance: Julia automatically specializes a function on its argument types during compilation. Instead, they act like filters, letting us specify which method is used for which arguments.
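For example (the `halve` function is hypothetical), the declarations below don't make the code faster; they just route different argument types to different definitions:

```julia
halve(x::Integer) = div(x, 2)  # exact integer division for integers
halve(x::Number)  = x / 2      # fallback for all other numbers

halve(7)    # 3
halve(7.0)  # 3.5
```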

Without such type-based filtering, in a language like Python you sometimes have to write the argument dispatch by hand, as in this example from Matplotlib's quiver.py:

def _parse_args(*args):
    X, Y, U, V, C = [None] * 5
    args = list(args)
    # The use of atleast_1d allows for handling scalar arguments while also
    # keeping masked arrays
    if len(args) == 3 or len(args) == 5:
        C = np.atleast_1d(args.pop(-1))
    V = np.atleast_1d(args.pop(-1))
    U = np.atleast_1d(args.pop(-1))
    if U.ndim == 1:
        nr, nc = 1, U.shape[0]
    else:
        nr, nc = U.shape
    if len(args) == 2: # remaining after removing U,V,C
        X, Y = [np.array(a).ravel() for a in args]
        if len(X) == nc and len(Y) == nr:
            X, Y = [a.ravel() for a in np.meshgrid(X, Y)]
    else:
        indexgrid = np.meshgrid(np.arange(nc), np.arange(nr))
        X, Y = [np.ravel(a) for a in indexgrid]
    return X, Y, U, V, C

In Julia, you could instead define separate methods for different numbers of arguments, for arrays versus scalars, and so on, all eventually calling a single lower-level function to do the work once the arguments have been transformed.
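A hedged sketch of what that might look like (these names are invented for illustration and are not Matplotlib's or any real package's API):

```julia
# The single lower-level worker that all methods funnel into
# (a stand-in here; it would do the real work):
_parse_args(X, Y, U, V, C) = (X, Y, U, V, C)

# Each calling convention gets its own small method instead of an if/else chain:
parse_args(U::AbstractMatrix, V::AbstractMatrix) =
    _parse_args(collect(1:size(U, 2)), collect(1:size(U, 1)), U, V, nothing)
parse_args(X::AbstractVector, Y::AbstractVector,
           U::AbstractMatrix, V::AbstractMatrix) =
    _parse_args(X, Y, U, V, nothing)
parse_args(X, Y, U, V, C) = _parse_args(X, Y, U, V, C)
```

The dispatch logic that quiver.py encodes in `len(args)` checks becomes a set of small, separately readable method signatures.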