This session

  • Practical
  • Comparison to other languages
  • Multidimensional arrays
  • Types

Practical

Choose which topic you want to present.

If you want to discuss suitable topics, use the GitHub forum! (Or send me an email.)

https://github.com/rasmushenningsson/julia-study-circle

Comparison to other languages

Comments?

My view

  • Julia is more precise, i.e. Julia will give you an error rather than the wrong result. More on this later.
  • Julia is more consistent.
  • Julia code is easier to read.

Examples

Here are two examples where programming languages try to be too smart.

R

diag(c(8,9,10))

gives the result $$ \begin{pmatrix} 8 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 10 \\ \end{pmatrix} $$ and

diag(c(8,9))

gives $$ \begin{pmatrix} 8 & 0 \\ 0 & 9 \\ \end{pmatrix}. $$ But

diag(c(8))

gives $$ \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix}. $$

Matlab

diag([1 2 3; 4 5 6])

gives $$ \begin{pmatrix} 1 \\ 5 \\ \end{pmatrix}, $$ but

diag([1 2 3])

gives $$ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \\ \end{pmatrix}. $$

Both of these behaviors are quite OK when using R/Matlab interactively. But when such code is hidden deep inside layers of functions, the guessing causes much more trouble than it's worth.

Julia solves this particular problem by having two different functions, each defining a clear idea, or protocol.


In [1]:
?diag


search: diag diagm diagind Diagonal isdiag spdiagm Bidiagonal blkdiag

Out[1]:
diag(M[, k])

The kth diagonal of a matrix, as a vector. Use diagm to construct a diagonal matrix.


In [2]:
?diagm


search: diagm spdiagm diag diagind Diagonal isdiag Bidiagonal blkdiag

Out[2]:
diagm(v[, k])

Construct a diagonal matrix and place v on the kth diagonal.
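
This notebook predates Julia 1.0, where `diagm` had the signature `diagm(v[, k])`. In current Julia (1.x, with the `LinearAlgebra` standard library) `diagm` takes `k => v` pairs instead, but the division of labor is the same; a minimal sketch:

```julia
using LinearAlgebra

A = [1 2 3; 4 5 6; 7 8 9]

# diag only ever extracts a diagonal, as a vector:
diag(A)             # [1, 5, 9]

# diagm only ever constructs a matrix (pair form: diagonal index => values):
diagm(0 => [8, 9])  # 2x2 matrix with 8, 9 on the diagonal

# The one-element case stays unambiguous, unlike R's diag:
diagm(0 => [8])     # 1x1 matrix containing 8
```

Because each function implements exactly one idea, the one-element case cannot silently change meaning.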

Multidimensional arrays

Construction

1-dimensional arrays are column vectors.


In [3]:
v = [9, 8, 7]


Out[3]:
3-element Array{Int64,1}:
 9
 8
 7

Matrices can be constructed in many ways.


In [4]:
A = [1 2 3; 4 5 6; 7 8 9]


Out[4]:
3x3 Array{Int64,2}:
 1  2  3
 4  5  6
 7  8  9

In [5]:
zeros(3,3)


Out[5]:
3x3 Array{Float64,2}:
 0.0  0.0  0.0
 0.0  0.0  0.0
 0.0  0.0  0.0

In [6]:
ones(2,4)


Out[6]:
2x4 Array{Float64,2}:
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0

In [7]:
eye(3)


Out[7]:
3x3 Array{Float64,2}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0

In [8]:
rand(1,2)


Out[8]:
1x2 Array{Float64,2}:
 0.593645  0.700832

In [9]:
rand(1:10,3,4)


Out[9]:
3x4 Array{Int64,2}:
  5  2  3  10
 10  8  6   8
 10  2  7   6

In [10]:
randn(4,2)


Out[10]:
4x2 Array{Float64,2}:
 -0.498497  -2.20584 
  0.559886   1.54789 
  0.797429  -1.07115 
  0.100713   0.165639

In [11]:
A1 = [1 2; 3 4]


Out[11]:
2x2 Array{Int64,2}:
 1  2
 3  4

In [12]:
A2 = [1.0 2.0; 3.0 4.0]


Out[12]:
2x2 Array{Float64,2}:
 1.0  2.0
 3.0  4.0

In [13]:
zeros(A1)


Out[13]:
2x2 Array{Int64,2}:
 0  0
 0  0

In [14]:
zeros(A2)


Out[14]:
2x2 Array{Float64,2}:
 0.0  0.0
 0.0  0.0

Array comprehensions

Syntax:

[ F(x) for x=... ]

or

[ F(x,y) for y=..., x=... ]

and likewise for higher dimensions.


In [15]:
C = [ sin(i)+cos(j) for j=-5:5, i=-10:10 ]


Out[15]:
11x21 Array{Float64,2}:
  0.827683  -0.128456  -0.705696   -0.373324  …   0.695781    -0.260359  
 -0.109623  -1.06576   -1.643      -1.31063      -0.241525    -1.19766   
 -0.445971  -1.40211   -1.97935    -1.64698      -0.577874    -1.53401   
  0.127874  -0.828265  -1.40551    -1.07313      -0.00402835  -0.960168  
  1.08432    0.128184  -0.449056   -0.116684      0.952421    -0.00371881
  1.54402    0.587882   0.0106418   0.343013  …   1.41212      0.455979  
  1.08432    0.128184  -0.449056   -0.116684      0.952421    -0.00371881
  0.127874  -0.828265  -1.40551    -1.07313      -0.00402835  -0.960168  
 -0.445971  -1.40211   -1.97935    -1.64698      -0.577874    -1.53401   
 -0.109623  -1.06576   -1.643      -1.31063      -0.241525    -1.19766   
  0.827683  -0.128456  -0.705696   -0.373324  …   0.695781    -0.260359  
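
A comprehension with three iterators builds a 3-dimensional array; a small sketch (the names and the formula are illustrative):

```julia
# T[i,j,k] = i + 10j + 100k; the first iterator varies along dimension 1.
T = [i + 10j + 100k for i = 1:2, j = 1:3, k = 1:4]

size(T)      # (2, 3, 4)
T[2, 3, 4]   # 2 + 30 + 400 = 432
```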

Concatenation


In [16]:
B1 = ones(2,3)


Out[16]:
2x3 Array{Float64,2}:
 1.0  1.0  1.0
 1.0  1.0  1.0

In [17]:
B2 = 200*ones(2,3)


Out[17]:
2x3 Array{Float64,2}:
 200.0  200.0  200.0
 200.0  200.0  200.0

Build matrix by blocks:


In [18]:
[B1 -B2; -B1 B2]


Out[18]:
4x6 Array{Float64,2}:
  1.0   1.0   1.0  -200.0  -200.0  -200.0
  1.0   1.0   1.0  -200.0  -200.0  -200.0
 -1.0  -1.0  -1.0   200.0   200.0   200.0
 -1.0  -1.0  -1.0   200.0   200.0   200.0

Indexing

$$\large X = A[I_1, I_2, I_3, \dots, I_n]$$

where each $I_k$ may be:

  • A scalar integer
  • A range (a:b or a:s:b)
  • : (a colon)
  • A vector of integers
  • A vector of booleans

In [19]:
A = [1 2 3; 4 5 6; 7 8 9]


Out[19]:
3x3 Array{Int64,2}:
 1  2  3
 4  5  6
 7  8  9

In [20]:
A[2,3]


Out[20]:
6

In [21]:
A[1:2,2]


Out[21]:
2-element Array{Int64,1}:
 2
 5

In [22]:
A[2,:]


Out[22]:
1x3 Array{Int64,2}:
 4  5  6

In [23]:
A[[3, 2], :]


Out[23]:
2x3 Array{Int64,2}:
 7  8  9
 4  5  6

In [24]:
A[:, [false true true]]


Out[24]:
3x2 Array{Int64,2}:
 2  3
 5  6
 8  9

Maximum and minimum

Elementwise:

min(A, B)
max(A, B)

In [25]:
B = rand(1:10,size(A))


Out[25]:
3x3 Array{Int64,2}:
 5   6  1
 1   9  9
 2  10  3

In [26]:
max(A,B)


Out[26]:
3x3 Array{Int64,2}:
 5   6  3
 4   9  9
 7  10  9

Find the minimum/maximum in an array:

minimum(A, dims)
maximum(A, dims)

In [27]:
maximum(B,1)


Out[27]:
1x3 Array{Int64,2}:
 5  10  9

In [28]:
maximum(B,2)


Out[28]:
3x1 Array{Int64,2}:
  6
  9
 10
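
For readers on current Julia (1.x): elementwise `max(A, B)` is now spelled `max.(A, B)`, and the reductions take a `dims` keyword instead of a positional argument. A sketch reproducing the results above, with the random matrices hard-coded:

```julia
A = [1 2 3; 4 5 6; 7 8 9]
B = [5 6 1; 1 9 9; 2 10 3]   # the B drawn above, hard-coded for reproducibility

max.(A, B)            # elementwise maximum (dot syntax in Julia 1.x)
maximum(B, dims=1)    # 1x3 row of column maxima: [5 10 9]
maximum(B, dims=2)    # 3x1 column of row maxima
```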

Arithmetic


In [29]:
A = rand(1:10,3,3)


Out[29]:
3x3 Array{Int64,2}:
 8  10  1
 5   8  2
 4  10  6

In [30]:
A * eye(A)


Out[30]:
3x3 Array{Int64,2}:
 8  10  1
 5   8  2
 4  10  6

Elementwise operations

Putting a . in front of the operator will apply it elementwise.


In [31]:
A .* eye(A)


Out[31]:
3x3 Array{Int64,2}:
 8  0  0
 0  8  0
 0  0  6

Let


In [32]:
A = rand(0:1,3,3)


Out[32]:
3x3 Array{Int64,2}:
 1  1  0
 1  1  1
 0  0  1

In [33]:
B = rand(0:1,3,3)


Out[33]:
3x3 Array{Int64,2}:
 1  1  1
 0  1  1
 1  0  1

And compare


In [34]:
A==B


Out[34]:
false

with


In [35]:
A.==B


Out[35]:
3x3 BitArray{2}:
  true  true  false
 false  true   true
 false  true   true

Broadcasting

Broadcasting extends the idea of working elementwise by expanding singleton dimensions.


In [36]:
a = [1 2 3]


Out[36]:
1x3 Array{Int64,2}:
 1  2  3

In [37]:
b = [1,3,5]


Out[37]:
3-element Array{Int64,1}:
 1
 3
 5

In [38]:
a+b


LoadError: DimensionMismatch("dimensions must match")
while loading In[38], in expression starting on line 1

 in promote_shape at operators.jl:211

In [39]:
a.+b


Out[39]:
3x3 Array{Int64,2}:
 2  3  4
 4  5  6
 6  7  8

In [40]:
a.>=b


Out[40]:
3x3 BitArray{2}:
  true   true   true
 false  false   true
 false  false  false

In summary, the clear distinction between normal operators

  • +, -, *, /, \, ^, ==, etc.

and broadcasting operators

  • .+, .-, .*, ./, .\, .^, .==, etc.

makes the code easier to read and less prone to unexpected side effects.
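
Current Julia (1.x) generalizes this idea further: any function, not just operators, can be applied elementwise with dot syntax. A sketch, assuming Julia 1.x semantics:

```julia
a = [1 2 3]     # 1x3 row matrix
b = [1, 3, 5]   # length-3 column vector

a .+ b          # broadcasts to a 3x3 matrix
a .>= b         # elementwise comparison, a 3x3 BitMatrix
sin.(b)         # any function broadcasts with a trailing dot
```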

Types

Why are types important in Julia?

  1. To get fast code. The compiler has to know the types to generate efficient machine code; determining types at runtime is very slow.
  2. For multiple dispatch.

Syntax

Read :: as "is an instance of".


In [41]:
1+2::Integer


Out[41]:
3

In [42]:
1+2::AbstractFloat


LoadError: TypeError: typeassert: expected AbstractFloat, got Int64
while loading In[42], in expression starting on line 1

Most commonly used for function arguments.


In [43]:
f(a, x::Number) = length(a)*x


Out[43]:
f (generic function with 1 method)

In [44]:
f("abc",0.5)


Out[44]:
1.5

In [45]:
f([1,2,3],2)


Out[45]:
6

In [46]:
f("abc","def")


LoadError: MethodError: `f` has no method matching f(::ASCIIString, ::ASCIIString)
Closest candidates are:
  f(::Any, !Matched::Number)
while loading In[46], in expression starting on line 1

Julia will compile a different version of the function f for every combination of argument types that is encountered!

This minimizes the (slow) type inference that the program needs to do at runtime.


In [47]:
typeof(f("abc",0.5))


Out[47]:
Float64

In [48]:
typeof(f("abc",2))


Out[48]:
Int64

Example, simple for loops

Compare

for i=1:5
    print(i)
end

and

for i=[1 2 3 4 5]
    print(i)
end

The types are different


In [49]:
typeof(1:5)


Out[49]:
UnitRange{Int64}

In [50]:
typeof([1 2 3 4 5])


Out[50]:
Array{Int64,2}

In [51]:
rng = 1:5


Out[51]:
1:5

In [52]:
length(rng)


Out[52]:
5

In [53]:
rng[2]


Out[53]:
2

The range object behaves (mostly) like an array, but it represents the common abstraction of a list of consecutive integers. This improves performance while keeping the code readable.

Julia uses multiple dispatch to achieve this behavior.

Type Stability

Code is said to be type-stable if variables do not change type.

A function is said to be type-stable if the return type can be deduced from the input types.

A motivating example:


In [54]:
function array_sum(a)
    s = 0
    for x in a
        s += x
    end
    s
end


Out[54]:
array_sum (generic function with 1 method)

In [55]:
myArray = randn(100000000);

In [56]:
array_sum(myArray)


Out[56]:
7958.893068520493

In [57]:
@time array_sum(myArray)


 
Out[57]:
7958.893068520493
 4.185211 seconds (200.00 M allocations: 2.980 GB, 4.28% gc time)

In [58]:
function array_sum2(a)
    s = zero(eltype(a))
    for x in a
        s += x
    end
    s
end


Out[58]:
array_sum2 (generic function with 1 method)

In [59]:
array_sum2(myArray)


Out[59]:
7958.893068520493

In [60]:
@time array_sum2(myArray)


 
Out[60]:
7958.893068520493
 0.180536 seconds (5 allocations: 176 bytes)
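
A way to diagnose such problems: `@code_warntype` (available since the 0.4 era and in current Julia) prints the inferred types and highlights variables whose type the compiler cannot pin down. A self-contained sketch:

```julia
function array_sum(a)
    s = 0                  # Int literal: for Float64 input, s becomes a Union type
    for x in a
        s += x
    end
    s
end

function array_sum2(a)
    s = zero(eltype(a))    # matches the element type: type-stable
    for x in a
        s += x
    end
    s
end

# In the REPL, compare the inferred types (commented out here):
# @code_warntype array_sum(randn(10))    # `s` flagged as Union{Float64, Int64}
# @code_warntype array_sum2(randn(10))   # all types concrete
```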

Some examples

Which of the following functions are type-stable?


In [61]:
e(x) = x<0 ? 0 : 1
f(x) = x<0 ? -x : x
g(x) = x==0 ? 1 : sin(x)/x
h(x,y) = x<y ? x : y


Out[61]:
h (generic function with 1 method)

In [62]:
y = e(-1)
y, typeof(y)


Out[62]:
(0,Int64)

In [63]:
y = e(1.0)
y, typeof(y)


Out[63]:
(1,Int64)

In [64]:
typeof(f(-3)), typeof(f(3))


Out[64]:
(Int64,Int64)

In [65]:
y = g(1)
y, typeof(y)


Out[65]:
(0.8414709848078965,Float64)

In [66]:
y = g(1.0)
y, typeof(y)


Out[66]:
(0.8414709848078965,Float64)

In [67]:
y = g(0)
y, typeof(y)


Out[67]:
(1,Int64)

In [68]:
y = h(1,2.0)
y, typeof(y)


Out[68]:
(1,Int64)

In [69]:
y = h(2,1.0)
y, typeof(y)


Out[69]:
(1.0,Float64)
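
For reference, `e` and `g` above can be made type-stable with `zero` and `one`, which follow the argument's type; a sketch using the hypothetical names `e2` and `g2`:

```julia
# e returned Int 0/1 regardless of input; zero/one track the input type:
e2(x) = x < 0 ? zero(x) : one(x)

# g returned Int 1 at x == 0; one(sin(x)) has the same type as sin(x)/x:
g2(x) = x == 0 ? one(sin(x)) : sin(x) / x

typeof(e2(1.0))   # Float64, unlike e(1.0) which gave Int
typeof(g2(0))     # Float64, matching g2 at every other point
```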

Why should we write type-stable functions?

When a type-stable function is called, the compiler will know the return type and can avoid type checks. It also helps the calling function become type-stable.

Side effects of type stability

This works.


In [70]:
2^3


Out[70]:
8

But this doesn't.


In [71]:
2^-3


LoadError: DomainError:
Cannot raise an integer x to a negative power -n. 
Make x a float by adding a zero decimal (e.g. 2.0^-n instead of 2^-n), or write 1/x^n, float(x)^-n, or (x//1)^-n.
while loading In[71], in expression starting on line 1

 in power_by_squaring at intfuncs.jl:82

However, we can make it work:


In [72]:
2.0^-3


Out[72]:
0.125

In [73]:
float(2)^-3


Out[73]:
0.125

In [74]:
2^-3.0


Out[74]:
0.125

To make 2^-3 return 0.125, either

  • ^ would not be type-stable.

or

  • ^ would always return a float (for integer arguments).

Similar behavior for $\sqrt{}$.


In [75]:
sqrt(4)


Out[75]:
2.0

In [76]:
sqrt(-1)


LoadError: DomainError:
sqrt will only return a complex result if called with a complex argument. Try sqrt(complex(x)).
while loading In[76], in expression starting on line 1

 in sqrt at math.jl:146

In [77]:
sqrt(-1 + 0im)


Out[77]:
0.0 + 1.0im

In [78]:
sqrt(complex(-1))


Out[78]:
0.0 + 1.0im

This can also be helpful to track down bugs.

It's better to get an error message than getting unexpected complex numbers later in the program!

More on type-stability: https://www.youtube.com/watch?list=PLP8iPy9hna6Sdx4soiGrSefrmOPdUWixM&v=L0rx_Id8EKQ

Abstract Types

Julia has a type hierarchy. Only leaf types are concrete. (As opposed to object-oriented languages like C++ or Java, where a concrete class can also serve as a base class.)

Read <: as "is a subtype of"

abstract Number
abstract Real     <: Number
abstract AbstractFloat <: Real
abstract Integer  <: Real
abstract Signed   <: Integer
abstract Unsigned <: Integer
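
Note that this notebook predates Julia 1.0; current Julia writes these declarations as `abstract type ... end`. A sketch with hypothetical `My*` names, since the `Base` hierarchy above already exists:

```julia
# Current (1.0+) declaration syntax for a small hierarchy:
abstract type MyNumber end
abstract type MyReal <: MyNumber end

struct MyInt <: MyReal   # a concrete leaf type
    k::Int
end

MyInt <: MyNumber   # true: subtyping is transitive
```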

Useful when defining functions.


In [79]:
square(x::Number) = x*x


Out[79]:
square (generic function with 1 method)

In [80]:
square(2)


Out[80]:
4

In [81]:
square(1.5)


Out[81]:
2.25

In [82]:
square("abc")


LoadError: MethodError: `square` has no method matching square(::ASCIIString)
while loading In[82], in expression starting on line 1

Composite types

We can define our own types. They will be just as fast as the built-in types in Julia (if implemented correctly).


In [83]:
type Person
    name::AbstractString
    age::Int
end

Specifying the types of the fields is not necessary, but can improve performance for the same reasons as we have seen above.


In [84]:
p = Person("Anna Svensson", 42)


Out[84]:
Person("Anna Svensson",42)

In [85]:
p.name


Out[85]:
"Anna Svensson"

In [86]:
p.age


Out[86]:
42

Immutable types

Many simple types in Julia are immutable, meaning that instances cannot be modified after construction. This greatly helps the compiler generate fast code in many cases.
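
In current Julia (1.0+), the `type` keyword used above was renamed to `mutable struct`, and immutable types are declared with plain `struct`. A sketch with hypothetical names:

```julia
mutable struct PersonM        # `type` in the notebook's Julia version
    name::AbstractString
    age::Int
end

struct Point                  # `immutable` in the notebook's Julia version
    x::Float64
    y::Float64
end

p = PersonM("Anna Svensson", 42)
p.age += 1                    # fine: the struct is mutable

pt = Point(1.0, 2.0)
# pt.x = 3.0                  # would throw: struct fields cannot be reassigned
```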

Pass-by-sharing

Julia uses pass-by-sharing for function arguments.


In [87]:
a = [1,2,3,4]


Out[87]:
4-element Array{Int64,1}:
 1
 2
 3
 4

In [88]:
function dostuff!(v) 
    v[1] = 100
end


Out[88]:
dostuff! (generic function with 1 method)

In [89]:
dostuff!(a)


Out[89]:
100

In [90]:
a


Out[90]:
4-element Array{Int64,1}:
 100
   2
   3
   4

However


In [91]:
function dostuff2(v)
    v = -v
end


Out[91]:
dostuff2 (generic function with 1 method)

In [92]:
dostuff2(a)


Out[92]:
4-element Array{Int64,1}:
 -100
   -2
   -3
   -4

In [93]:
a


Out[93]:
4-element Array{Int64,1}:
 100
   2
   3
   4

Here -v creates a new array and the variable v is rebound to the new array. Hence we do not change the original array a.

What can we do with types?

Modulo Arithmetic example


In [94]:
import Base: +, -, *

Definition of type.


In [95]:
immutable ModInt{n} <: Integer
    k::Int
    ModInt(k) = new(mod(k,n))
end

Arithmetic operations.


In [96]:
-{n}(a::ModInt{n}) = ModInt{n}(-a.k)
+{n}(a::ModInt{n}, b::ModInt{n}) = ModInt{n}(a.k+b.k)
-{n}(a::ModInt{n}, b::ModInt{n}) = ModInt{n}(a.k-b.k)
*{n}(a::ModInt{n}, b::ModInt{n}) = ModInt{n}(a.k*b.k)


Out[96]:
* (generic function with 139 methods)

Conversion and promotions (more on these later).


In [97]:
Base.convert{n}(::Type{ModInt{n}}, i::Int) = ModInt{n}(i)
Base.promote_rule{n}(::Type{ModInt{n}}, ::Type{Int}) = ModInt{n}


Out[97]:
promote_rule (generic function with 125 methods)

How to print it.


In [98]:
Base.show{n}(io::IO, k::ModInt{n}) = print(io, "$(k.k) mod $n")
Base.showcompact(io::IO, k::ModInt) = print(io, k.k)


Out[98]:
showcompact (generic function with 8 methods)

Inversion.


In [99]:
Base.inv{n}(a::ModInt{n}) = ModInt{n}(invmod(a.k, n))


Out[99]:
inv (generic function with 29 methods)

Usage:


In [100]:
a = ModInt{11}(120)


Out[100]:
10 mod 11

In [101]:
b = ModInt{11}(987)


Out[101]:
8 mod 11

In [102]:
a+b


Out[102]:
7 mod 11

In [103]:
a+2


Out[103]:
1 mod 11
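
The `-{n}(...)` method syntax used above is pre-1.0; current Julia spells the same definitions with `where {n}`. A condensed sketch under the stand-in name `ModIntM` (only `+` and `*` shown):

```julia
import Base: +, *

struct ModIntM{n} <: Integer
    k::Int
    ModIntM{n}(k) where {n} = new(mod(k, n))
end

+(a::ModIntM{n}, b::ModIntM{n}) where {n} = ModIntM{n}(a.k + b.k)
*(a::ModIntM{n}, b::ModIntM{n}) where {n} = ModIntM{n}(a.k * b.k)
Base.show(io::IO, a::ModIntM{n}) where {n} = print(io, "$(a.k) mod $n")

a = ModIntM{11}(120)   # 10 mod 11
b = ModIntM{11}(987)   # 8 mod 11
(a + b).k              # 7, matching the output above
```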

Since a lot of functionality is built on top of the definitions above, we can do many more things!

Create matrices:


In [104]:
A = map(ModInt{13}, rand(1:1000,5,5))


Out[104]:
5x5 Array{ModInt{13},2}:
  7   6  10   4  0
  1  11   1  12  9
  8   9   7   8  1
 11   2   4   9  6
  7  10   2   1  0

In [105]:
B = map(ModInt{13}, rand(1:1000,5,5))


Out[105]:
5x5 Array{ModInt{13},2}:
 0   3   5  8   2
 5   1   9  7   5
 6  11  12  7   1
 3   6   4  9  12
 2   1   7  6   6

Operate on matrices:


In [106]:
A+B


Out[106]:
5x5 Array{ModInt{13},2}:
 7   9   2  12  2
 6  12  10   6  1
 1   7   6   2  2
 1   8   8   5  5
 9  11   9   7  6

In [107]:
A*B


Out[107]:
5x5 Array{ModInt{13},2}:
 11  5   4  9  11
 11  2   6  7   9
  9  3  10  7   1
  8  9   4  0  11
  0  7  10  6   0

In [108]:
A.*B


Out[108]:
5x5 Array{ModInt{13},2}:
 0   5  11  6  0
 5  11   9  6  6
 9   8   6  4  1
 7  12   3  3  7
 1  10   1  6  0

In [109]:
2A^3 - 3B + 2I


Out[109]:
5x5 Array{ModInt{13},2}:
  4   4   5  8  1
 12   1   2  6  5
 12   5   8  1  4
  6  11  10  1  6
  7  12   4  4  9

We can also use common functions:


In [110]:
sum(A)


Out[110]:
3 mod 13

In [111]:
cumsum(A[1,:])


Out[111]:
1x5 Array{ModInt{13},2}:
 7  6  10  4  0

Remember that we think of functions as ideas or protocols.

In 14 lines of code, we got a new type that is already very useful!

Conversion and Promotion

Many languages have rules for converting and promoting types. The difference is that in Julia these rules are exposed to the programmer.


In [112]:
1 + 3.2


Out[112]:
4.2

Let's break down what Julia is doing here.

+ is defined for Int+Int and for Float64+Float64. There is also this general definition for Numbers (which may be of different types):

+(x::Number, y::Number) = +(promote(x,y)...)

which is invoked above.

The function promote converts the arguments to a common type:


In [113]:
promote(1,3.2)


Out[113]:
(1.0,3.2)

Which in turn relies on


In [114]:
promote_type(Int,Float64)


Out[114]:
Float64

to find the common type and


In [115]:
convert(Float64,1)


Out[115]:
1.0

to do the conversion.

For our own types, we need to define

convert()

and

promote_rule()

as in the ModInt example above.

Some more examples

Matrix factorizations

Let's make a positive definite symmetric matrix.


In [116]:
N = 5


Out[116]:
5

In [117]:
A = randn(N,N)
A = A'A


Out[117]:
5x5 Array{Float64,2}:
  6.46199   -3.0187      0.738627   0.553245   -0.708259 
 -3.0187     4.53186     4.32275   -2.40945    -0.0890729
  0.738627   4.32275     7.49333   -3.12707    -0.626125 
  0.553245  -2.40945    -3.12707    1.55066    -0.0107372
 -0.708259  -0.0890729  -0.626125  -0.0107372   3.21402  

And compute the Cholesky factorization. (Every positive definite symmetric matrix $A$ has a decomposition $A=U^TU$.)


In [118]:
C = cholfact(A)


Base.LinAlg.Cholesky{Float64,Array{Float64,2}} with factor:
Out[118]:
5x5 UpperTriangular{Float64,Array{Float64,2}}:
 2.54204  -1.18751  0.290564   0.217638   -0.278618 
 0.0       1.76683  2.64191   -1.21743    -0.237677 
 0.0       0.0      0.655154   0.0397522   0.126308 
 0.0       0.0      0.0        0.139877   -1.74779  
 0.0       0.0      0.0        0.0         0.0957394

Note that variable $C$ represents the decomposition $U^TU$.


In [119]:
Y = randn(N)


Out[119]:
5-element Array{Float64,1}:
 -0.885421
  0.686463
  0.157924
 -0.8685  
 -0.883918

Now let's solve for $X$ in $AX=Y$.


In [120]:
A\Y


Out[120]:
5-element Array{Float64,1}:
  -3562.79 
  -9293.76 
    868.671
 -11424.8  
   -911.898

Now let's solve for $X$ in $U^TUX=Y$.


In [121]:
C\Y


Out[121]:
5-element Array{Float64,1}:
  -3562.79 
  -9293.76 
    868.671
 -11424.8  
   -911.898
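
In current Julia (1.x), `cholfact` was renamed to `cholesky` and lives in the `LinearAlgebra` standard library. A sketch with a small hard-coded positive definite matrix (the values are illustrative):

```julia
using LinearAlgebra

A = [4.0 2.0; 2.0 3.0]      # symmetric positive definite
C = cholesky(A)             # C.U is the upper triangular factor
Y = [1.0, 2.0]

A \ Y ≈ C \ Y               # true: both solve the same system
C.U' * C.U ≈ A              # true: A = UᵀU
```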

This way, we can work with a more efficient representation while keeping the code easy to read.

Conclusions/Remarks

Types are necessary to make the code run fast.

However, the Julia JIT compiler does much of the work for us. Many times, we don't need to specify types in our code and Julia will make it fast anyway.

High-level code doesn't need to bother too much with types.

Packages and Julia library functions tend to be well-written and fast. They will be fast even if there are some type problems in your high-level code.

If you write low-level, performance-critical code, type stability is key.

Types are necessary for multiple dispatch, which is absolutely central for code abstraction in Julia.

We should be aware of how types work in Julia, both to know when we need to care and to understand why certain things are designed the way they are.