Adapted from previous notebooks/slides prepared by Chase Coleman and Spencer Lyon
The goal of this notebook is not to make you an expert Julia programmer -- However, we do hope it helps you see why Julia can be powerful and encourages you to try Julia in your own use cases
These instructions are mostly just a summary of the information included on
https://lectures.quantecon.org/jl/getting_started.html
Please see that page for more details
If you run into any trouble, please raise your hand and ask for help!
Julia comes with an automated package management system.
Packages register themselves with Julia's central package registry.
To become a registered package, a package must meet some basic requirements, such as having tests, a description of its dependencies, and a descriptive README file.
Installation via Pkg.add("PackageName")
Uninstall via Pkg.rm("PackageName")
Update via Pkg.update()
A list of all registered packages (ordered by how many GitHub stars they have) can be found at: https://juliaobserver.com/packages
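The commands above can be run as a script as well. In Julia 1.x they live in the `Pkg` standard library (or the `]` REPL mode); a minimal sketch, using `Distributions` as an example package name (the mutating calls are commented out because they download/modify packages):

```julia
using Pkg                    # Julia 1.x: package commands live in the Pkg stdlib

# Pkg.add("Distributions")   # install a registered package (downloads code)
# Pkg.rm("Distributions")    # uninstall it again
# Pkg.update()               # update all installed packages

Pkg.status()                 # list the packages in the active environment
```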
Packages that we find particularly useful
We have a separate notebook that will install many of these packages -- We will need them for Victoria's presentation. Please open this notebook and click Cell > Run All
In [ ]:
# *1                                *2             *3
function bisect(f, a, b, maxit=100, tol::Float64=1e-9)
    fa, fb = f(a), f(b)

    # *4        *5
    for it in 1:maxit
        mid = (a + b)/2
        fmid = f(mid)

        # *6
        if abs(fmid) < tol
            # *7
            return mid
        end

        if fa*fmid > 0
            fa, a = fmid, mid
        else
            fb, b = fmid, mid
        end
    end

    # *8
    error("maximum iterations exceeded")
end
New syntax introduced above (matching the numbered comments):

1. `function ... end` -- function definition
2. `(..., arg=default_value)` -- default argument values
3. `(..., arg::Type)` -- argument type annotations
4. `for X (in|=) SOMETHING ... end` -- for loops
5. `A:B` -- ranges (not dense like Matlab)
6. `if CONDITION BLOCK end` -- conditionals
7. `return STUFF` -- optional (see next example)
8. `error(MESSAGE)` -- raise an error
In [ ]:
# shorthand function syntax
f(x) = x^2 - 2

# longer syntax -- equivalent to above
function f2(x)
    x^2 - 2
end

# even longer syntax -- still equivalent to above
function f3(x)
    return x^2 - 2
end
In [ ]:
println(bisect(f, -5.0, 2.0))
println(bisect(f2, -5.0, 2.0))
println(bisect(f3, -5.0, 2.0))
In [ ]:
for obj in [1.0, 1, "foo", "φ", Int8(4), Float64, true]
    # Notice string interpolation syntax `$`
    println("$obj is a $(typeof(obj))")
end
In [ ]:
A = [0, 1, 2, 3]
A
What does the `{Int64,1}` stand for? These are type parameters.
Type parameters are most useful for writing generic code -- For now, it is enough to understand what they are; you will find more uses for them as your Julia code becomes more "julian"
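For example, the printed type `Array{Int64,1}` carries two parameters: the element type (`Int64`) and the number of dimensions (`1`). A quick check:

```julia
A = [0, 1, 2, 3]

typeof(A)                          # Array{Int64,1} (shown as Vector{Int64} on Julia >= 1.6)
eltype(A)                          # Int64 -- the first type parameter
ndims(A)                           # 1     -- the second type parameter
Vector{Int64} === Array{Int64,1}   # true: Vector is just an alias for 1-d Array
```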
Now that we understand how to specialize functions on the types of their arguments, it is important to learn how to create our own types.
You should define your own types freely -- Julia's JIT compilation ensures that user-defined types have the same performance as the built-in types.
Example using stochastic processes below
In [ ]:
abstract type Exog end
"""
This type holds a Markov chain which consists of
* stochastic matrix (Π)
* state values (vals)
* intial distribution (x0)
"""
struct MarkovChain{T} <: Exog
Π::Matrix{Float64}
vals::Vector{T}
x0::Vector{Float64}
end
# functions to give `vals` and `x0` default arguments
MarkovChain(Π, v) = MarkovChain(Π, v, fill(1/length(v), length(v)))
MarkovChain(Π) = MarkovChain(Π, collect(1:size(Π, 1)))
In [ ]:
P = [0.5 0.5; 0.5 0.5]
x0 = [0.5, 0.5]
mc1 = MarkovChain(P, ["Good", "Bad"], x0)
mc2 = MarkovChain(P, [1, 2], x0)
In [ ]:
MarkovChain(P)
In [ ]:
mc1
In [ ]:
mc2
In [ ]:
println(typeof(1.0))
println(supertype(typeof(1.0)))
println(supertype(supertype(typeof(1.0))))
println(supertype(supertype(supertype(typeof(1.0)))))
In [ ]:
println(typeof(1))
println(supertype(typeof(1)))
println(supertype(supertype(typeof(1))))
println(supertype(supertype(supertype(typeof(1)))))
println(supertype(supertype(supertype(supertype(typeof(1))))))
In [ ]:
println(typeof(1//2))
println(supertype(typeof(1//2)))
println(supertype(supertype(typeof(1//2))))
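The repeated `supertype` calls above can be wrapped in a small helper that walks all the way up the type tree (the name `show_chain` is ours, not part of Base):

```julia
# walk up the type tree until we hit `Any`, the root of Julia's type graph
function show_chain(T::Type)
    while T !== Any
        print(T, " <: ")
        T = supertype(T)
    end
    println("Any")
end

show_chain(Float64)          # Float64 <: AbstractFloat <: Real <: Number <: Any
show_chain(Rational{Int64})  # Rational{Int64} <: Real <: Number <: Any
```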
"""
This type holds parameters for an AR(1) of the form
y_{t+1} = rho * y_{t} + sigma * eps
where eps ~ N(0, 1)
"""
struct AR1 <: Exog
rho::Float64 # you should put types on the fields of your types
sigma::Float64
end
One of the core benefits of an object-oriented programming language is single dispatch: the ability to specialize a function based on the type of its first argument.
Julia (and a few other recent languages) extends this concept to all arguments of a function; this is called multiple dispatch.
It is easiest to understand by seeing it in action.
In [ ]:
g(x) = "I have something"
g(x::Int) = "I have an integer"
g(x::Float64) = "I have a float"
g(x::Number) = "I have some kind of number"
g(x::Array) = "I have an array"
for x in ("hello", 1, 1.0, 1//2, [1, 2, 3])
@printf "%-12s%s\n" "g($x)" g(x)
end
In [ ]:
g(x, y) = "I have two things"
g(x::Int, y) = "I have an integer and something else"
g(x::Int, y::Number) = "I have an integer and a number"
g(x::Int, y::Int) = "I have two integers"
g(x::Array, y::Array) = "I have two arrays"
g(x::Array{Float64}, y::Array{Float64}) = "I have two arrays that have floats"
In [ ]:
stuff = (("x", "y"), (1, "x"), (1, 1//2),
         (1, 2), (1, 2.0), ([1], [2]),
         ([1.0], [2.0]))

for (x1, x2) in stuff
    @printf "%-18s%s\n" "g($x1, $x2)" g(x1, x2)
end
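You can inspect the full dispatch table for a function with `methods`. A small self-contained sketch (we define a fresh function `h` so it does not collide with `g` above):

```julia
h(x) = "anything"
h(x::Int) = "an integer"
h(x::Int, y::Int) = "two integers"

length(methods(h))   # 3 -- one entry per method definition

# Julia always calls the most specific method that matches the arguments
h(1.5)     # "anything"      -- 1.5 is a Float64, only the generic method matches
h(1)       # "an integer"
h(1, 2)    # "two integers"
```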
In [ ]:
using Distributions
In [ ]:
rv_gamma = Gamma(2.0, 2.0)
In [ ]:
rv_gamma
In [ ]:
using Random
Random.seed!(42)  # called srand(42) before Julia 1.0
In [ ]:
rand()
In [ ]:
rand(rv_gamma)
In [ ]:
rand(5)
In [ ]:
rand(rv_gamma, 5)
In [ ]:
function simulate(rho::Float64, d::Distribution; capT=500)
    x = Vector{Float64}(undef, capT)
    x[1] = 0.0
    for t in 2:capT
        x[t] = rho*x[t-1] + rand(d)
    end
    return x
end
In [ ]:
simulate(0.9, Normal(0.0, 0.1))
In [ ]:
simulate(0.9, Beta(4.0, 7.0))
In [ ]:
function iter(mc::MarkovChain, s::Int)
    ind = searchsortedfirst(cumsum(vec(mc.Π[s, :])), rand())
    return mc.vals[ind]
end

iter(mc::MarkovChain{T}, v::T) where {T} = iter(mc, findfirst(isequal(v), mc.vals))

# fallback so every Exog subtype is reminded to implement `iter`
iter(::Exog, x) = error("iter should be implemented by each Exog subtype")

iter(ar1::AR1, x) = ar1.rho*x + ar1.sigma*randn()
In [ ]:
state_type(ar1::AR1) = Float64
state_type(mc::MarkovChain) = eltype(mc.vals)

function simulate(s::T, x0; capT=1000) where {T<:Exog}
    # Allocate memory
    out = Vector{state_type(s)}(undef, capT)

    # Simulate
    out[1] = x0
    for t in 2:capT
        out[t] = iter(s, out[t-1])
    end

    return out
end
In [ ]:
ar1 = AR1(0.9, 0.1)
@time simulate(ar1, 0.0; capT=10_000_000);
In [ ]:
simulate(ar1, 0.0; capT=5000)
In [ ]:
mc = MarkovChain([0.75 0.25; 0.25 0.75], [1.0, 2.0])
@time simulate(mc, mc.vals[1]; capT=10_000_000);
In [ ]: