As a JIT ("Just in Time")-compiled language, Julia is designed for good performance. Currently, it is usually expected that it should usually be able to reach speeds within at most a factor of 2 of that of corresponding C code.
However, to attain decent performance, there are certain principles that must be used in code; see the Performance tips section of the Julia manual for more details.
When profiling, always run each function once with the correct argument types before timing it, since the first time it is run the compilation time will play a large role.
In [1]:
Pkg.update()
In [2]:
@time sin(10)
Out[2]:
In [3]:
a = 3
Out[3]:
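To see the effect of compilation directly, a function can be timed twice in a row: the first call includes compilation time, the second measures only the run time. A minimal sketch (the function and data below are made-up examples, not part of the cells above):
In [ ]:
# Hypothetical example: time a freshly-defined function twice.
mysum(v) = sum(v)

v = rand(10^6)

@time mysum(v)   # first call: includes JIT compilation
@time mysum(v)   # second call: run time only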
Global variables are slow in Julia: do not use global variables!
Your main program should be wrapped in a function. Any time you are tempted to use globals, pass them as arguments to functions instead, and return them if necessary.
If you have many variables to pass around, wrap them in a type, e.g. one called State, as sketched below.
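A minimal sketch of this pattern (the names State and run! are hypothetical and the fields are just examples; older Julia versions spelled `mutable struct` as `type`):
In [ ]:
# Hypothetical sketch: bundle related variables in a type instead of using globals.
mutable struct State
    N::Int
    total::Float64
end

function run!(s::State)
    for i in 1:s.N
        s.total += i/2   # all work happens on function arguments and local variables
    end
    return s.total
end

s = State(10000, 0.0)
run!(s)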
The second important idea for gaining performance is that of type stability.
Any calculation is slowed down by variables that can change type during its execution, simply because of the extra work that must be done at run time to check the types of those variables. (This is one of the main reasons for the slowness of Python, and why Cython needs type declarations to gain speed.)
A simple example (due to Leah Hanson) is the following pair of almost-identical functions:
In [4]:
function sum1(N::Int)
    total = 0          # total starts as an Int...
    for i in 1:N
        total += i/2   # ...but i/2 is a Float64, so total changes type
    end
    total
end

function sum2(N::Int)
    total = 0.0        # total is a Float64 from the start
    for i in 1:N
        total += i/2   # total keeps the same type throughout the loop
    end
    total
end
Out[4]:
We must first run the functions once each to compile them, before looking at any timings:
In [5]:
sum1(10), sum2(10)
Out[5]:
[Happily, they produce the same result!]
In [6]:
N = 10000000
@time sum1(N)
@time sum2(N)
Out[6]:
The second version is consistently over 10 times faster than the first version, due simply to type stability. It also allocates almost no memory. The first version allocates an enormous amount of memory (in fact, it is allocating and deallocating all the time), and spends a large fraction of its time in garbage collection.
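The allocation difference can also be measured directly with the @allocated macro (a sketch, not part of the original notebook; the exact numbers depend on the Julia version):
In [ ]:
# Compare the heap allocation of the two versions (run after they have been compiled).
println(@allocated sum1(N))   # type-unstable: allocates on every iteration
println(@allocated sum2(N))   # type-stable: allocates essentially nothing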
To help with type stability, the functions zero(x) and one(x) return a zero and a one of the same type as the variable x:
Related packages for static checking: Lint.jl, TypeCheck.jl
In [2]:
x = 1
zero(x)
Out[2]:
In [7]:
y = 0.5
zero(y)
Out[7]:
In [8]:
x = BigFloat("0.1")
one(x)
Out[8]:
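For example, zero can be used to give the accumulator the correct type from the start, making the sum both generic and type-stable. A sketch (sum3 is a hypothetical name, not from the cells above):
In [ ]:
# Hypothetical sum3: the accumulator starts with the same type that i/2 produces,
# so total never changes type inside the loop.
function sum3(N::Int)
    total = zero(1/2)   # a Float64 zero, i.e. 0.0
    for i in 1:N
        total += i/2
    end
    total
end

sum3(10) == sum2(10)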
Julia gives us access to basically every step in the compilation process:
In [34]:
code_lowered(sum1, (Int,))
Out[34]:
In [35]:
code_lowered(sum2, (Int,))
Out[35]:
In [36]:
code_typed(sum1, (Int,))
Out[36]:
In [37]:
code_typed(sum2, (Int,))
Out[37]:
In [38]:
code_llvm(sum1, (Int, ))
In [39]:
code_native(sum1, (Int,))
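More recent Julia versions also provide macro forms of these functions, which take a function call instead of a type tuple; @code_warntype is particularly useful because it flags variables whose type cannot be inferred. (This assumes a newer Julia than the one used above; on Julia 1.0+ these macros live in the InteractiveUtils standard library.)
In [ ]:
# Macro forms (newer Julia; on 1.0+ run `using InteractiveUtils` first).
@code_typed sum1(10)
@code_warntype sum1(10)   # type-unstable variables are highlighted in the output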
Simple timing of a function may be achieved using the @time macro.
A detailed profile may be obtained using @profile.
A graphical view is available via the ProfileView.jl package.
In [10]:
@profile sum1(10000000)
Out[10]:
In [11]:
f(N) = sum1(N)
Out[11]:
In [12]:
@profile f(10000000)
Out[12]:
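The samples collected by @profile can then be inspected as a text call tree with Profile.print(), discarded with Profile.clear(), or viewed graphically with ProfileView.jl. A sketch (on Julia 1.0+ the Profile standard library must be loaded with `using Profile`, and ProfileView.jl must be installed separately):
In [ ]:
# Inspect the samples collected by the @profile calls above.
Profile.print()       # text view: call tree annotated with sample counts

# Profile.clear()     # discard collected samples before profiling something else

# Graphical flame graph, if ProfileView.jl is installed:
# using ProfileView
# ProfileView.view()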