This notebook shows the usage of BenchmarkLite.jl, a lightweight Julia package for performance benchmarking.
Suppose we want to compare the performance of several math functions (applied in batch to vectors). We can do this in several steps:
Like other packages, one can load a package with either `import` or `using`. Most of the methods in this package are extended from Julia Base. Hence, `import` should be good enough in typical cases. However, if you want to access types like `Proc` and `BenchmarkTable` more conveniently, you may use `using`.
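As a small illustration (not part of the original notebook), the two loading styles differ only in whether names from the package need to be qualified:

```julia
# With `import`, names from the package are accessed via qualification
import BenchmarkLite
BenchmarkLite.Proc          # the abstract procedure type

# With `using`, exported names are brought into scope directly
using BenchmarkLite
Proc                        # same type, unqualified
```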
The package is very lightweight, so it should load very quickly.
In [1]:
using BenchmarkLite
All procedures to be benchmarked should be defined as subtypes of `Proc`, an abstract type defined in the `BenchmarkLite` module. Several methods need to be defined for a procedure, and each procedure can be run under different configurations:

- `string(proc)`: a short name identifying the procedure (used when showing the benchmark table).
- `length(proc, cfg)`: the size of the problem under a given configuration. For example, if the procedure computes some function over `n` elements, this returns `n`.
- `isvalid(proc, cfg)`: whether the procedure can be run under the given configuration `cfg`.
- `s = start(proc, cfg)`: initializes the states needed to support the procedure (e.g. allocating necessary memory, or connecting to a database). This part is not counted in the run-time of the procedure.
- `run(proc, cfg, s)`: runs the procedure under the given configuration (together with the initialized states).
- `done(proc, cfg, s)`: de-initializes the run-time states (e.g. closes a file or database connection).

Note: all these methods are extended from Julia Base.
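To make the interface concrete before walking through the real example, here is a minimal sketch of a hypothetical do-nothing procedure (the name `NoOp` is illustrative and not part of the package):

```julia
# A hypothetical procedure that satisfies the Proc interface but does no work
type NoOp <: Proc end

Base.string(::NoOp) = "no-op"             # name shown in the benchmark table
Base.length(::NoOp, cfg::Int) = cfg       # problem size for this configuration
Base.isvalid(::NoOp, cfg::Int) = cfg > 0  # only positive sizes make sense
Base.start(::NoOp, cfg::Int) = nothing    # no state to set up
Base.run(::NoOp, cfg::Int, s) = nothing   # nothing to measure
Base.done(::NoOp, cfg::Int, s) = nothing  # nothing to clean up
```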
Now we define a `VecMath` subtype to represent the procedures:
In [2]:
type VecMath{Op} <: Proc end
Here, the type parameter `Op` can be `Sqrt`, `Exp`, etc., as defined below, to represent the calculation we want to perform on each scalar. Using types to represent functions allows the specific computation to be inlined without incurring runtime overhead.
In [3]:
type Sqrt end
calc(::Sqrt, x) = sqrt(x)
type Exp end
calc(::Exp, x) = exp(x)
type Log end
calc(::Log, x) = log(x);
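As a quick sanity check (not in the original notebook), `calc` dispatches on the singleton type to the corresponding scalar function:

```julia
calc(Sqrt(), 4.0)   # 2.0
calc(Exp(), 0.0)    # 1.0
calc(Log(), 1.0)    # 0.0
```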
Define procedure names:
In [4]:
Base.string{Op}(::VecMath{Op}) = string("vec-", lowercase("$Op"));
In [5]:
string(VecMath{Sqrt}())
Out[5]:
"vec-sqrt"
To exclude memory allocation time from the benchmark, we need to allocate arrays of specific sizes in advance and store them as the initialized states. In particular, we need two vectors, one for input and the other for output. We use `FVecPair` as a short name for such a pair of vectors:
In [6]:
typealias FVecPair (Vector{Float64},Vector{Float64});
The configuration is the vector length, which can simply be represented by an integer. Then we can define the procedures as follows:
In [8]:
Base.length(p::VecMath, n::Int) = n
Base.isvalid(p::VecMath, n::Int) = (n > 0)
Base.start(p::VecMath, n::Int) = (rand(n), zeros(n))
function Base.run{Op}(p::VecMath{Op}, n::Int, s::FVecPair)
    x, y = s
    op = Op()
    for i = 1:n
        @inbounds y[i] = calc(op, x[i])
    end
end
Base.done(p::VecMath, n, s) = nothing;
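Before handing these procedures to the benchmark driver, it can help to see how the pieces fit together. The following sketch (illustrative, with made-up variable names) drives one procedure by hand, roughly what the benchmark driver does for each run, minus the timing:

```julia
# Manually drive one procedure for one configuration
p = VecMath{Sqrt}()
n = 8
s = start(p, n)   # allocate the (input, output) vector pair
run(p, n, s)      # compute sqrt of each input element into the output
done(p, n, s)     # no cleanup needed for this procedure
```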
Collect all procedures into a `Proc`-vector:
In [9]:
procs = Proc[ VecMath{Sqrt}(),
VecMath{Exp}(),
VecMath{Log}() ];
Collect all configurations into an `Int`-vector:
In [10]:
cfgs = 2 .^ (4:10)
Out[10]:
7-element Array{Int64,1}:
   16
   32
   64
  128
  256
  512
 1024
Now we call `run` to actually run the benchmark. For each procedure under each configuration, the run proceeds in three stages:

- **warming up**: runs the procedure once under the given configuration, which triggers compilation of the function so that compilation time is not counted in the measurement.
- **probing**: runs the procedure again to roughly estimate the time needed for a single run. The total number of runs is then chosen so that the entire measurement takes about 1 second. To change this duration, set the `duration` keyword argument; for example, `duration = 0.5` runs each procedure under each configuration for about 0.5 seconds.
- **measuring**: runs the procedure the number of times decided in the probing stage and records the elapsed time.
In [11]:
rtable = run(procs, cfgs);
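If the roughly one-second measurement budget per cell is not what you want, the `duration` keyword mentioned above can be passed here (the variable name `rtable2` is just illustrative):

```julia
# Spend about 0.5 seconds measuring each (procedure, configuration) pair
rtable2 = run(procs, cfgs; duration=0.5);
```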
The result is stored in an instance of `BenchmarkTable`, which can be shown in different units. For example, you can show how many milliseconds each procedure takes (under the various configurations):
In [12]:
show(rtable; unit=:msec)
If milliseconds are not precise enough, you may try showing the results in microseconds:
In [14]:
show(rtable; unit=:usec)
Sometimes, you may want to view the results in terms of speed (e.g. MPS, millions of numbers per second):
In [15]:
show(rtable; unit=:mps)
Here is a list of supported units:
| unit | description |
|---|---|
| `:sec` | seconds per run |
| `:msec` | milliseconds per run |
| `:usec` | microseconds per run |
| `:nsec` | nanoseconds per run |
| `:ups` | how many items/numbers per second (the number of items per run is determined by `length(proc, cfg)`) |
| `:kps` | how many thousand items/numbers per second |
| `:mps` | how many million items/numbers per second |
| `:gps` | how many billion items/numbers per second |