Metaprogramming is writing code that writes code.
Inspired by several other languages, notably Scheme, Julia provides built-in facilities for metaprogramming:
Julia provides access to its parser and abstract syntax tree: You can get a symbolic representation of any Julia code.
You can manipulate these symbolic representations to transform and generate Julia code at runtime, and evaluate it to run the resulting code.
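For example (a minimal sketch, using the current Meta.parse spelling; the string and the variables x and y are just for illustration):
str = "x - 2y"
ex = Meta.parse(str)   # returns an Expr containing the parsed syntax tree
x, y = 10, 3
eval(ex)               # evaluates the tree in the current module: 10 - 2*3 == 4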
Julia provides symbolic macros: these are essentially functions evaluated at parse time which take the syntax tree of the code, perform arbitrary transformations, and insert new code to be later compiled.
Julia macros, inspired by Scheme's hygienic macros, effectively allow you to extend the syntax of Julia with arbitrary parse-time code generation.
The following, predictably, does not work:
In [1]:
ex = x - 2y   # throws UndefVarError, since x and y are not defined
But using :(.....) or quote .... end produces a symbolic expression of type Expr, which contains the parsed syntax tree of a Julia expression.
In [2]:
ex = :(x - 2y)
Out[2]:
:(x - 2y)
The dump function uses introspection to print the contents of any data structure:
In [3]:
dump(ex)
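The head and args fields of an Expr can also be read and mutated directly; a small sketch, reusing the ex from above:
ex.head            # :call
ex.args            # Any[:-, :x, :(2y)]
ex.args[1] = :+    # rewrite the subtraction into an addition, in place
ex                 # now :(x + 2y)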
Macros are essentially functions evaluated at parse time, which take a symbolic expression as input and produce another expression as output that is inserted into the code before compilation:
A simple macro example: reverse the order of function arguments
In [4]:
macro reverse(ex)
    if isa(ex, Expr) && ex.head == :call
        # return a new call expression, with the arguments in reverse order
        return Expr(:call, ex.args[1], reverse(ex.args[2:end])...)
    else
        return ex   # leave anything that is not a function call unchanged
    end
end
Out[4]:
In [5]:
# equivalent to 4 - 1
@reverse 1 - 4
Out[5]:
3
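To see what a macro expands to without running it, you can ask for its expansion (in Julia 1.x, macroexpand takes the module as its first argument):
macroexpand(Main, :(@reverse 1 - 4))   # returns :(4 - 1)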
A more useful example: a macro that inlines Horner's rule for evaluating a polynomial, $c_1 + c_2 t + c_3 t^2 + \cdots = c_1 + t(c_2 + t(c_3 + \cdots))$:
In [6]:
macro horner(x, c...)
    ex = esc(c[end])
    for i = length(c)-1:-1:1
        ex = :($(esc(c[i])) + t * $ex)   # build c[i] + t*(...) from the inside out
    end
    # assign the macro argument x to t, so that it is evaluated only once
    return Expr(:block, :(t = $(esc(x))), ex)
end
Out[6]:
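@horner(z, c1, ..., cn) expands (up to hygienic renaming of t) into t = z; c1 + t*(c2 + ... + t*cn). A quick sanity check with made-up coefficients:
p(z) = @horner(z, 1, 2, 3)   # evaluates 1 + 2z + 3z^2 by Horner's rule
p(2)                         # 1 + 2*(2 + 2*3) == 17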
Fast inline polynomial evaluation is very useful for special functions. For example, we can evaluate the inverse error function $\mathrm{erf}^{-1}(x)$ via rational approximants (ratios of polynomials) [Blair (1976)]:
In [7]:
function my_erfinv(x::Float32) # specialized for single-precision args
    a = abs(x)
    if a >= 1.0f0
        if x == 1.0f0
            return Inf32        # Inf32 replaces the older inf(Float32)
        elseif x == -1.0f0
            return -Inf32
        end
        throw(DomainError(x, "erfinv is defined only for -1 ≤ x ≤ 1"))
    elseif a <= 0.75f0 # Table 10 in Blair et al.
        t = x*x - 0.5625f0
        return x * @horner(t, -0.13095_99674_22f2,
                               0.26785_22576_0f2,
                              -0.92890_57365f1) /
                   @horner(t, -0.12074_94262_97f2,
                               0.30960_61452_9f2,
                              -0.17149_97799_1f2,
                               0.1f1)
    elseif a <= 0.9375f0 # Table 29 in Blair et al.
        t = x*x - 0.87890625f0
        return x * @horner(t, -0.12402_56522_1f0,
                               0.10688_05957_4f1,
                              -0.19594_55607_8f1,
                               0.42305_81357f0) /
                   @horner(t, -0.88276_97997f-1,
                               0.89007_43359f0,
                              -0.21757_03119_6f1,
                               0.1f1)
    else # Table 50 in Blair et al.
        t = 1.0f0 / sqrt(-log(1.0f0 - a))
        return @horner(t, 0.15504_70003_116f0,
                          0.13827_19649_631f1,
                          0.69096_93488_87f0,
                         -0.11280_81391_617f1,
                          0.68054_42468_25f0,
                         -0.16444_15679_1f0) /
               (copysign(t, x) *
                @horner(t, 0.15502_48498_22f0,
                           0.13852_28141_995f1,
                           0.1f1))
    end
end
# for arrays, apply elementwise via broadcasting, e.g. my_erfinv.(v)
# (this replaces the old @vectorize_1arg Real my_erfinv)
Out[7]:
This is precisely how erfinv is implemented in Julia (in single and double precision), and it is 3–4× faster than the compiled (Fortran?) code in Matlab and 2–3× faster than the compiled (Fortran Cephes) code used in SciPy. The difference (at least in Cephes) seems to be mainly that they have explicit arrays of polynomial coefficients and call a subroutine for Horner's rule, versus inlining it via a macro.
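For contrast, here is a hypothetical runtime version in that coefficient-array style (a sketch of the structure, not the actual Cephes code):
function horner_loop(t, coefs::AbstractVector)
    r = coefs[end]
    for i = length(coefs)-1:-1:1
        r = coefs[i] + t * r   # same recurrence as @horner, but with array loads and a loop at runtime
    end
    return r
end
With @horner, the coefficients instead become compile-time constants in straight-line, branch-free code.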
In [8]:
using PyCall, SpecialFunctions, LinearAlgebra
@pyimport scipy.special as s
x = rand(10^7);
@time erfinv.(x);      # Julia's erfinv (now in SpecialFunctions), applied elementwise
@time s.erfinv(x);     # SciPy's erfinv via PyCall
norm(erfinv.(x) - s.erfinv(x)) / norm(erfinv.(x))   # relative difference