In [1]:
# delete any existing precompile cache so the first timing below includes cache generation
let cachefile = joinpath(Base.LOAD_CACHE_PATH[1], "DataStructures.ji")
    isfile(cachefile) && rm(cachefile)
end
tic()
using DataStructures
toc();
In [1]:
# restart kernel
tic()
using DataStructures
toc();
To use this in a package, add
isdefined(Base, :__precompile__) && __precompile__()
before the first module declaration in the main package source file. If you don't need to support 0.3, you can drop the isdefined check.
Test carefully first: some things, like saving pointers in global variables or using eval, don't always work when precompiled.
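For example, a minimal sketch of an opted-in package (MyPackage is a placeholder name), at the top of src/MyPackage.jl:
isdefined(Base, :__precompile__) && __precompile__()  # no-op on 0.3, opts in on 0.4

module MyPackage

export greet
greet() = println("Hello from a precompilable module")

end # module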
In [1]:
# (restart kernel again)
# automatically recompiles if package source changes
touch(Pkg.dir("DataStructures","src","DataStructures.jl"))
tic()
using DataStructures
toc();
Code that does a lot of intermediate allocation should be much faster now.
In [2]:
A = map(BigInt, rand(1:10, 50, 50));
@time A^50;
@time A^50;
@time A^50;
Under 0.3:
elapsed time: 2.055229508 seconds (152561404 bytes allocated, 22.03% gc time)
elapsed time: 1.159305999 seconds (117920640 bytes allocated, 31.78% gc time)
elapsed time: 1.305996277 seconds (117920640 bytes allocated, 42.00% gc time)
Arbitrary types can be called like functions, not just their constructors
In [ ]:
UInt(5)
# instead of the now-deprecated
uint(5)
In [3]:
# often used for "functor types" to help with dispatch and inlining, but subject to change again in 0.5
# see Jeff Bezanson's JuliaCon and Aug 2015 BAJU talks
immutable MultiplierType
end
Base.call(::Type{MultiplierType}, a, b) = a * b
map(MultiplierType, 1:5, 6:10)
Out[3]:
@generated (formerly known as staged) functions (https://github.com/JuliaLang/julia/pull/7474). See Jake Bolewski's JuliaCon talk for more details: https://www.youtube.com/watch?v=KAN8zbM659o
Conventional macros take expressions as input and return an expression as output, at parse time.
Generated functions take the types of the inputs, after type inference, and return an expression.
In [4]:
macro examplemacro(foo)
    @show foo
    return :(2 * $foo)
end
@examplemacro (5 * 9 + 10);
In [5]:
ans
Out[5]:
In [6]:
@generated function examplegen(foo)
    @show foo
    if foo == Int
        return :(10)
    else
        return :(15.0)
    end
end
examplegen(5);
In [7]:
ans
Out[7]:
In [8]:
examplegen(1.0);
In [9]:
ans
Out[9]:
In [10]:
# generated functions cache the generated code for each set of input types,
# so later calls with the same types reuse it
examplegen(1.0) # returns cached result, does not @show Float64 again
Out[10]:
In [11]:
"My awesome new function"
yaydocs(a) = 1.0
Out[11]:
In [12]:
?yaydocs
Out[12]:
AbstractArrays (https://github.com/JuliaLang/julia/issues/7941, https://github.com/JuliaLang/julia/pull/8432, https://github.com/JuliaLang/julia/issues/8501, and https://github.com/JuliaLang/julia/pull/10525)
Much of the indexing machinery is now implemented with @generated functions.
eachindex iterator for fast Cartesian indexing.
AbstractArray types just need to implement size and one of the following getindex methods; all other combinations of vector, slice, and other "fancy" indexing are then handled automatically:
getindex(::T, ::Int) # if linearindexing(T) == LinearFast()
getindex(::T, ::Int, ::Int, #=...ndims(A) indices...=#) # if linearindexing(T) == LinearSlow()
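As a hedged sketch (SquaresVector is an illustrative type, not from Base or the talk), implementing size and the scalar getindex is enough to get the rest of the indexing behavior from the generic fallbacks:
immutable SquaresVector <: AbstractVector{Int}
    count::Int
end
Base.size(s::SquaresVector) = (s.count,)
Base.linearindexing(::Type{SquaresVector}) = Base.LinearFast()
Base.getindex(s::SquaresVector, i::Int) = i * i

s = SquaresVector(5)
s[3]      # 9, via our scalar getindex
s[2:4]    # [4, 9, 16], handled by the generic AbstractArray fallbacks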
Nullable{T}: a parametric wrapper around a value of type T, with a boolean flag to indicate missing data.
Allows type-stable operation in the presence of missing data (see John Myles White's talk).
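A brief sketch of the Nullable API in 0.4:
x = Nullable(3)        # wraps a value
y = Nullable{Int}()    # empty: missing data, but still typed as Nullable{Int}
isnull(x)              # false
isnull(y)              # true
get(x)                 # 3
get(y, 0)              # 0, falling back to a default when the value is missing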
Tuples of values (a, b, c) are now completely separate from tuple types, Tuple{Int, Float64, ASCIIString}.
The faster implementation makes value tuples very useful as immutable containers.
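A small illustration of the value/type split (illustrative values; the printed types assume a 64-bit Julia 0.4):
typeof((1, 2.0, "hi"))                          # Tuple{Int64,Float64,ASCIIString}
isa((1, 2.0, "hi"), Tuple{Int, Float64, AbstractString})  # true: tuple types are covariant
Tuple{Int, Float64} <: Tuple                    # true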
Noticeable if you use ccall a lot; the new Ref{T} type handles by-reference passing.
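A hedged sketch of by-reference passing (calling libc's time, so Unix-only; the symbol and signature are assumptions about the local C library):
t = Ref{Clong}(0)                       # allocate a C-compatible box
ccall(:time, Clong, (Ref{Clong},), t)   # C writes through the pointer
t[]                                     # read the stored value back with []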
Pkg now uses an embedded libgit2 C library instead of shelling out to command-line git. It keeps much more information in memory and needs less IO for Pkg operations, so it is much faster.
Not user visible, but improves maintainability and internal structure of core Julia-LLVM interfaces.
Extends Base.Test with some features that are currently found in FactCheck.jl (http://github.com/JuliaLang/FactCheck.jl).
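A hedged sketch of what the grouped-test style looks like, using the @testset macro as it landed in the 0.5-era Base.Test (the exact API was still settling when this was written):
using Base.Test
@testset "arithmetic" begin
    @test 1 + 1 == 2
    @test 2 * 3 == 6
end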
Simple single-node parallelism via @threads macro.
Will need a non-default build flag at first, depends on newer version of LLVM.
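A hedged sketch of what @threads usage looks like, shown with the Base.Threads names as they eventually landed; this requires a threads-enabled build, and the experimental API may differ:
function fill_thread_ids!(a)
    Base.Threads.@threads for i in 1:length(a)
        a[i] = Base.Threads.threadid()   # record which thread handled each index
    end
    return a
end
fill_thread_ids!(zeros(Int, 8))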
The newer LLVM is needed for multithreading, Cxx.jl, the Gallium.jl debugger, etc.
Better runtime performance and vectorization support.
Several patches and more work still needed to maintain debug info, compile time performance, and memory consumption.
Slices as views, transpose type, dropping dimensions indexed with a scalar
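For reference, a hedged sketch of the 0.4 behavior this would change, where range indexing copies and views are requested explicitly with sub:
A = rand(4, 4)
B = A[1:2, 1:4]        # in 0.4 this makes a copy
V = sub(A, 1:2, 1:4)   # a SubArray view into the same memory
V[1, 1] = 0.0          # mutates A as well
A[1, 1] == 0.0         # true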