An IJulia notebook

This notebook is meant to run with a Julia kernel, not a Python kernel. Follow the instructions in the Faster_than_python.ipynb notebook on how to install Julia and set up Jupyter to work with it.
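
A minimal sketch of the setup (the linked notebook has the full details): from within Julia, adding the IJulia package registers a Julia kernel with Jupyter.


In [ ]:
Pkg.add("IJulia")   # registers a Julia kernel with Jupyter; on Julia >= 0.7 run `using Pkg` first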

Basic usage is very similar to Python.


In [22]:
3+4*6


Out[22]:
27

In [23]:
b = 56*32
b


Out[23]:
1792

We can import Python modules, and in particular numpy:


In [4]:
using PyCall
@pyimport numpy as np
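
As a quick sanity check (a sketch, not part of the original run), the imported name now gives access to numpy's functions:


In [ ]:
np.sum([1, 2, 3])   # numpy's sum called from Julia; should return 6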

In [39]:
function sum2d(arr)
    M = size(arr, 1)
    N = size(arr, 2)
    result = 0.0
    for i=1:M # outer loop over rows
        for j=1:N # inner loop over columns
            result += arr[i,j]
        end
    end
    return result
end
# We also try a version where the dimensions are traversed in the reverse order.
# We will test which of these two versions is more efficient.
function sum2d_inv(arr)
    M = size(arr, 1)
    N = size(arr, 2)
    result = 0.0
    for i=1:N # outer loop now runs over columns
        for j=1:M # inner loop runs over rows
            result += arr[j,i] # indices swapped to match the new loop order
        end
    end
    return result
end


Out[39]:
sum2d_inv (generic function with 1 method)

In [29]:
a = collect(1:9999*999)   # vector of the integers 1 to 9999*999
a = reshape(a, 9999, 999) # reshaped into a 9999x999 matrix


Out[29]:
9999x999 Array{Int64,2}:
    1  10000  19999  29998  39997  …  9949006  9959005  9969004  9979003
    2  10001  20000  29999  39998     9949007  9959006  9969005  9979004
    3  10002  20001  30000  39999     9949008  9959007  9969006  9979005
    4  10003  20002  30001  40000     9949009  9959008  9969007  9979006
    5  10004  20003  30002  40001     9949010  9959009  9969008  9979007
    6  10005  20004  30003  40002  …  9949011  9959010  9969009  9979008
    7  10006  20005  30004  40003     9949012  9959011  9969010  9979009
    8  10007  20006  30005  40004     9949013  9959012  9969011  9979010
    9  10008  20007  30006  40005     9949014  9959013  9969012  9979011
   10  10009  20008  30007  40006     9949015  9959014  9969013  9979012
   11  10010  20009  30008  40007  …  9949016  9959015  9969014  9979013
   12  10011  20010  30009  40008     9949017  9959016  9969015  9979014
   13  10012  20011  30010  40009     9949018  9959017  9969016  9979015
    ⋮                              ⋱        ⋮                           
 9988  19987  29986  39985  49984     9958993  9968992  9978991  9988990
 9989  19988  29987  39986  49985     9958994  9968993  9978992  9988991
 9990  19989  29988  39987  49986     9958995  9968994  9978993  9988992
 9991  19990  29989  39988  49987  …  9958996  9968995  9978994  9988993
 9992  19991  29990  39989  49988     9958997  9968996  9978995  9988994
 9993  19992  29991  39990  49989     9958998  9968997  9978996  9988995
 9994  19993  29992  39991  49990     9958999  9968998  9978997  9988996
 9995  19994  29993  39992  49991     9959000  9968999  9978998  9988997
 9996  19995  29994  39993  49992  …  9959001  9969000  9978999  9988998
 9997  19996  29995  39994  49993     9959002  9969001  9979000  9988999
 9998  19997  29996  39995  49994     9959003  9969002  9979001  9989000
 9999  19998  29997  39996  49995     9959004  9969003  9979002  9989001

Now we can time the functions we have written. Note that the first time a function is executed it gets compiled, so the call takes much longer. From then on the compiled version is used and execution is much faster.


In [37]:
@time sum2d(a)


elapsed time: 0.31893546 seconds (162120 bytes allocated)
Out[37]:
4.9890075483501e13

In [38]:
@time sum2d(a)


elapsed time: 0.072591354 seconds (96 bytes allocated)
Out[38]:
4.9890075483501e13

In [40]:
@time sum2d_inv(a)


elapsed time: 0.302777959 seconds (162296 bytes allocated)
Out[40]:
4.9890075483501e13

In [41]:
@time sum2d_inv(a)


elapsed time: 0.022418074 seconds (96 bytes allocated)
Out[41]:
4.9890075483501e13

As you can see, the second version is much faster: compare 0.072 s for sum2d with 0.022 s for sum2d_inv. Julia stores arrays in column-major order, so sum2d_inv, whose inner loop runs down a column, accesses consecutive memory locations. For large matrices it is therefore important to know how they are stored in memory and to traverse them in that same order: when the elements accessed are contiguous in memory, the processor and its caches can work much more efficiently.
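
To make the layout concrete, here is a small sketch (not part of the original run) showing that Julia stores matrices column by column:


In [ ]:
# vec returns the elements of a matrix in storage order
m = [1 2 3;
     4 5 6]
vec(m)   # gives [1, 4, 2, 5, 3, 6]: each column is contiguous in memory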

