In [22]:
3+4*6
Out[22]:
27
In [23]:
b = 56*32
b
Out[23]:
1792
We can import Python modules, and in particular NumPy:
In [4]:
using PyCall
@pyimport numpy as np
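As a quick check that the import worked (a minimal sketch; the calls below are illustrative and not from the original notebook), we can invoke NumPy functions directly from Julia:
np.arange(10)            # builds a NumPy array; PyCall converts it to a Julia array
np.sum([1.0, 2.0, 3.0])  # returns 6.0: a Julia vector passed to NumPy's sum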
In [39]:
# Sum all the entries of a 2-D array, with the outer loop over rows.
function sum2d(arr)
    M = size(arr, 1)
    N = size(arr, 2)
    result = 0.0
    for i = 1:M
        for j = 1:N
            result += arr[i, j]
        end
    end
    return result
end
# We also try a version where the dimensions are traversed in the reverse order.
# We will test which of these two versions is more efficient.
function sum2d_inv(arr)
    M = size(arr, 1)
    N = size(arr, 2)
    result = 0.0
    for i = 1:N                  # a change here
        for j = 1:M              # a change here
            result += arr[j, i]  # a change here
        end
    end
    return result
end
Out[39]:
sum2d_inv (generic function with 1 method)
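As a quick sanity check (a sketch using a small random matrix, not part of the original notebook), both versions should agree with Julia's built-in sum:
M = rand(3, 4)
sum2d(M) ≈ sum(M)      # true: the row-wise traversal gives the correct total
sum2d_inv(M) ≈ sum(M)  # true: so does the column-wise traversal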
In [29]:
a = collect(1:9999*999)    # the integers 1, 2, ..., 9989001
a = reshape(a, 9999, 999)  # reshaped into a 9999x999 matrix
Out[29]:
Now we can time the functions we have written. Note that the first time a function is executed it gets compiled, so it takes much longer. From then on the compiled version is used and execution is much faster.
In [37]:
@time sum2d(a)
Out[37]:
In [38]:
@time sum2d(a)
Out[38]:
In [40]:
@time sum2d_inv(a)
Out[40]:
In [41]:
@time sum2d_inv(a)
Out[41]:
As you can see, this version is much faster: compare 0.072 s with 0.022 s. Julia stores matrices in column-major order, so sum2d_inv, whose inner loop runs down each column, accesses elements that are contiguous in memory. For large matrices, remember that it is important to know how they are stored in memory so that they are accessed in that same order: when the elements are contiguous in memory, the processor can access them much more efficiently.
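A minimal sketch of what column-major storage means (the small matrix here is illustrative): vec flattens a matrix in storage order, so the element just below A[1,1] is the next one in memory.
A = reshape(collect(1:6), 2, 3)  # the 2x3 matrix [1 3 5; 2 4 6]
vec(A)                 # [1, 2, 3, 4, 5, 6]: columns laid out one after another
A[2, 1] == vec(A)[2]   # true: A[2,1] immediately follows A[1,1] in memory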