This is a demo of, and instructions for using, Laplacians, a graph module for Julia.
In [1]:
using Laplacians
In [2]:
using PyPlot
In [3]:
VERSION
Out[3]:
In [4]:
gr = grid2(4)
Out[4]:
In [5]:
gr = grid2(4,3)
(x,y) = grid2coords(4,3)
p = plotGraph(gr,x,y;dots=false)
Out[5]:
In [6]:
gr = randRegular(15,3)
spy(gr)
Out[6]:
In [7]:
gr = grownGraph(200,2)
spy(gr)
Out[7]:
In [8]:
gr = full(hyperCube(3))
Out[8]:
In [9]:
a = completeBinaryTree(15)
spy(a)
Out[9]:
In [10]:
a0 = completeBinaryTree(3)
a1 = completeBinaryTree(5)
a = productGraph(a0,a1)
[eig(full(a))[1]';
sort(kron(eig(full(a0))[1],ones(5)) + kron(ones(3),eig(full(a1))[1]))']
Out[10]:
In [11]:
spectralDrawing(a)
Out[11]:
The chimeric graphs combine elementary graphs in strange ways. If you really want to test your code, try it on a couple thousand of the chimeric graphs. We also produce graphs with weights. Here are two.
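A stress-testing loop over many chimeric graphs might look like the following sketch. Here `checkMyCode` is a hypothetical placeholder for whatever routine you are testing; only `chimera` comes from Laplacians.

```julia
using Laplacians

# Sketch of a stress test: chimera(n, i) returns the i-th chimeric
# graph on n vertices, so varying i sweeps over many strange graphs.
for i in 1:2000
    a = chimera(100, i)
    # checkMyCode(a)   # hypothetical: run your routine on graph `a` here
end
```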
In [12]:
a = chimera(100,1)
spectralDrawing(a)
Out[12]:
In [13]:
a = wtedChimera(100)
Out[13]:
The following code constructs a 30-by-30 grid graph, samples each edge with probability 1/2, and then computes the connected components.
In [14]:
gr = grid2(30);
grs = subsampleEdges(gr,.5);
In [15]:
co = components(grs)
Out[15]:
In [16]:
comps = vecToComps(co)
Out[16]:
In [17]:
(x,y) = grid2coords(30,30)
for i = 1:length(comps)
    ind = comps[i]
    plotGraph(grs[ind,ind],x[ind],y[ind],rand(3);dots=false,setaxis=false)
end
pm = plot(x,y,marker="o",linestyle="none",color="black",markersize=2)
Out[17]:
We can read and write graphs as ijv lists. These routines just want the adjacency matrix, and only store the upper-triangular portion.
In [18]:
a = wtedChimera(15,2)
writeIJV("aGraph.ijv",a)
run(`cat aGraph.ijv`)
We can, of course, read this back in.
In [19]:
a2 = readIJV("aGraph.ijv")
a2-a
Out[19]:
You can use routines like this to communicate with Matlab. First, you need to tell Matlab where to find Julia. Hopefully it is on your search path; if not, we will try to find it. You also need to tell Matlab where to find some m-files.
>> cd('~/.julia/v0.4/Laplacians/matlab')
>> init
>> a = readIJV('aGraph.ijv')

a =

   (8,1)    1.0658
   (11,1)   1.0978
   (15,1)   0.6688
   (3,2)    4.0888
   (5,2)    4.6810
   (2,3)    4.0888
   (14,3)   4.3929
   (6,4)    0.6851
   (9,4)    1.6401
   (15,4)   0.4751
   (2,5)    4.6810
   (9,5)    3.9026
   (14,5)   4.9851
   (4,6)    0.6851
   (7,6)    2.0197
   (14,6)   3.4076
   (6,7)    2.0197
   (10,7)   1.5522
   (13,7)   1.7009
   (1,8)    1.0658
   (10,8)   1.0896
   (12,8)   0.8409
   (4,9)    1.6401
   (5,9)    3.9026
   (12,9)   1.6089
   (7,10)   1.5522
   (8,10)   1.0896
   (1,11)   1.0978
   (13,11)  1.2704
   (15,11)  0.4290
   (8,12)   0.8409
   (9,12)   1.6089
   (13,12)  0.3975
   (7,13)   1.7009
   (11,13)  1.2704
   (12,13)  0.3975
   (3,14)   4.3929
   (5,14)   4.9851
   (6,14)   3.4076
   (1,15)   0.6688
   (4,15)   0.4751
   (11,15)  0.4290
You can also save graphs from Matlab, and read them back into Julia.
>> a2 = 2*a;
>> writeIJV('a2Graph.ijv',a2);
In [20]:
a2 = readIJV("a2Graph.ijv")
a2 - 2*a
Out[20]:
I will now compare the running time of the shortest paths code that I wrote in Julia against matlab_bgl and my Java code. It could be sped up some more, but it proves the point that we can write fast code in Julia. You may notice that we should speed up chimera.
In [21]:
a = wtedChimera(1000000,1);
In [22]:
@time dists, pArray = shortestPaths(a,1)
Out[22]:
Here are the results in Matlab, using matlab_bgl.
>> a = wtedChimera(1000000,1);
>> tic; [d,pa] = shortest_paths(a,1); toc
Elapsed time is 2.414358 seconds.
This is a very clear win for Julia! But, it turns out that it wasn't an identical comparison: the shortest paths code in Julia treats edge weights as reciprocals of distances, whereas the Matlab code treats weights as distances. So, to make a fair comparison, we should take the reciprocals in one of them. Let's run 10 times, taking reciprocals in Julia.
In [23]:
time = 0
for i in 1:10
    a = wtedChimera(1000000,i);
    a.nzval = 1./a.nzval;
    tic()
    dists, pArray = shortestPaths(a,1)
    time = time + toc()
end
println("Total time = ", time)
So that you can check what I'm saying about distances being reciprocals of weights, here are the distances to the first 5 nodes in the last graph. I'll compute the same ones in Matlab.
In [24]:
dists[1:5]
Out[24]:
>> time = 0;
>> for i = 1:10
     a = wtedChimera(1000000,i);
     tic; [d,pa] = shortest_paths(a,1); this = toc;
     fprintf([num2str(this), ' seconds\n']);
     time = time + this;
   end
2.3238 seconds
3.3475 seconds
1.6888 seconds
2.9162 seconds
4.4944 seconds
4.7069 seconds
2.6662 seconds
1.7989 seconds
2.7447 seconds
2.9934 seconds
>> time

time =

   29.6808

>> d(1:5)

ans =

         0
   53.3683
   75.2115
   53.5149
   49.1598
We will also compare the running time of the MST (minimum spanning tree) code. We will see that the Julia code is more than twice as fast as matlab_bgl.
In [25]:
a = wtedChimera(1000000,11)
@time tree = kruskal(a);
>> a = wtedChimera(1000000,11);
>> tic; t = kruskal_mst(a); toc
Elapsed time is 7.008127 seconds.
By default, kruskal computes the minimum spanning tree. To get the maximum spanning tree, do this.
In [27]:
@show sum(triu(tree).nzval)
maxTree = kruskal(a,kind=:max)
@show sum(triu(maxTree).nzval)
Out[27]:
In [28]:
la = lap(grid2(3))
Out[28]:
In [29]:
E = eigs(la, nev = 3, which=:SR)
V = E[2]
Out[29]:
You would think that you should use
E = eigs(la, nev = 3, which=:SM)
But, that gives horrible results.
In [30]:
plotGraph(la,V[:,2],V[:,3])
Out[30]:
In [31]:
a = hyperCube(3)
la = lap(a)
E = eigs(la, nev = 3, which=:SR)
V = E[2]
plotGraph(a,V[:,2],V[:,3])
Out[31]:
In [32]:
a = grid2(5)
typeof(a)
Out[32]:
In [33]:
fieldnames(SparseMatrixCSC)
Out[33]:
In [ ]: