In [ ]:
import networkx as nx
from networkx import bipartite
import matplotlib.pyplot as plt
import nxviz as nv
from custom.load_data import load_university_social_network, load_amazon_reviews
from matplotlib import animation
from IPython.display import HTML
import numpy as np
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
In this notebook, we will specifically look at the connection between matrix operations and path-finding between nodes.
In [ ]:
nodes = list(range(4))
G1 = nx.Graph()
G1.add_nodes_from(nodes)
G1.add_edges_from(zip(nodes, nodes[1:]))
In [ ]:
nx.draw(G1, with_labels=True)
In [ ]:
nv.MatrixPlot(G1).draw()
In [ ]:
A1 = nx.to_numpy_array(G1, nodelist=sorted(G1.nodes()))
A1
One neat result is that if we take the adjacency matrix, and matrix-matrix multiply it against itself ("matrix power 2"), we will get back a new matrix that has interesting properties.
In [ ]:
import numpy as np
# One way of coding this up
np.linalg.matrix_power(A1, 2)
# Another equivalent way, that takes advantage of Python 3.5's matrix multiply operator
A1 @ A1
Firstly, if we look at the off-diagonal entries of the new matrix, each one corresponds to the number of paths of length 2 that exist between the corresponding pair of nodes. Here, one path of length 2 exists between node 0 and node 2, and one path of length 2 exists between node 1 and node 3.
In [ ]:
np.diag(A1 @ A1)
Secondly, you may notice that the diagonal entries look like the degrees of the nodes. This is a property of the 2nd adjacency matrix power: for every node of degree $ d $, there are exactly $ d $ paths of length 2 that lead back to that same node.
Not convinced? To get from a node and back, that's a path length of 2! :-)
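As a quick sanity check, we can compare the diagonal of the squared adjacency matrix against the node degrees reported by NetworkX:
In [ ]:
# The diagonal of A1 @ A1 should match the node degrees, in sorted node order.
print(np.diag(A1 @ A1))
print([d for _, d in sorted(G1.degree())])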
Let's see if the following statement is true: the $ k^{th} $ matrix power of the graph adjacency matrix indicates how many paths of length $ k $ exist between each pair of nodes. (Strictly speaking, these are walks, since nodes and edges may be revisited along the way.)
In [ ]:
np.linalg.matrix_power(A1, 3)
Indeed, if we think about it, there is no sequence of graph traversals that will take us back to the starting node in exactly 3 steps: this chain graph has no cycles, so a walk can only return to its start by retracing edges, which always takes an even number of steps. That is why the diagonal of the 3rd matrix power is all zeros.
In addition, to get to a neighboring node in 3 steps, there are two ways to go about it: bounce back and forth along the shared edge, or step out to a second neighbor and come back. For the case of this chain graph, the two walks from node 0 to node 1 are 0 → 1 → 0 → 1 and 0 → 1 → 2 → 1.
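To see this concretely, here is one way to brute-force enumerate all walks of length 3 from node 0 to node 1 in G1 (any equivalent enumeration works):
In [ ]:
# Enumerate walks of length 3 from node 0 to node 1; the count should match the (0, 1) entry of A1 cubed.
count = 0
for a in G1.neighbors(0):
    for b in G1.neighbors(a):
        for c in G1.neighbors(b):
            if c == 1:
                print((0, a, b, c))
                count += 1
count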
In [ ]:
nodes
In [ ]:
G2 = nx.DiGraph()
G2.add_nodes_from(nodes)
G2.add_edges_from(zip(nodes, nodes[1:]))
nx.draw(G2, with_labels=True)
Recall that in a directed graph, the matrix representation is not guaranteed to be symmetric.
In [ ]:
A2 = nx.to_numpy_array(G2)
A2
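As a quick confirmation of the asymmetry:
In [ ]:
# Unlike the undirected case, A2 is not equal to its transpose.
np.allclose(A2, A2.T)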
Let's look at the 2nd matrix power: the number of paths of length 2 between any pair of nodes.
In [ ]:
np.linalg.matrix_power(A2, 2)
We see that there's only one path from node 0 to node 2 of length 2, and one path from node 1 to node 3. If you're not convinced of this, trace it for yourself!
In [ ]:
# Enter your code here.
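One possible way to trace it (a sketch; any equivalent enumeration works) is to list every directed walk of length 2 in G2:
In [ ]:
# Enumerate all directed walks of length 2 in G2.
for u in G2.nodes():
    for v in G2.successors(u):
        for w in G2.successors(v):
            print(f"{u} -> {v} -> {w}")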
Now that we've looked at a toy example, let's play around with a real dataset!
This dataset is a residence hall rating dataset. From the source website:
This directed network contains friendship ratings between 217 residents living at a residence hall located on the Australian National University campus. A node represents a person and edges contain ratings of one friend to another.
For the purposes of this exercise, we will treat the edges as if they were unweighted.
In [ ]:
G = load_university_social_network()
In [ ]:
# Enter your code below.
In [ ]:
# Enter your code below.
In [ ]:
# Enter your code below to find shortest path between two nodes.
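One possible solution sketch; the choice of source and target below is arbitrary, and if no directed path happens to exist between this particular pair, pick another:
In [ ]:
# Pick two nodes for illustration; the specific choice is arbitrary.
source, target = list(G.nodes())[0], list(G.nodes())[-1]
nx.shortest_path(G, source=source, target=target)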
In [ ]:
# Find out the number of possible shortest paths between those two nodes.
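One possible approach, reusing the source and target chosen above: count the shortest paths directly, then cross-check with the matrix-power idea from earlier (walks of minimal length are exactly the shortest paths):
In [ ]:
# Count all distinct shortest paths directly.
n_shortest = len(list(nx.all_shortest_paths(G, source=source, target=target)))

# Cross-check: the (source, target) entry of A^k, with k the shortest path length,
# counts walks of length k. Treat edges as unweighted (weight=None).
nodelist = list(G.nodes())
A = nx.to_numpy_array(G, nodelist=nodelist, weight=None)
k = nx.shortest_path_length(G, source=source, target=target)
i, j = nodelist.index(source), nodelist.index(target)
n_shortest, np.linalg.matrix_power(A, k)[i, j]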
Message passing on graphs is a fascinating topic to explore. It's a neat way to think about a wide variety of problems, including the spread of infectious disease agents, rumours, and more. As it turns out, there's a direct matrix interpretation of the message passing operation.
To illustrate this more clearly, let's go back to the directed chain graph, G2.
In [ ]:
nx.draw(G2, with_labels=True)
If we have a message that begins at node 0, and it is only passed to its neighbors, then node 1 is the next one to possess the message. Node 1 then passes it to node 2, and so on, until it reaches node 3.
There are two key ideas to introduce here. Firstly, there is the notion of the "wavefront" of message passing: at the first time step, node 0 is the wavefront, and as time progresses, nodes 1, 2 and 3 progressively become the wavefront.
Secondly, as the message gets passed, the number of nodes that have seen the message progressively increases.
Let's see how this gets implemented in matrix form.
To represent the data, we start with a row vector of messages, of shape (1, 4). Let's use the following conventions:

- 1 indicates that a node currently has the message.
- 0 indicates that a node currently does not have the message.

Since the message starts at node 0, let's put a 1 in that cell of the array, and 0s elsewhere.
In [ ]:
msg = np.array([1, 0, 0, 0]).reshape(1, 4)
msg
In order to simulate one round of message passing, we matrix multiply the message with the adjacency matrix.
In [ ]:
msg2 = msg @ A2
msg2
The interpretation now is that the message is currently at node 1.
To simulate a second round, we take that result and matrix multiply it against the adjacency matrix again.
In [ ]:
msg3 = msg2 @ A2
msg3
The interpretation now is that the message is currently at node 2.
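Running one more round moves the message to node 3; a further round gives all zeros, because node 3 has no outgoing edges in G2:
In [ ]:
msg4 = msg3 @ A2
msg4, msg4 @ A2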
In [ ]:
def propagate(G, msg, n_frames):
    """
    Computes the node values based on propagation.

    Intended to be used before or when being passed into the
    anim() function (defined below).

    :param G: A NetworkX Graph.
    :param msg: The initial state of the message.
    :param n_frames: The number of time steps to simulate.
    :returns: A list of arrays representing the message status at
        each node, one array per time step.
    """
    # Initialize a list to store message states at each timestep.
    msg_states = []
    # Set a variable `new_msg` to be the initial message state.
    new_msg = msg
    # Get the adjacency matrix of the graph G.
    A = nx.to_numpy_array(G)
    # Perform message passing at each time step.
    for i in range(n_frames):
        msg_states.append(new_msg)
        new_msg = new_msg @ A
    # Return the message states.
    return msg_states
The rest of the matplotlib animation functions are shown below.
In [ ]:
def update_func(step, nodes, colors):
    """
    The update function for each animation time step.

    :param step: Passed in from matplotlib's FuncAnimation. Must
        be present in the function signature.
    :param nodes: The node PathCollection returned from
        nx.draw_networkx_nodes().
    :param colors: A list of pre-computed message-state arrays,
        one per frame.
    """
    nodes.set_array(colors[step].ravel())
    return nodes


def anim(G, initial_state, n_frames=4):
    colors = propagate(G, initial_state, n_frames)
    fig = plt.figure()
    pos = nx.kamada_kawai_layout(G)
    nodes = nx.draw_networkx_nodes(G, pos=pos, node_color=colors[0].ravel(), node_size=20)
    edges = nx.draw_networkx_edges(G, pos)
    return animation.FuncAnimation(fig, update_func, frames=range(n_frames), fargs=(nodes, colors))
# Initialize the message
msg = np.zeros(len(G2))
msg[0] = 1
# Animate the graph with message propagation.
# anim() calls propagate() internally, so we pass in the initial message state.
HTML(anim(G2, msg).to_html5_video())
In [ ]:
# Start the message at the first node in the adjacency-matrix ordering (an arbitrary choice).
msg = np.zeros(len(G))
msg[0] = 1
HTML(anim(G, msg, n_frames=5).to_html5_video())
The section on message passing above assumed unipartite graphs, or at least graphs for which messages can be meaningfully passed between nodes.
In this section, we will look at bipartite graphs.
Recall from before the definition of a bipartite graph: a graph whose nodes can be divided into two disjoint partitions, such that every edge connects a node in one partition to a node in the other, and no edge connects two nodes within the same partition.
Bipartite graphs have a natural matrix representation, known as the biadjacency matrix. Nodes on one partition are the rows, and nodes on the other partition are the columns.
NetworkX's bipartite module provides a function for computing the biadjacency matrix of a bipartite graph.
Let's start by looking at a toy bipartite graph, a "customer-product" purchase record graph, with 4 products and 3 customers. The matrix representation might be as follows:
In [ ]:
import numpy as np
# Rows = customers, columns = products, 1 = customer purchased product, 0 = customer did not purchase product.
cp_mat = np.array([[0, 1, 0, 0],
[1, 0, 1, 0],
[1, 1, 1, 1]])
From this "bi-adjacency" matrix, one can compute the projection onto the customers, matrix multiplying the matrix with its transpose.
In [ ]:
c_mat = cp_mat @ cp_mat.T # c_mat means "customer matrix"
c_mat
Pause here and read carefully!
What we get is the connectivity matrix of the customers, based on shared purchases. The diagonal entries are the degrees of the customers in the original graph, i.e. the number of purchases they made, and the off-diagonal entries count the number of products that each pair of customers purchased in common.
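As a quick check, the diagonal of c_mat should match the row sums of cp_mat, i.e. each customer's total number of purchases:
In [ ]:
# Diagonal of the customer projection vs. each customer's purchase count.
np.diag(c_mat), cp_mat.sum(axis=1)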
To get the products matrix, we make the transposed matrix the left side of the matrix multiplication.
In [ ]:
p_mat = cp_mat.T @ cp_mat # p_mat means "product matrix"
p_mat
You may now try to convince yourself that the diagonal entries are the number of customers who purchased each product, and the off-diagonal entries form the connectivity matrix of the products, weighted by the number of customers who purchased both products.
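Similarly, the diagonal of p_mat should match the column sums of cp_mat, i.e. the number of customers who purchased each product:
In [ ]:
# Diagonal of the product projection vs. the number of purchasers of each product.
np.diag(p_mat), cp_mat.sum(axis=0)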
In the following exercises, you will now play with a customer-product graph from Amazon. This dataset was downloaded from Julian McAuley's website at UCSD, and corresponds to the digital music dataset.
This is a bipartite graph. The two partitions are:
- customers: the customers that were doing the reviews.
- products: the music that was being reviewed.

In the original dataset (see the original JSON in the datasets/ directory), they are referred to as:

- customers: reviewerID
- products: asin
In [ ]:
G_amzn = load_amazon_reviews()
NetworkX provides the nx.bipartite.matrix.biadjacency_matrix() function, which lets you get the biadjacency matrix of a graph object. This returns a scipy.sparse matrix. Sparse matrices are commonly used to represent graphs, especially large ones, as they take up much less memory.

Read the docs on how to use the biadjacency_matrix() function.
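To see the function in action before tackling the Amazon graph, here is a small round trip on the toy customer-product matrix (a sketch; both from_biadjacency_matrix() and biadjacency_matrix() live in NetworkX's bipartite module):
In [ ]:
import scipy.sparse as sp

# Build a bipartite graph from the toy matrix; row nodes get bipartite=0, column nodes bipartite=1.
G_toy = nx.bipartite.from_biadjacency_matrix(sp.csr_matrix(cp_mat))
toy_customers = [n for n, d in G_toy.nodes(data=True) if d['bipartite'] == 0]

# Recover the biadjacency matrix; it comes back as a scipy.sparse matrix.
nx.bipartite.biadjacency_matrix(G_toy, row_order=toy_customers).toarray()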
You would probably want to first define a function that gets all the nodes from a given partition.
In [ ]:
def get_partition_nodes(G, partition):
    """
    A function that returns nodes from one partition.

    Assumes that the attribute key that stores the partition information
    is 'bipartite'.
    """
    return [n for n, d in G.nodes(data=True) if d['bipartite'] == partition]
In [ ]:
# Assumption: load_amazon_reviews() stores the partition under the 'bipartite'
# node attribute, with the customer partition labelled 'customer'; check
# custom/load_data.py if your labels differ.
customer_nodes = get_partition_nodes(G_amzn, 'customer')
mat = nx.bipartite.matrix.biadjacency_matrix(G_amzn, row_order=customer_nodes)
In [ ]:
customer_mat = mat @ mat.T
Next, get the diagonal of the customer-customer matrix. Recall here that in customer_mat, the diagonal entries correspond to the degrees of the customer nodes in the original bipartite graph.

Hint: SciPy sparse matrices provide a .diagonal() method that returns the diagonal elements.
In [ ]:
# Get the diagonal.
degrees = customer_mat.diagonal()
Finally, find the index of the customer that has the highest degree.
In [ ]:
cust_idx = np.argmax(degrees)
cust_idx
In [ ]:
# Compute the degree of each customer according to the order
# in customer_nodes
cust_degrees = np.array([G_amzn.degree(n) for n in customer_nodes])
np.argmax(cust_degrees)
Let's now also compute which two customers are most similar, based on shared reviews. To do so involves the following steps:

1. Construct a sparse diagonal matrix from the diagonal (degree) elements of customer_mat.
2. Subtract it from customer_mat, so that only the off-diagonal (shared-review) entries remain.
3. Find the row and column indices of the maximum off-diagonal entry; these identify the two most similar customers.

Hint: scipy.sparse.diags(elements) will construct a sparse diagonal matrix based on the elements inside elements.
In [ ]:
import scipy.sparse as sp
In [ ]:
# Construct a sparse diagonal matrix from the diagonal (degree) elements.
customer_diags = sp.diags(degrees)
# Subtract it, keeping only the off-diagonal (shared-review) entries.
off_diagonals = customer_mat - customer_diags
# Compute index of most similar individuals.
np.unravel_index(np.argmax(off_diagonals), customer_mat.shape)
In [ ]:
from time import time
start = time()
G_cust = nx.bipartite.weighted_projected_graph(G_amzn, customer_nodes)
most_similar_customers = sorted(G_cust.edges(data=True), key=lambda x: x[2]['weight'], reverse=True)[0]
end = time()
print(f'{end - start:.3f} seconds')
print(f'Most similar customers: {most_similar_customers}')
In [ ]:
start = time()
mat = nx.bipartite.matrix.biadjacency_matrix(G_amzn, customer_nodes)
customer_mat = mat @ mat.T
degrees = customer_mat.diagonal()
customer_diags = sp.diags(degrees)
off_diagonals = customer_mat - customer_diags
c1, c2 = np.unravel_index(np.argmax(off_diagonals), customer_mat.shape)
end = time()
print(f'{end - start:.3f} seconds')
print(f'Most similar customers: {customer_nodes[c1]}, {customer_nodes[c2]}, {customer_mat[c1, c2]}')
You may notice that the "objects" code is much easier to read, but the matrix code far outperforms it. This then becomes a great reason to use matrices (or even better, sparse matrices)!
In [ ]: