Solving some social data problems with graphs

Tools:

The igraph-python library:


In [ ]:
# Python wrapper for C/C++ graph libraries
# docs at: http://igraph.org/python/doc/python-igraph.pdf
# also exists for C++ and R
import igraph
# Plotting library for igraph graphs
# Note: you'll need cairo (not a python package) to install this, and the install pkg is "py2cairo"
import cairo

Plotting:


In [ ]:
import matplotlib.pyplot as plt
%matplotlib inline

Another open-source option for working with graphs in Python is NetworkX. I find NetworkX to be slightly easier to use, and igraph-python to be much, much faster for computationally intensive algorithms. igraph also has more built-in algorithms for handling graphs. For those reasons, we're mostly going to use igraph.

The CollectorUtils/network library:


In [ ]:
# Build a conversation from activities: node = activity (comment, tweet, post), edge = reply
#import conversation_builder_igraph
# Build a user-connectivity graph from activities: node = user, edge = reply to another user
#import build_user_graph
# Load pre-generated, pickled user graphs
import pickle

In [ ]:
# A helper function we use later
def histogram(values):
    hist = {}
    for v in values:
        if v in hist:
            hist[v] += 1
        else:
            hist[v] = 1
    return hist
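As an aside, the standard library's collections.Counter builds the same value-to-frequency mapping, if you'd rather not roll your own:

```python
from collections import Counter

# Counter builds the same value -> frequency mapping as histogram() above
hist = Counter([1, 2, 2, 3, 3, 3])
print(dict(hist))  # {1: 1, 2: 2, 3: 3}
```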

These are the two python packages that I wrote to turn social data into igraph objects. "conversation_builder_igraph" is simply Jeff's "conversation_builder" for igraph (rather than NetworkX), and it produces exactly the same graph that "conversation_builder" produces for any publisher. "build_user_graph" produces a graph with 1 node per user and 1 directed edge per activity (as long as the activity is an interaction between users). It supports multiple edges and self-loops. I'm still adding publishers to "build_user_graph," but right now it works for Disqus data.
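To make the node-per-user, edge-per-activity idea concrete, here's a toy sketch of the transformation build_user_graph performs. The field names ("actor", "in_reply_to") are made-up placeholders, not the real payload schema:

```python
# Made-up activity records ("actor"/"in_reply_to" are placeholder field names)
activities = [
    {"actor": "alice", "in_reply_to": "bob"},
    {"actor": "bob",   "in_reply_to": "alice"},
    {"actor": "alice", "in_reply_to": "bob"},   # multi-edges are preserved
    {"actor": "carol", "in_reply_to": None},    # not an interaction: no edge
]

# 1 node per user seen anywhere, 1 directed edge per interaction
users = sorted({a["actor"] for a in activities} |
               {a["in_reply_to"] for a in activities if a["in_reply_to"]})
edges = [(a["actor"], a["in_reply_to"]) for a in activities if a["in_reply_to"]]
print(users)  # ['alice', 'bob', 'carol']
print(edges)  # [('alice', 'bob'), ('bob', 'alice'), ('alice', 'bob')]
```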

More information on how to use these packages for your own data is on datarama in Disqus Tools Overview here:
https://sites.google.com/a/twitter.com/gnip-data-rama/2014-q2---data-blog/disqus-tools-overview

Using igraph:

Important manipulation tools:

Building a graph:


In [ ]:
# Say we are creating a graph from scratch, G
G = igraph.Graph(directed = True)
G.summary()

If you know NetworkX, you know that a NetworkX graph is a dict-of-dicts-of-dicts. In other words, the order of node insertion does not matter, nodes have hashable names (but not positions), and nodes (dictionary keys) form a set. In igraph, nodes are a list of Vertex objects; while they can be assigned names which you can refer to them by, they are only guaranteed to be uniquely identified by their index in that list. Vertices are appended to the list (that makes up the graph) in order, and if you insert a vertex with the same name/attributes twice, you will get two vertices.
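Because of that index-based identity, a common bookkeeping pattern (my own sketch, not part of igraph) is to keep a name-to-index map so the same name is never inserted twice:

```python
# A sketch of the bookkeeping you need around igraph's index-based vertices;
# get_index hands out a new index only the first time a name appears
name_to_index = {}

def get_index(name):
    if name not in name_to_index:
        name_to_index[name] = len(name_to_index)  # next free slot
        # real code would also call G.add_vertex(name) here
    return name_to_index[name]

print(get_index("Brian"))  # 0
print(get_index("Jeff"))   # 1
print(get_index("Brian"))  # 0 -- looked up, not inserted again
```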


In [ ]:
# And we want to add some vertices:
# Add n vertices to the graph
G.add_vertices(n = 1) # G now has 1 node
# or simply call "add_vertex" to add 1 vertex
G.add_vertex() # G now has 2 nodes
# or add a list of vertices by name 
G.add_vertices(["Brian", "Jeff"]) # G now has 4 nodes
# or add one vertex by name
G.add_vertex("Fiona") # G now has 5 nodes
G.summary()

Vertex objects are stored in VertexSeq objects, which are at graph.vs

Edge objects are stored in EdgeSeq objects, which are at graph.es


In [ ]:
# G.vs = the VertexSeq object of the graph G
# We can modify the attributes of the vertices like this:
G.vs.select(0,1)['name'] = ["Dr. Skippy", "Josh"]
# Or add whole new sets of attributes to the vertices:
G.vs['favorite_dinosaur'] = ["Velociraptor","Argentinosaurus","Buddy","Triceratops","Pterodactyl"]

In [ ]:
# And we can access those lists of attributes this way:
G.vs['name']

In [ ]:
G.vs['favorite_dinosaur']

Now let's add some edges to our graph. Edges can be added by referencing a node by index or by the 'name' attribute, which is treated specially. Note: you can ONLY add edges between existing nodes. If you try to add an edge connecting to a non-existent node, igraph raises a ValueError. Not great.


In [ ]:
# Edges can be added one at a time:
# (and ONLY while adding edges one at a time can you specify attributes in the same call)
G.add_edge(source = 'Dr. Skippy', target = 'Josh', dinosaur_interactions = 1)
# A list at a time (note it's add_edges() now):
G.add_edges([('Brian','Fiona'),('Fiona','Brian'),('Dr. Skippy', 'Jeff')])
# Referencing names OR vertex indices
G.add_edges([(3,0),(3,2),(0,4),(3,4),(3,1)])
# Just as notably, multiple edges are allowed:
G.add_edges([(1,3),(1,3),(1,3)])

In [ ]:
G.summary()

Edge objects are stored in EdgeSeq objects, which are at graph.es


In [ ]:
# G.es = the EdgeSeq object of the graph G
# We can add edge attributes the exact same way we added vertex attributes
G.es['dinosaur_interactions'] = [1,2,3,4,5,6,7,8,9,10,11,12]

In [ ]:
# The vertex attribute 'label' is treated specially by plot--it appears as a label
G.vs['label'] = G.vs['name']
# The optional "layout" argument specifies how to place the nodes
# The Graph method .layout() calculates node placement based on one of a number of algorithms
igraph.plot(G, bbox = (400,400), layout = G.layout('kk'), edge_width = [x for x in G.es['dinosaur_interactions']])

Manipulating a graph:

The igraph package provides many, if not most, classic graph-analysis algorithms, but much of the time a real social graph is more complicated than an analysis method can handle. Many algorithms break when presented with multiple edges or directed graphs, so we have to simplify the graph before performing analysis. I've found that the best approach is to build the most complicated graph I can store, then simplify it in a way that preserves the information I need for a given analysis.

Graph.simplify()

Graph.simplify() modifies the graph in place, so first use H = G.copy() to create a graph that we can mess with without losing G. The first thing that most analysis tools can't deal with is multiple edges.
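Before running igraph's version, here's what collapsing multiple edges amounts to in plain Python: group parallel edges by their endpoint pair and combine the attribute values (summing, to match combine_edges = {'dinosaur_interactions': sum}):

```python
# Directed multi-edge list with a weight-like attribute (made-up data)
edges = [(0, 1, 1), (0, 1, 2), (1, 0, 3), (2, 0, 4)]

combined = {}
for src, tgt, w in edges:
    # parallel edges share the same (src, tgt) key; sum their attribute
    combined[(src, tgt)] = combined.get((src, tgt), 0) + w

print(combined)  # {(0, 1): 3, (1, 0): 3, (2, 0): 4}
```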


In [ ]:
H = G.copy()
H.simplify(multiple = True, combine_edges = {'dinosaur_interactions' : sum})
igraph.plot(H,  bbox = (350,350), layout = H.layout('kk'), edge_width = [x for x in H.es['dinosaur_interactions']])

In [ ]:
# Check out how .simplify() combined our edge attributes
H.es['dinosaur_interactions']

Graph.to_undirected()

Graph.to_undirected() works much the same way as .simplify(). It has three modes: "collapse" (create a single undirected edge from multiple directed edges), "each" (keep every edge, just without arrowheads), and "mutual" (one undirected edge for each mutual directed edge pair). Exactly like .simplify(), it takes an argument (combine_edges) specifying how to combine edges: a dictionary where each key is an edge attribute and each value is the function used to combine that attribute's values. There are a few built-in functions (sum, product, mean, median, min, max, etc.), but one very useful feature of both .to_undirected() and .simplify() is that you can define your own combining function.
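The "collapse" and "mutual" modes can be sketched without igraph at all ("each" simply keeps every edge and drops the arrowheads):

```python
directed = {(0, 1), (1, 0), (1, 2)}  # 0<->1 is mutual, 1->2 is not

# "collapse": one undirected edge per pair connected in either direction
collapse = {tuple(sorted(e)) for e in directed}
# "mutual": only pairs with edges in both directions survive
mutual = {tuple(sorted((a, b))) for (a, b) in directed if (b, a) in directed}

print(sorted(collapse))  # [(0, 1), (1, 2)]
print(sorted(mutual))    # [(0, 1)]
```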


In [ ]:
def combine_dinosaur_interactions(*args):
    # args[0] is the list of attribute values from the edges being combined
    return abs(args[0][0] - args[0][1])

In [ ]:
H = G.copy()
H.to_undirected(mode = "mutual", combine_edges = {'dinosaur_interactions': combine_dinosaur_interactions})
H.es['dinosaur_interactions']

In [ ]:
igraph.plot(H, bbox = (200,200), edge_width = [x for x in H.es['dinosaur_interactions']])

A few classic ideas:

And how to use them in igraph


In [ ]:
# Grab a classic social network graph from http://nexus.igraph.org/
S = igraph.Nexus.get("UKfaculty")
print(igraph.Nexus.info("UKfaculty"))

In [ ]:
# The graph summary:
# We can see graph attributes "(g)", vertex attributes "(v)", and edge attributes "(e)"
# As well as "D" -> directed
#            "W" -> weighted
# And 81 (nodes) 817 (edges)
S.summary()

In [ ]:
# And a big, flashy, wonderful-but-messy plot
igraph.plot(S, layout = S.layout("kk"))

Connectedness

Social network graphs often demonstrate what is called a small-world structure, meaning that while most users are not directly connected to one another, they are usually within a few steps of each other. In a proper small-world network, the average path length is $L \propto \log(N)$.

Properties of these kinds of real social networks are:

  • A "giant" connected component (the vast majority of the graph belongs to one connected component)
  • Short path lengths between people in the giant component
  • Short average path length in the giant component

Reading: Collective dynamics of 'small-world' networks by D. Watts and S. Strogatz
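Finding the connected components (and thus the giant component) is just a breadth-first flood fill; a minimal plain-Python sketch on a made-up undirected graph:

```python
from collections import deque

def components(n, edges):
    """Connected components of an undirected graph on nodes 0..n-1."""
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, comps = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

# One 4-node component plus two isolated nodes -> a tiny "giant" component
comps = components(6, [(0, 1), (1, 2), (2, 3)])
print(sorted(len(c) for c in comps))  # [1, 1, 4]
```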


In [ ]:
# Returns a VertexClustering object, which is a list-like structure of lists of node indices
# mode = 'STRONG' counts nodes (in a directed graph) as connected only if they are mutually reachable
# mode = 'WEAK' treats connectivity as un-directed
S_clusters = S.clusters(mode = 'STRONG')
cluster_sizes = sorted([len(cluster) for cluster in S_clusters], reverse = True)

In [ ]:
# A list of the sizes of the clusters. One giant component + a few disconnected nodes
cluster_sizes

In [ ]:
# We see that there is a giant component; if that's the only subgraph we care about:
S_giant = S_clusters.giant() # gives us the giant component as its own graph.

We said that another important measure of small-world-ness is a relatively short distance between nodes. One way to measure this is the eccentricity of each node. From the igraph documentation: "The eccentricity of a vertex is calculated by measuring the shortest distance from (or to) the vertex, to (or from) all other vertices in the graph, and taking the maximum."
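In plain Python, eccentricity is just the largest BFS distance from a vertex; a small sketch on a 4-node path graph:

```python
from collections import deque

def eccentricity(adj, start):
    """Max shortest-path distance from start (adj: node -> neighbor list)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

# Path graph 0-1-2-3: endpoints have eccentricity 3, the middle nodes 2
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print([eccentricity(adj, v) for v in range(4)])  # [3, 2, 2, 3]
```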


In [ ]:
# returns a list of eccentricities, in the same order as S_giant.vs
eccentricities = S_giant.eccentricity()
eccentricity_avg = sum(eccentricities)/len(eccentricities)
eccentricity_max = max(eccentricities)
print("Average eccentricity: {}".format(eccentricity_avg))
print("Maximum eccentricity: {}".format(eccentricity_max))

In [ ]:
ecc = histogram(eccentricities).items()
plt.bar([x[0] for x in ecc], [x[1] for x in ecc])
plt.xlabel("Eccentricity values")
plt.ylabel("Frequency")
plt.show()

Another way to measure this is the average path length (between all pairs of nodes):


In [ ]:
S_giant.average_path_length()

Degree

One obvious network measure is degree--the number of nearest neighbors that a vertex has. On a directed graph, degree can be measured as the total number of edges connected to the vertex ($k$), the number of edges coming from the vertex ($k_{out}$), or the number of edges going to the vertex ($k_{in}$). Other often-useful degree measures are $k_{nn}$, the average degree $k$ of all of the neighbors of some vertex $n$, and $k_{nn}(k)$, the average nearest-neighbor degree of all nodes with degree $k$.
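These quantities are easy to compute by hand from an edge list; a toy sketch (the graph here is made up, and $k_{nn}$ is averaged per incident edge, ignoring direction):

```python
from collections import Counter

edges = [(0, 1), (0, 2), (1, 2), (2, 0)]  # tiny directed graph
nodes = [0, 1, 2]

k_out = Counter(s for s, t in edges)        # edges leaving each node
k_in = Counter(t for s, t in edges)         # edges arriving at each node
k = {v: k_in[v] + k_out[v] for v in nodes}  # total degree

# k_nn: average total degree of a node's neighbors (one entry per edge)
neighbors = {v: [] for v in nodes}
for s, t in edges:
    neighbors[s].append(t)
    neighbors[t].append(s)
knn = {v: sum(k[w] for w in neighbors[v]) / len(neighbors[v]) for v in nodes}

print(k)    # {0: 3, 1: 2, 2: 3}
print(knn)  # node 1's two neighbors both have degree 3, so knn[1] == 3.0
```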


In [ ]:
k_distribution = histogram(S.degree()).items()
kin_distribution = histogram(S.indegree()).items()
kout_distribution = histogram(S.outdegree()).items()
# This call returns a 2-tuple with two lists, [0] is simple k_nn, while [1] is k_nn(k)
knn = S.knn() 
knn_func = [(i+1, knn[1][i]) for i in range(0,len(knn[1])) if knn[1][i] > 0]

In [ ]:
fig, axes = plt.subplots(nrows=1, ncols=2, figsize = (14,5))
ax0 = axes[0]
ax1 = axes[1]

width = .35
ax0.bar([x[0] for x in kin_distribution], [x[1] for x in kin_distribution], width, color = 'r')
ax0.bar([x[0]+width for x in kout_distribution], [x[1] for x in kout_distribution], width, color = 'b')
ax0.legend(("$k_{in}$","$k_{out}$"))
ax0.set_xlabel("Degree (k)")
ax0.set_ylabel("Number of nodes with degree k")

ax1.plot([x[0] for x in knn_func],[x[1] for x in knn_func], 'b.')
ax1.set_xlabel("Node degree (k = $ k_{in} + k_{out}$)")
ax1.set_ylabel("Average nearest-neighbor degree ($k_{nn}$) for nodes with degree k")

fig.tight_layout()

Betweenness

Degree (i.e., how many followers a person has on Twitter) is an easy estimate of the influence, importance, Klout, etc. of a user, but it can be misleading. Imagine an example where all of my followers also follow each other. In that community I'm not necessarily an important link between people, even if I have quite a few followers. However, if I am the single link between many otherwise disjoint people, it follows that I have more control over the flow of information. In those two situations I have the same degree but a very different measure of betweenness.
A great canonical example (which we probably don't have time to go into) is the Padgett Florentine Families graph (find it on Nexus.igraph.org), which demonstrates that while the Medici (the most powerful and well-known family at the time) were neither the richest nor the most widely connected, they were an important link between many other powerful families, and so rose to great importance.

Betweenness centrality: how important a vertex is to the connectivity of its neighbors, i.e., what fraction of the shortest paths between each pair of vertices $s, t$ ($\sigma_{st}$) pass through (include) the vertex $v$ ($\sigma_{st}(v)$). The betweenness centrality of a vertex $v$ is: $$ g(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}} $$


In [ ]:
centrality = S.betweenness()

Similarly, edge betweenness: how important an edge is to the connectivity of the vertices in the network, i.e., what fraction of the shortest paths on the graph from some vertex $s$ to another vertex $t$ ($\sigma(s,t)$) include the edge $e$ ($\sigma(s,t \,| \, e)$). The betweenness centrality of an edge $e$ is: $$ g(e) = \sum_{s, t} \frac{\sigma(s,t \, | \, e)}{\sigma(s,t)} $$


In [ ]:
edge_betweenness = S.edge_betweenness()

In [ ]:
# Graph, nodes sized by betweenness
igraph.plot(S, layout = S.layout("kk"), vertex_size = [x**(.5) for x in centrality])

Assortativity

The assortativity of nodes is a simple correlation coefficient describing how much more likely an edge is to connect two nodes that share some attribute value than two nodes whose values differ. An assortativity coefficient ($r$) of 0 means that there is effectively no correlation, $0 < r \leq 1$ means that the graph is assortative for some attribute, and $-1 \leq r < 0$ means that the graph is disassortative for that attribute.

Define $e_{ij}$ to be the fraction of edges in the graph connecting a vertex of type $i$ to a vertex of type $j$; the assortativity coefficient is then defined as: $$ r = \frac{\sum_i e_{ii} - \sum_{i,j} e_{ij} e_{ji}}{1 - \sum_{i,j} e_{ij} e_{ji}} $$

Assortativity is easily expressed in terms of a mixing matrix $\mathbf{e}$, where the $(i,j)$ entry of $\mathbf{e}$ is the fraction of edges from a type-$i$ node to a type-$j$ node. Then $r$ can be rewritten in terms of the mixing matrix as: $$ r = \frac{\text{Tr}\,\mathbf{e} - ||\mathbf{e}^2||}{1 - ||\mathbf{e}^2||} $$ where $||\mathbf{e}^2||$ denotes the sum of all elements of $\mathbf{e}^2$.

From:
Mixing patterns in networks by M.E.J. Newman
Assortative mixing in networks by M.E.J. Newman
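Newman's mixing-matrix form is easy to evaluate directly; a plain-Python sketch, checked against the perfectly assortative ($r = 1$) and perfectly disassortative ($r = -1$) 2x2 cases:

```python
def assortativity(e):
    """r = (Tr e - ||e^2||) / (1 - ||e^2||) for a normalized mixing matrix e,
    where ||e^2|| is the sum of all entries of the matrix product e.e"""
    n = len(e)
    tr = sum(e[i][i] for i in range(n))
    e2 = sum(e[i][j] * e[j][k]
             for i in range(n) for j in range(n) for k in range(n))
    return (tr - e2) / (1 - e2)

print(assortativity([[0.5, 0.0], [0.0, 0.5]]))  # 1.0: all edges within type
print(assortativity([[0.0, 0.5], [0.5, 0.0]]))  # -1.0: all edges across types
```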


In [ ]:
# Calculate the assortativity coefficient for the faculty members based on their academic group (a vertex attribute)
S.assortativity('Group')

The social network is highly assortative for academic group, meaning members of the same group are more likely to associate than members of separate groups.


In [ ]:
group_to_color = {1.0: 'red', 2.0: 'blue', 3.0: 'yellow', 4.0: 'green'}
igraph.plot(S, layout = S.layout("kk"), vertex_color = [group_to_color[group] for group in S.vs['Group']])

We can also use another VertexClustering object to draw a graph of how these four groups are connected:


In [ ]:
VC_S_by_Group = igraph.VertexClustering.FromAttribute(S, attribute = 'Group')
S_by_Group = VC_S_by_Group.cluster_graph(combine_vertices = {'Group': 'first'})
S_by_Group.vs['size']  = [len(x) for x in VC_S_by_Group]
igraph.plot(S_by_Group, layout = S_by_Group.layout("kk"), bbox = (200,200), 
            vertex_color = [group_to_color[group] for group in S_by_Group.vs['Group']])

Degree assortativity is a special case of assortativity, and measures the likelihood of connections between nodes with a similar degree.


In [ ]:
S.assortativity_degree()

The network is non-assortative for degree.

Similarity

Temporal & neighborhood similarity (correlation) across nodes

[#ShamelessPlug]

Communities

A connected graph is not necessarily homogeneously connected. We saw assortativity measure which types of vertices like to connect to each other, but what about groups that can't be defined by known vertex attributes? In social networks, some groups of users are more tightly connected than others--we see groups of tightly connected users ("communities") with only a few ties going to the world around them. This idea is very important for identifying target groups, finding commonalities, and even spotting bots (check this out from our friends at Sysomos: https://twitter.com/edkim_mw/status/476731214276329472/). Ideas about communities can be usefully extended to other types of networks, like topic identification: think of words as vertices, and their appearance in the same text as an edge. Words that appear together most commonly will form communities--topics.

The basic premise of identifying a network community is finding groups of users that are more tightly connected to each other than to everyone else. There are several available algorithms (of varying efficacy, computational cost, and complexity); I'm going to focus on four.

  1. Edge betweenness community detection
    • Community structure in social and biological networks by M. Girvan and M.E.J. Newman
    • Progressively remove the edges in the graph with the highest betweenness, effectively cutting communities off from one another
    • Very expensive to compute, impractical for large networks
    • igraph implementation: Graph.community_edge_betweenness()
    • Works for weighted, directed graphs
  2. Modularity maximization
    • Finding community structure in very large networks by A. Clauset, M.E.J. Newman, C. Moore
    • Begin with every vertex in its own community, then greedily maximize modularity, a measure of in-group vs out-group connections by agglomerating existing clusters.
    • Practical for larger networks.
    • Has serious problems with resolution--it cannot resolve communities below a certain size, and often mistakenly groups together disjoint communities. A good review of why modularity-maximization-based approaches are flawed in this way is presented in Resolution limit in community detection by S. Fortunato and M. Barthélemy.
    • igraph implementation: Graph.community_fastgreedy()
    • Works for weighted, un-directed graphs
  3. Random walk
    • Computing communities in large networks using random walks by P. Pons and M. Latapy
    • Based on the idea that a random walker following graph edges will spend a disproportionate amount of time in a community
    • Slow, but not impossibly so, for larger networks
    • I haven't done as much reading about this algorithm as the others, so I'm not aware of other pros/cons.
    • igraph implementation: Graph.community_walktrap()
    • Works for weighted, directed graphs
  4. Probabilistic community groupings
    • A Bayesian Approach to Network Modularity by J. Hofman and C. Wiggins, source code at vbmod.sf.net
    • Fast, no resolution limit
    • Not implemented in igraph, but source is supplied for python and I bet we could get it working pretty seamlessly with igraph--but I need some time/help figuring out what the heck is going on with the code.

A quick(ish) review of much of the available literature & code on the subject of community detection can be found in Community detection in graphs by S. Fortunato.
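Several of these methods score a candidate partition by its modularity, $Q = \sum_c \left[ \frac{l_c}{m} - \left( \frac{d_c}{2m} \right)^2 \right]$, the fraction of in-community edges minus the fraction expected at random. Here's a plain-Python sketch on a made-up graph of two triangles joined by one bridge edge:

```python
def modularity(edges, communities):
    """Q = sum over communities of (in-edges/m - (degree sum/2m)^2), undirected."""
    m = len(edges)
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    q = 0.0
    for comm in communities:
        inside = sum(1 for a, b in edges if a in comm and b in comm)
        deg_sum = sum(degree[v] for v in comm)
        q += inside / m - (deg_sum / (2.0 * m)) ** 2
    return q

# Two triangles joined by the bridge (2, 3); the natural split scores Q = 5/14
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))
```

The fast-greedy method below starts from singleton communities and repeatedly merges the pair of communities that increases Q the most.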

1. Community edge betweenness


In [ ]:
S_dendrogram_eb = S.community_edge_betweenness(weights = 'weight')
S_communities_eb = S_dendrogram_eb.as_clustering()
igraph.plot(S_dendrogram_eb)

In [ ]:
igraph.plot(S_communities_eb, layout = S.layout("kk"))

2. Fast-greedy modularity maximization


In [ ]:
# This algorithm only takes undirected graphs, so we get to use some graph manipulation tools from earlier
S_undirected = S.copy()
S_undirected.to_undirected(mode = "mutual", combine_edges = {'weight': sum})
S_dendrogram_fg = S_undirected.community_fastgreedy(weights = 'weight')
S_communities_fg = S_dendrogram_fg.as_clustering()
igraph.plot(S_dendrogram_fg)

In [ ]:
igraph.plot(S_communities_fg, layout = S.layout("kk"))

3. Random walk, "walk-trap" method


In [ ]:
S_dendrogram_wt = S.community_walktrap(weights = 'weight')
S_communities_wt = S_dendrogram_wt.as_clustering()
igraph.plot(S_dendrogram_wt)

In [ ]:
igraph.plot(S_communities_wt, layout = S.layout("kk"))

4. Variational Bayes module detection

Download the vbmod python distribution on vbmod.sf.net, and figure out how it works.

Real data!

We didn't use this data before because everything runs waaaay slower than on the little graph we've been working with. I would, however, encourage you to evaluate these cells after we're done and see how this all works with real, live Disqus social network data. Everything below runs quite nicely on this large dataset, but even so, be prepared to wait a few minutes for each cell to run on your laptop.

This data is all Disqus activity on ~20 top urls in varying topics (business, news, gossip, trivia, consumer info, etc.) over two days (4-21-2014 and 4-22-2014). Each edge in the graph has a few attributes, including a 'conversation' attribute that tells us on what article url the activity that generated that edge was created.


In [ ]:
# Load the Disqus social network data from my provided graph file
# This was created (over the course of several hours) using build_user_graph from the command line
Disqus_network = pickle.load(open("output_graphs/test_graph.p","r"))
# Make a copy that we can simplify, for the sake of analysis
D = Disqus_network.copy()
# Define our own functions to combine edges
def concat_conv(*args):
    return args[0]
def concat_list(*args):
    return [x for sublist in args[0] for x in sublist]
# Simplify the graph--remove multiple edges, remove self loops
D.simplify(multiple = True, loops = True, 
           combine_edges = {'weight':sum, 'conversation': concat_conv, 'record':'ignore', 'time':'ignore'})
# Find the giant connected component (for most of our analysis we'll use this piece)
# Make note of how many nodes this giant component has compared to the rest of the graph.
D_clusters = D.clusters(mode = 'WEAK')
D_giant = D_clusters.giant()
# And make an undirected copy of the giant component, which we can use for algorithms that need undirected input
D_undirected = D_giant.as_undirected(mode = 'collapse', combine_edges = {'weight':sum, 'conversation': concat_list})

In [ ]:
# Let's look at what we've got:
print("Disqus_network, the totally un-simplified graph: {} \n".format(Disqus_network.summary()))
print("D, without multi-edges and without self-loops: {} \n".format(D.summary()))
print("D_giant, the giant (weakly connected) component: {} \n".format(D_giant.summary()))
print("D_undirected, the undirected graph: {} \n".format(D_undirected.summary()))

In [ ]:
cluster_sizes = [len(x) for x in D_clusters]

Degree


In [ ]:
k_distribution_D = histogram(D.degree()).items()
kin_distribution_D = histogram(D.indegree()).items()
kout_distribution_D = histogram(D.outdegree()).items()
# This call returns a 2-tuple with two lists, [0] is simple k_nn, while [1] is k_nn(k)
knn_D = D.knn() 
knn_func_D = [(i+1, knn_D[1][i]) for i in range(0,len(knn_D[1])) if knn_D[1][i] > 0]

In [ ]:
fig_D, axes_D = plt.subplots(nrows=1, ncols=2, figsize = (14,5))
ax0_D = axes_D[0]
ax1_D = axes_D[1]

width_D = .35
ax0_D.bar([x[0] for x in kin_distribution_D[0:30]], [x[1] for x in kin_distribution_D[0:30]], width_D, color = 'r')
ax0_D.bar([x[0]+width_D for x in kout_distribution_D[0:30]], [x[1] for x in kout_distribution_D[0:30]], width_D, color = 'b')
ax0_D.legend(("$k_{in}$","$k_{out}$"))
ax0_D.set_xlabel("Degree (k)")
ax0_D.set_ylabel("Number of nodes with degree k")

ax1_D.plot([x[0] for x in knn_func_D],[x[1] for x in knn_func_D], 'b.')
ax1_D.set_xlabel("Node degree (k = $ k_{in} + k_{out}$)")
ax1_D.set_ylabel("Average nearest-neighbor degree ($k_{nn}$) for nodes with degree k")

fig_D.tight_layout()

Small-world connectivity and vertex eccentricity


In [ ]:
# returns a list of eccentricities, in the same order as D_giant.vs
# If you want to see this, you have to be patient. It takes a few minutes (~ five)
eccentricities_D = D_giant.eccentricity(mode = 'ALL') # mode = 'ALL' means ignore edge direction
eccentricity_avg_D = sum(eccentricities_D)/len(eccentricities_D)
eccentricity_max_D = max(eccentricities_D)
print("Average eccentricity: {}".format(eccentricity_avg_D))
print("Maximum eccentricity: {}".format(eccentricity_max_D))

In [ ]:
ecc_D = histogram(eccentricities_D).items()
plt.bar([x[0] for x in ecc_D], [x[1] for x in ecc_D])
plt.xlabel("Eccentricity values")
plt.ylabel("Frequency")
plt.show()

Community structure in a real social network


In [ ]:
D_dendrogram_wt = D_giant.community_walktrap(weights = 'weight')
D_communities_wt = D_dendrogram_wt.as_clustering(500) 
# 500 means that I'm arbitrarily choosing that there should be roughly 500 clusters.
# Not a great thing to do, but otherwise we get so many clusters that it's not useful as an example
D_dendrogram_wt.optimal_count

In [ ]:
igraph.plot(D_communities_wt.cluster_graph(), vertex_size = [len(x)**(.5) for x in D_communities_wt])

That looks...spiky.


In [ ]:
D_undirected = D_giant.as_undirected(mode = 'collapse', combine_edges = {'weight':sum, 'conversation': concat_list})
D_dendrogram_fg = D_undirected.community_fastgreedy()
D_communities_fg = D_dendrogram_fg.as_clustering()
D_dendrogram_fg.optimal_count

In [ ]:
# Look at how communities are connected
igraph.plot(D_communities_fg.cluster_graph(), vertex_size = [len(x)**(.5) for x in D_communities_fg])

So what? We can see that the community finding algorithms give pretty different results, and we know that somebody has to be off. How can we say that they are giving us anything useful?

Well, one thing we can see in that last graph is that we seem to have two very big clusters opposite each other. Let's see what kinds of conversation-edges those clusters contain.


In [ ]:
cluster_sizes_D = [len(x) for x in D_communities_fg]
sorted_cluster_sizes = sorted(cluster_sizes_D,reverse = True)
percentages = [sum(sorted_cluster_sizes[0:x])/float(sum(cluster_sizes_D)) for x in range(1,len(cluster_sizes_D))]

Arbitrary cutoff time. The 9 largest community clusters in D_communities_fg encompass $> 90\%$ of the vertices in the graph, so let's care about the biggest communities first.


In [ ]:
# Make a list of all of the conversation-edges in each community:
# conversations[i] will be a list of (conversation url, edge count) pairs for community i, sorted by count
conversations = [{} for x in range(0,len(D_communities_fg))]
for i, cluster in enumerate(D_communities_fg):
    sample_subgraph = D_undirected.subgraph(cluster)
    edge_c = sample_subgraph.es['conversation']
    cs = []
    for c in edge_c:
        cs.extend(c)
    conversations[i] = sorted(histogram(cs).items(), key=lambda x: x[1], reverse = True)

In [ ]:
conversations[cluster_sizes_D.index(sorted_cluster_sizes[0])]

In [ ]:
conversations[cluster_sizes_D.index(sorted_cluster_sizes[1])]

In [ ]:
conversations[cluster_sizes_D.index(sorted_cluster_sizes[2])]

In [ ]:
conversations[cluster_sizes_D.index(sorted_cluster_sizes[3])]