
Source code for community detection can be found at https://github.com/vtraag/louvain-igraph, but it’s easier to simply use the Python package directly from https://pypi.python.org/pypi/louvain/. Here are some of its features:

- Constant Potts Model (CPM) resolution free community detection
- Modularity optimization
- Reichardt-Bornholdt model with Erdos-Renyi null-model
- Dealing with negative links (i.e. with negative weights)
- Tuneable resolution parameters
- Multislice community detection

This implementation is designed to be used with Python and igraph. I highly recommend using this implementation: it is fast and flexible, implements a variety of different methods, and is easy to use.

An earlier implementation used pure Python, so it wasn’t nearly as fast. Earlier still there was another C++ implementation, but that one was quite a headache to use because of some unusual file formats. I mention these only in case you still have some interest in them; normally you shouldn’t use them.

Finally, here is a small but important note concerning the significance of a partition. The significance of a partition \(\sigma\) is defined as follows:

\[\mathcal{S}(\sigma) = \sum_c {n_c \choose 2} D(p_c \parallel p),\]

where \(n_c\) is the number of nodes in community \(c\), \(p_c\) is the density of community \(c\), and \(p\) is the density of the graph as a whole. Here \(D(p_c \parallel p)\) represents the binary Kullback-Leibler divergence:

\[D(q \parallel p) = q \log \frac{q}{p} + (1 - q) \log \frac{1 - q}{1 - p}.\]

In practice we used

\[\mathcal{S}(\sigma) = \sum_c n_c(n_c - 1) D(p_c \parallel p),\]

because the division by \(2\) is simply a scaling factor. The significance of a random graph is expected to behave as \(\mathcal{S}(\sigma) \sim n \log n\). So you should compare the significance of the observed partition to \(n \log n\) using this definition, or to \(\frac{1}{2} n \log n\) using the earlier definition.
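As a concrete illustration, the unscaled form above can be computed directly. The sketch below is my own pure-Python helper (not code from the package), taking per-community node and internal-edge counts for an undirected simple graph:

```python
import math

def kl(q, p):
    """Binary Kullback-Leibler divergence D(q || p)."""
    d = 0.0
    if q > 0:
        d += q * math.log(q / p)
    if q < 1:
        d += (1 - q) * math.log((1 - q) / (1 - p))
    return d

def significance(communities, n_total, m_total):
    """S(sigma) = sum_c n_c (n_c - 1) D(p_c || p).

    `communities` is a list of (n_c, m_c) pairs: the number of nodes
    and the number of internal edges of each community.
    """
    p = m_total / (n_total * (n_total - 1) / 2)   # overall density
    s = 0.0
    for n_c, m_c in communities:
        if n_c < 2:
            continue                              # no internal pairs
        p_c = m_c / (n_c * (n_c - 1) / 2)         # community density
        s += n_c * (n_c - 1) * kl(p_c, p)
    return s

# Two 4-cliques joined by a single edge: n = 8, m = 13.
s = significance([(4, 6), (4, 6)], n_total=8, m_total=13)
```

Here each community has density \(p_c = 1\), so the sum reduces to \(24 \log \frac{28}{13}\).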

Hi, Doc. Vincent Traag,

I have been using your Python Louvain code recently. Following Mucha et al.’s 2010 Science paper, which you cite, I want to use weighted coupling strengths ω between slices, but I cannot see how to set these weights in the code. Is there any way that I can add weights to the inter-slice links?

Looking forward to your reply!

Thank you so much!

Dear Peng Fang,

You can indeed use this implementation to apply Mucha et al.’s idea for community detection. However, it is a bit trickier, and you have to prepare some data yourself. This code works with layers, not with slices, the difference being that all layers need to be defined on the same graph, i.e. all the nodes should be contained in all layers and only the edges can differ. To use time slices in this module, for example, you should first join everything together so that all edges for all times are contained within one graph, and also include the interslice edges. Then create a layer for each subgraph that only includes the appropriate edges for that particular time. Finally, also add the layer containing only the interslice edges.

In a bit more detail, suppose you have graph `G1` at time 1 and `G2` at time 2. Then create a graph `G` which combines `G1` and `G2`, and add the interslice links. Graph `G` thus has `n = n1 + n2` nodes and `m = m1 + m2` edges. Hence, if node `i` appears in both `G1` and `G2`, it appears twice in `G`. Also add the interslice edges to `G` (i.e. connecting the two copies of node `i` whenever it appears in both `G1` and `G2`). Then take the subgraph of the edges (but leave the nodes in place) for time 1, for time 2, and for the interslice edges, and use these as layers. Perhaps an explicit example makes it even more clear:

Dear Vincent,

Hi,

Thank you very much for your code! I tried your script on a set of weighted graphs. The `E` that I create looks correct, and shows the weights of links both within and between slices. But when making `G`, it seems that `GraphFromPandasEdgeList` only looks at the edge list, disregards the weights, and binarizes the input (if weight > 0: weight = 1). How can I modify the script so that it uses the weights when finding the partition?

Thanks,

Mahshid

Hi Mahshid,

Sorry for the long delay in replying (and approving your comment); your message got lost among other messages.

There was a bug in the passing of the weights from Python to C++: they were parsed as integers. This is now fixed. However, the fix is not (yet) part of any release, so you have to download the source and compile it yourself. Alternatively, you can simply take the most recent development version (which also includes other changes).

I am currently working towards a release version, but it will take some time still before that is done. So if compiling is a bridge too far, you may simply want to wait for that release.

Best,

Vincent

Dear Vincent, (dear Peng)

that is exactly what I am looking for! I can get the example running nicely. However, I am now stuck adapting it to my own data. Basically, I have exported a multi-slice adjacency matrix (i.e. 50×50×20, with 20 being the number of temporal samples) from Matlab, which is where I used to do things. I read that in as a numpy array, but then get stuck trying to set up the multi-layers (as opposed to the slices, which would be easier). Somehow I cannot figure out the right way through numpy -> pandas -> igraph -> subgraph to prepare the data for analysis. Any hints would be appreciated.

Best, ralf.

Dear Ralf,

I have some code lying around for making this easier, assuming you have a sequence of graphs `Gt = [G_0, G_1, ....]`. I am currently working on a re-implementation of my Python code. The code will probably find its way into the public package, but I’m not completely done (yet) with the re-implementation. If you want, I can send you the code via mail. I’ll first have to tidy it up a bit, and make sure it works with the current public implementation. But perhaps you don’t have a sequence of graphs yet, so that you need some other help first? Let me know.
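For Ralf’s situation, the first step — turning a stack of adjacency matrices into one weighted edge list per slice — might look as follows. This is a hypothetical helper of my own that works on nested lists; with an actual numpy array you would slice along the third axis instead:

```python
def slices_to_edge_lists(A):
    """Convert a stack of adjacency matrices A[t][i][j] into one
    weighted edge list per time slice. Only the upper triangle is
    used, assuming undirected graphs; entries are (i, j, weight)."""
    edge_lists = []
    for adj in A:
        n = len(adj)
        edges = [(i, j, adj[i][j])
                 for i in range(n)
                 for j in range(i + 1, n)
                 if adj[i][j] != 0]
        edge_lists.append(edges)
    return edge_lists

# Two 2x2 slices: one with a single weighted edge, one empty.
A = [[[0, 2],
      [2, 0]],
     [[0, 0],
      [0, 0]]]
Gt = slices_to_edge_lists(A)
```

Each edge list can then be turned into an igraph graph (for instance with `Graph.TupleList`), after which combining everything into one graph with interslice links proceeds as described earlier in this thread.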

Best,

Vincent

Dear Dr. Vincent Traag,

Thank you for your fantastic library! May I ask you two questions?

I checked the quality function for RBConfiguration, which is below. Could you tell me where this function comes from (a reference)? It looks very different from what I could find, and I failed to understand it.

When I call this function, I write

But in this case, partitions which have only one community always get the highest quality. How does this `resolution_parameter` argument work?

Thank you in advance.

Sincerely,

Keisuke Daimon

Sorry, but let me add one thing. This is how I calculate the variable, comm:

Dear Keisuke,

The formulation for modularity is a generalised form by Reichardt and Bornholdt [1] of the modularity introduced by Newman and Girvan [2]. They also introduced the resolution parameter (among other things). In a sense, the formulation also builds on some minor variations for directed networks [3] and weighted networks [4], which are straightforward extensions of the definition in [1]. The formulation as used in the code is applicable to directed and non-directed networks and weighted and non-weighted networks.

A higher resolution parameter essentially puts a higher penalty on community size, so that you get smaller communities for higher resolution parameters, and larger communities for lower resolution parameters. For a very low (near-zero) resolution parameter you incur (near) zero cost, so you obtain only the benefit of putting all links within a single community, and the modularity equals the number of edges \(m\) in the graph (or the total weight). For a very high resolution parameter you get a singleton partition: you obtain zero benefit (assuming no loops in the network) and incur a cost from the community sizes, so that the modularity equals \(- \frac{\gamma}{4m} \sum_i k_i^2\). Hence, the modularity decreases from \(m\) to \(- \frac{\gamma}{4m} \sum_i k_i^2\) as the resolution parameter goes from low to high.
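To make these two limits concrete, here is a small pure-Python sketch of the generalised modularity in the unnormalised form \(Q = \sum_c \left( m_c - \gamma \frac{K_c^2}{4m} \right)\), where \(m_c\) is the number of edges inside community \(c\) and \(K_c\) its total degree (my own helper, not the package’s implementation):

```python
from collections import defaultdict

def rb_modularity(edges, membership, gamma=1.0):
    """Unnormalised Reichardt-Bornholdt modularity
    Q = sum_c (m_c - gamma * K_c**2 / (4 m)) for an undirected,
    unweighted graph without loops."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m_c = defaultdict(int)   # edges inside each community
    K_c = defaultdict(int)   # total degree of each community
    for u, v in edges:
        if membership[u] == membership[v]:
            m_c[membership[u]] += 1
    for node, k in deg.items():
        K_c[membership[node]] += k
    return sum(m_c[c] - gamma * K_c[c] ** 2 / (4 * m)
               for c in set(membership.values()))

# Triangle graph: m = 3, every node has degree 2.
edges = [(0, 1), (1, 2), (0, 2)]
q_low = rb_modularity(edges, {0: 0, 1: 0, 2: 0}, gamma=0.0)   # one community
q_high = rb_modularity(edges, {0: 0, 1: 1, 2: 2}, gamma=1.0)  # singletons
```

With everything in one community at \(\gamma = 0\) this gives \(Q = m = 3\); for the singleton partition at \(\gamma = 1\) it gives \(Q = -\frac{1}{4m} \sum_i k_i^2 = -1\), matching the two limits above.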

In summary, you can’t rely on the modularity values to choose the ‘best’ partition, since it always decreases, and you need to rely on other ways to choose the right resolution. This is what we tried to do in [5]. But more profoundly, automatically selecting the ‘right’ partition is bound to create some other issues (similar to the resolution limit). In the end, choosing the ‘right’ resolution depends on your goals and should probably also be guided by more substantive concerns.

Hope this helps.

Best,

Vincent

## References

Dear Dr. Vincent Traag,

Thank you for your answer. I actually confused the Significance vertex partition with using Significance for the evaluation of partitions. What methods are available for louvain.quality? I tried Modularity, RBConfiguration and Significance. Do you have more? Also, do you recommend Significance?

Dear Keisuke,

There are two different possible uses of Significance (or any other quality function, for that matter). You can optimize Significance directly, or you can evaluate the quality of any given `partition` with it. The available methods are listed on the PyPI package page and on the GitHub repository. They can all be used either to optimize directly or to simply evaluate the quality of a given partition.

The combination of doing the bisectioning (as you do) and using Significance to guide your selection of interesting resolutions should work quite well, I think. Optimizing Significance directly also works quite well, but seems to give somewhat smaller communities, similar to Surprise (see also [1]).

Best,

Vincent

## References

I am trying methods other than Significance for optimization, because my data includes weights and directions. I will use Significance as a quality function.

Thank you for the explanation.

You’re quite correct to use other methods than Significance if you have a weighted network. However, whether you use Significance as a quality function (i.e. to determine the quality of a given partition) or for optimization (i.e. to try to find the partition with the best quality), it really is one and the same thing. So you can’t use Significance at all if you have a weighted network (you can use Surprise, though). Note that Significance is well-defined for directed networks, by the way.

Good luck!

I’m sorry for the late reply.

As far as I understand, Significance is useful for knowing which resolution is best for making clusters. If Significance is not applicable to weighted networks, is there no method for choosing the best resolution?

If so, I am thinking of choosing the resolution such that I get 4 clusters. Is this a good idea?

Yes, you could use Significance to guide your search for “good” resolutions. Surprise is a very similar measure, see [1], which you could use for the same purpose.

If you want 4 clusters, then selecting a resolution such that you get 4 clusters seems like a good idea.

Thank you for your help. I will read the paper about Surprise.

I will just avoid Significance because my data is weighted, and try to decide how many clusters I should get and what method I should use.