Clustering with DBSCAN is surprisingly slow

2016-07-26

I'm experimenting with clustering, and I'm surprised at how slow it seems to be. I made a random graph with 30 communities, each containing 30 nodes. Nodes within a community have a 90% chance of being connected, and nodes in different communities have a 10% chance of being connected. I'm measuring the similarity between two nodes as the Jaccard similarity between their sets of neighbors.

This toy example spends about 15 seconds on the dbscan part alone, and that time grows very rapidly if I increase the number of nodes. Since there are only 900 nodes in total, this seems very slow.

from __future__ import division 
import numpy as np 
from sklearn.cluster import dbscan 
import networkx as nx 
import matplotlib.pyplot as plt 
import time 

#Define the Jaccard distance. Follows the example for clustering with Levenshtein distance at http://scikit-learn.org/stable/faq.html 
def jaccard_distance(x,y): 
    return 1 - len(neighbors[x].intersection(neighbors[y]))/len(neighbors[x].union(neighbors[y])) 

#DBSCAN's callable metric receives rows of X, so each node's index is 
#passed as a single-element array and its neighbor set is looked up by index. 
def jaccard_metric(x,y): 
    i, j = int(x[0]), int(y[0])  # extract indices 
    return jaccard_distance(i, j) 

#Simulate a planted partition graph. The simplest form of community detection benchmark. 
num_communities = 30 
size_of_communities = 30 
print "planted partition" 
G = nx.planted_partition_graph(num_communities, size_of_communities, 0.9, 0.1,seed=42) 

#Make a hash table of sets of neighbors for each node. 
neighbors = {} 
for n in G: 
    for nbr in G[n]: 
        if n not in neighbors: 
            neighbors[n] = set() 
        neighbors[n].add(nbr) 

print "Made data" 

X = np.arange(len(G)).reshape(-1, 1) 

t = time.time() 
db = dbscan(X, metric=jaccard_metric, eps=0.85, min_samples=2) 
print db 

print "Clustering took ", time.time()-t, "seconds" 

How can I make this scale to a larger number of nodes?

Answer


Here is a solution that speeds up the DBSCAN call by a factor of about 1890 on my machine. The original version is slow because the custom Python metric is called once for every pair of nodes; instead, let igraph compute all pairwise Jaccard similarities in C and give DBSCAN the precomputed distance matrix:

# the following code should be added to the question's code (it uses G and db) 

import igraph 

# use igraph to calculate Jaccard distances quickly 
edges = zip(*nx.to_edgelist(G))  # transpose the edge list into (u's, v's, data) 
G1 = igraph.Graph(len(G), zip(*edges[:2]))  # rebuild the graph in igraph from the (u, v) pairs 
D = 1 - np.array(G1.similarity_jaccard(loops=False)) 

# DBSCAN is much faster with metric='precomputed' 
t = time.time() 
db1 = dbscan(D, metric='precomputed', eps=0.85, min_samples=2) 
print "clustering took %.5f seconds" %(time.time()-t) 

assert np.array_equal(db, db1) 

Here is the output:

... 
Clustering took 8.41049790382 seconds 
clustering took 0.00445 seconds 

This is a really great answer. Thank you! – eleanora


Depending on how many nodes you want to scale to, more tricks may be needed. For example, the memory for storing 'D' may become a problem; sparse matrices (which are accepted by [DBSCAN](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html)) may help with that (see the sketch below). –


Sounds good. I will definitely look into that, since I want to use much bigger graphs. – eleanora
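
For reference, here is a minimal sketch of the sparse-matrix variant suggested in the comments. It is an illustration under stated assumptions, not part of the original answer: it reuses 'D' and the eps=0.85 threshold from the answer's code, and it keeps only pairs whose distance is within eps, since scikit-learn treats entries absent from a sparse precomputed matrix as non-neighbors.

# Sketch only: cluster with a sparse precomputed distance matrix. 
# Assumes D and dbscan from the answer's code above. 
from scipy import sparse 

eps = 0.85 

# Keep only pairs within eps. Stored zeros would be dropped when 
# sparsifying and then treated as non-neighbors, so true-zero distances 
# (e.g. the diagonal) are nudged up to a tiny positive value. 
D_kept = np.where(D <= eps, np.maximum(D, 1e-12), 0.0) 
D_sparse = sparse.csr_matrix(D_kept) 

db2 = dbscan(D_sparse, metric='precomputed', eps=eps, min_samples=2) 

Building 'D_kept' from the dense 'D' does not save memory by itself; the point is only to show the input format. For a large graph, the within-eps pairs would be generated directly, for example from each node's two-hop neighborhood, since nodes with no common neighbors have Jaccard distance 1 and need no entry at all.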