I am experimenting with clustering and I was surprised at how slow it seems to be. I made a random graph with 30 communities, each containing 30 nodes. Nodes within a community have a 90% chance of being connected, and edges between nodes in different communities have a 10% chance of being present. I measure the similarity between two nodes as the Jaccard similarity between their sets of neighbors. Clustering this with DBSCAN is surprisingly slow.
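(Concretely, the distance between nodes u and v is 1 - |N(u) ∩ N(v)| / |N(u) ∪ N(v)|, where N(x) is the set of neighbors of x, exactly as jaccard_distance in the code below computes it.)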
This toy example spends about 15 seconds on the dbscan part alone, and that time grows very quickly if I increase the number of nodes. Since there are only 900 nodes in total, this seems very slow.
from __future__ import division
import numpy as np
from sklearn.cluster import dbscan
import networkx as nx
import matplotlib.pyplot as plt
import time

# Define the Jaccard distance. Following the example for clustering with
# Levenshtein distance from http://scikit-learn.org/stable/faq.html
def jaccard_distance(x, y):
    return 1 - len(neighbors[x].intersection(neighbors[y]))/len(neighbors[x].union(neighbors[y]))

def jaccard_metric(x, y):
    i, j = int(x[0]), int(y[0])     # extract indices
    return jaccard_distance(i, j)

# Simulate a planted partition graph. The simplest form of community detection benchmark.
num_communities = 30
size_of_communities = 30
print "planted partition"
G = nx.planted_partition_graph(num_communities, size_of_communities, 0.9, 0.1, seed=42)

# Make a hash table of sets of neighbors for each node.
neighbors = {}
for n in G:
    for nbr in G[n]:
        if not (n in neighbors):
            neighbors[n] = set()
        neighbors[n].add(nbr)
print "Made data"

X = np.arange(len(G)).reshape(-1, 1)

t = time.time()
db = dbscan(X, metric=jaccard_metric, eps=0.85, min_samples=2)
print db
print "Clustering took ", time.time()-t, "seconds"
How can I make this scale better to larger numbers of nodes?
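The comments below refer to a distance matrix 'D'. A minimal sketch of one way to build it, assuming the goal is to precompute every pairwise Jaccard distance in one shot from the adjacency matrix A, where |N(u) ∩ N(v)| is the (u, v) entry of A·A, and then pass metric='precomputed' to DBSCAN:

import numpy as np
from sklearn.cluster import DBSCAN

# Assumes G from the question above.
n = len(G)
A = np.zeros((n, n))
for u in G:
    for v in G[u]:
        A[u, v] = 1.0

inter = A.dot(A)                              # inter[u, v] = |N(u) & N(v)|
deg = A.sum(axis=1)                           # deg[u] = |N(u)|
union = deg[:, None] + deg[None, :] - inter   # |N(u) | N(v)|
D = 1.0 - inter / union                       # assumes no isolated nodes (no 0/0)

db = DBSCAN(eps=0.85, min_samples=2, metric='precomputed').fit(D)
print(db.labels_)

This does the same O(n²) work as before, but inside NumPy's matrix routines rather than one Python-level metric call per pair, which is where the time goes when dbscan is given a callable metric.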
This is a really great answer. Thank you! – eleanora
Depending on how many nodes you want to scale to, more tricks may be needed. For example, the memory for holding 'D' may become a problem; sparse matrices (which are accepted by [DBSCAN](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html)) might help with that (see the sketch after this exchange). –
Sounds good. I will definitely look into it, since I want to use larger graphs. – eleanora
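As for the sparse-matrix suggestion above, a minimal sketch of what it could look like, assuming the dense D from the previous sketch is available (for genuinely large graphs you would have to generate the entries blockwise instead of materialising D first). DBSCAN treats entries absent from a sparse precomputed matrix as lying further than eps away:

import numpy as np
from scipy.sparse import csr_matrix
from sklearn.cluster import DBSCAN

eps = 0.85
# Keep only off-diagonal pairs that are within eps of each other.
mask = (D <= eps) & ~np.eye(len(D), dtype=bool)
rows, cols = np.where(mask)
D_sparse = csr_matrix((D[rows, cols], (rows, cols)), shape=D.shape)
# Caveat: pairs at exactly distance 0 need care, since sparse formats
# may drop explicitly stored zeros.

db = DBSCAN(eps=eps, min_samples=2, metric='precomputed').fit(D_sparse)
print(db.labels_)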