Following up on [Unbalanced factor of KMeans?], I am trying to compute the unbalanced factor, but I am failing somewhere in Spark. Each element of the RDD `r2_10` is a pair, where the key is a cluster and the value is a tuple of points; all of these are IDs. Below I show what happens:
In [1]: r2_10.collect()
Out[1]:
[(0, ('438728517', '28138008')),
(13824, ('4647699097', '6553505321')),
(9216, ('2575712582', '1776542427')),
(1, ('8133836578', '4073591194')),
(9217, ('3112663913', '59443972', '8715330944', '56063461')),
(4609, ('6812455719',)),
(13825, ('5245073744', '3361024394')),
(4610, ('324470279',)),
(2, ('2412402108',)),
(3, ('4766885931', '3800674818', '4673186647', '350804823', '73118846'))]
In [2]: pdd = r2_10.map(lambda x: (x[0], 1)).reduceByKey(lambda a, b: a + b)
In [3]: pdd.collect()
Out[3]:
[(13824, 1),
(9216, 1),
(0, 1),
(13825, 1),
(1, 1),
(4609, 1),
(9217, 1),
(2, 1),
(4610, 1),
(3, 1)]
In [4]: n = pdd.count()
In [5]: n
Out[5]: 10
In [6]: total = pdd.map(lambda x: x[1]).sum()
In [7]: total
Out[7]: 10
and `total` should hold the total number of points. However, it is 10... while it should be 22!
What am I missing here?
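A minimal sketch of where the count goes wrong, assuming the goal is the total number of points: mapping every pair to `(key, 1)` counts one per *cluster entry*, not per point, so summing gives the number of entries (10). Counting `len(x[1])` instead sums the points inside each value tuple. The sketch below reproduces the computation in plain Python (no Spark needed) on the data shown above; in Spark the fix would be the analogous `r2_10.map(lambda x: (x[0], len(x[1])))`.

```python
# Plain-Python sketch of the fix: count the points inside each
# value tuple instead of counting 1 per key.
data = [
    (0, ('438728517', '28138008')),
    (13824, ('4647699097', '6553505321')),
    (9216, ('2575712582', '1776542427')),
    (1, ('8133836578', '4073591194')),
    (9217, ('3112663913', '59443972', '8715330944', '56063461')),
    (4609, ('6812455719',)),
    (13825, ('5245073744', '3361024394')),
    (4610, ('324470279',)),
    (2, ('2412402108',)),
    (3, ('4766885931', '3800674818', '4673186647', '350804823', '73118846')),
]

# Equivalent of r2_10.map(lambda x: (x[0], len(x[1]))).reduceByKey(add):
counts = {}
for key, points in data:
    counts[key] = counts.get(key, 0) + len(points)

# Summing the per-cluster point counts gives the total number of points.
total = sum(counts.values())
print(total)  # 22
```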
By the way, there are some useful methods you could use here, such as [keys](https://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=rdd#pyspark.RDD.keys) or [mapValues](https://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=rdd#pyspark.RDD.mapValues). –
I wonder why you mentioned `keys()`, Alberto; I don't see how it helps here... – gsamaras
Because you can count the number of keys. For example, `keys = rdd.keys().count()` –