2017-04-08

I have tried map, mapValues, and sorted, but nothing works. The problem statement is: "Sort by similarity (the second element in each value pair); if similarities are equal, keep only the user with the smallest ID (the first element in each value pair)." The list of key-value pairs is shown below. How can I perform this sorting in pyspark?

[ 
    (18, [(2, 0.5)]), 
    (30, [(19, 0.5), (6, 0.25)]), 
    (6, [(30, 0.25), (20, 0.2), (19, 0.2)]), 
    (19, [(30, 0.5), (8, 0.2), (6, 0.2)]), 
    (2, [(18, 0.5)]), 
    (26, [(9, 0.2)]), 
    (9, [(26, 0.2)]) 
] 

The result I want is:

[ 
    (18, [(2, 0.5)]), 
    (30, [(19, 0.5), (6, 0.25)]), 
    (6, [(30, 0.25), (19, 0.2)]), 
    (19, [(30, 0.5), (6, 0.2)]), 
    (2, [(18, 0.5)]), 
    (26, [(9, 0.2)]), 
    (9, [(26, 0.2)]) 
] 

Thank you very much!
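To make the rule concrete, here is a plain-Python sketch of the transformation I am after (my reading of the rule: within each value list, keep only the smallest ID for each distinct similarity, then sort the survivors by similarity, highest first; `dedupe_sort` is just an illustrative helper name):

```python
from itertools import groupby

def dedupe_sort(pairs):
    # pairs: list of (user_id, similarity) tuples.
    # Sorting by (similarity, user_id) groups ties together and puts
    # the smallest ID first within each group.
    by_sim = sorted(pairs, key=lambda p: (p[1], p[0]))
    # Keep one pair per similarity: the one with the smallest user_id.
    best = [next(iter(g)) for _, g in groupby(by_sim, key=lambda p: p[1])]
    # Finally, order by similarity, highest first.
    return sorted(best, key=lambda p: p[1], reverse=True)

data = [
    (18, [(2, 0.5)]),
    (30, [(19, 0.5), (6, 0.25)]),
    (6, [(30, 0.25), (20, 0.2), (19, 0.2)]),
    (19, [(30, 0.5), (8, 0.2), (6, 0.2)]),
    (2, [(18, 0.5)]),
    (26, [(9, 0.2)]),
    (9, [(26, 0.2)]),
]
result = [(k, dedupe_sort(v)) for k, v in data]
```

This reproduces the desired output above; the question is how to express the same thing with RDD transformations.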

Answers


Very straightforward. You just need to work out the necessary transformations:

data = [(18, [(2, 0.5)]),
        (30, [(19, 0.5), (6, 0.25)]),
        (6, [(30, 0.25), (20, 0.2), (19, 0.2)]),
        (19, [(30, 0.5), (8, 0.2), (6, 0.2)]),
        (2, [(18, 0.5)]),
        (26, [(9, 0.2)]),
        (9, [(26, 0.2)])]

rdd1 = sc.parallelize(data)

# Flatten each value list into individual (key, (id, similarity)) records.
rdd2 = rdd1.flatMapValues(lambda x: x)

# Re-key by (key, similarity) so that ties in similarity collide.
rdd3 = rdd2.map(lambda x: ((x[0], x[1][1]), x[1][0]))

# Keep the smallest ID per (key, similarity).
rdd4 = rdd3.reduceByKey(min)

# Back to (key, (id, similarity)).
rdd5 = rdd4.map(lambda x: (x[0][0], (x[1], x[0][1])))

# Collect the pairs back into lists. Note that reduceByKey(lambda x, y: [x, y])
# would be wrong here: it nests lists once a key has three or more pairs and
# leaves single pairs as bare tuples. Wrap each pair in a one-element list,
# concatenate, then sort each list by similarity, highest first.
rdd6 = (rdd5.map(lambda x: (x[0], [x[1]]))
            .reduceByKey(lambda a, b: a + b)
            .mapValues(lambda v: sorted(v, key=lambda p: p[1], reverse=True)))

rdd6.collect()
# (key order may vary across runs)
[(9, [(26, 0.2)]),
 (26, [(9, 0.2)]),
 (18, [(2, 0.5)]),
 (30, [(19, 0.5), (6, 0.25)]),
 (2, [(18, 0.5)]),
 (6, [(30, 0.25), (19, 0.2)]),
 (19, [(30, 0.5), (6, 0.2)])]
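Since the whole rule operates within a single value list, an alternative (a sketch, not tested against a live cluster; `keep_min_id_per_similarity` is an illustrative name) is to skip the flatten/shuffle round-trip entirely and clean each list in place with mapValues:

```python
def keep_min_id_per_similarity(pairs):
    # For each distinct similarity keep the pair with the smallest
    # user ID, then sort by similarity, highest first.
    best = {}
    for user_id, sim in pairs:
        if sim not in best or user_id < best[sim]:
            best[sim] = user_id
    return sorted(((uid, sim) for sim, uid in best.items()),
                  key=lambda p: p[1], reverse=True)

# With a live SparkContext:
# result = rdd1.mapValues(keep_min_id_per_similarity)
```

Because mapValues preserves the existing partitioning and touches each value independently, this avoids the shuffle that flatMapValues plus two reduceByKey stages incur.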