2017-07-17 40 views

I have a set of documents, each of which belongs to a specific page. I have computed the TF-IDF scores for each document, but what I want to do is average the TF-IDF scores for each page across its documents.

The desired output is an N (pages) x M (vocabulary) matrix. How would I go about doing this in Spark/PySpark? My pipeline:

from pyspark.ml.feature import CountVectorizer, IDF, Tokenizer, StopWordsRemover 
from pyspark.ml import Pipeline 

tokenizer = Tokenizer(inputCol="message", outputCol="tokens") 
remover = StopWordsRemover(inputCol=tokenizer.getOutputCol(), outputCol="filtered") 
countVec = CountVectorizer(inputCol=remover.getOutputCol(), outputCol="features", binary=True) 
idf = IDF(inputCol=countVec.getOutputCol(), outputCol="idffeatures") 

pipeline = Pipeline(stages=[tokenizer, remover, countVec, idf]) 

model = pipeline.fit(sample_results) 
prediction = model.transform(sample_results) 

The output is in the following format, one row per document:

(466,[10,19,24,37,46,61,62,63,66,67,68,86,89,105,107,129,168,217,219,289,310,325,377,381,396,398,411,420,423],[1.6486586255873816,1.6486586255873816,1.8718021769015913,1.8718021769015913,2.159484249353372,2.159484249353372,2.159484249353372,2.159484249353372,2.159484249353372,2.159484249353372,2.159484249353372,2.159484249353372,2.159484249353372,2.159484249353372,2.159484249353372,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367,2.5649493574615367]) 
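That printed form is Spark's `SparseVector`: `(size, [indices], [values])`. A minimal numpy sketch of how such a triple maps back to a dense row (the toy numbers below are made up, not taken from the output above):

```python
import numpy as np

# A SparseVector prints as (size, [indices], [values]).
# Hypothetical toy example with a vocabulary of size 6:
size, indices, values = 6, [1, 4], [1.5, 2.5]

dense = np.zeros(size)
dense[indices] = values  # scatter the stored values into their positions
# dense is now [0, 1.5, 0, 0, 2.5, 0]
```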

Answer


I came up with the answer below. It works, but I'm not sure it's the most efficient. I based it on this post.

import numpy as np
from scipy.sparse import csr_matrix, vstack

def as_matrix(vec):
    """Convert a Spark SparseVector into a 1 x vocab_size scipy CSR row."""
    data, indices = vec.values, vec.indices
    shape = (1, vec.size)
    return csr_matrix((data, indices, np.array([0, vec.values.size])), shape)

def as_array(mats):
    """Stack the CSR rows for one page and average them column-wise."""
    return vstack(mats).mean(axis=0)


mats = prediction.rdd.map(lambda x: (x['page_name'], as_matrix(x['idffeatures']))) 
final = mats.groupByKey().mapValues(as_array).cache() 
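The two helpers can be sanity-checked locally without Spark. The `Vec` namedtuple below is my own stand-in for `pyspark.ml.linalg.SparseVector`, carrying just the fields `as_matrix` reads:

```python
from collections import namedtuple

import numpy as np
from scipy.sparse import csr_matrix, vstack

# Stand-in for pyspark.ml.linalg.SparseVector (hypothetical, for testing only).
Vec = namedtuple("Vec", ["size", "indices", "values"])

def as_matrix(vec):
    data, indices = np.asarray(vec.values), np.asarray(vec.indices)
    return csr_matrix((data, indices, np.array([0, data.size])), (1, vec.size))

def as_array(mats):
    return vstack(mats).mean(axis=0)

v1 = Vec(4, [0, 2], [2.0, 4.0])   # dense: [2, 0, 4, 0]
v2 = Vec(4, [2, 3], [2.0, 6.0])   # dense: [0, 0, 2, 6]
mean_row = as_array([as_matrix(v1), as_matrix(v2)])
# mean_row is the element-wise average: [1, 0, 3, 3]
```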

I stack the result into an 86 x 10000 numpy matrix. Everything runs, but it is a bit slow. Note that `final` is an RDD, so it has to be collected to the driver before iterating over it:

rows = final.collect() 
labels = [r[0] for r in rows] 
tf_matrix = np.vstack([r[1] for r in rows])
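The group-then-average step itself is simple enough to check without a Spark cluster. Here is the same logic that `groupByKey().mapValues(as_array)` performs, written against a toy list of `(page_name, dense TF-IDF row)` pairs (the page names and numbers are made up):

```python
from collections import defaultdict

import numpy as np

# Hypothetical (page_name, dense TF-IDF row) pairs, as groupByKey would see them.
pairs = [
    ("page_a", np.array([1.0, 0.0, 2.0])),
    ("page_a", np.array([3.0, 0.0, 0.0])),
    ("page_b", np.array([0.0, 4.0, 0.0])),
]

groups = defaultdict(list)
for page, row in pairs:
    groups[page].append(row)

# One averaged row per page: the N (pages) x M (vocab) matrix from the question.
labels = sorted(groups)
tf_matrix = np.vstack([np.mean(groups[p], axis=0) for p in labels])
# rows: page_a -> [2, 0, 1], page_b -> [0, 4, 0]
```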