
Aggregating sparse vectors in PySpark

I have a Hive table that contains text data and some metadata associated with each document. It looks like this:

from pyspark.ml.feature import Tokenizer 
from pyspark.ml.feature import CountVectorizer 

# Sample data: (month, doc_id, text)
df = sc.parallelize([ 
    ("1", "doc_1", "fruit is good for you"), 
    ("2", "doc_2", "you should eat fruit and veggies"), 
    ("2", "doc_3", "kids eat fruit but not veggies") 
]).toDF(["month", "doc_id", "text"]) 


+-----+------+--------------------+ 
|month|doc_id|                text| 
+-----+------+--------------------+ 
|    1| doc_1|fruit is good for...| 
|    2| doc_2|you should eat fr...| 
|    2| doc_3|kids eat fruit bu...| 
+-----+------+--------------------+ 
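Note that `sc.parallelize(...).toDF(...)` assumes an active SQLContext/SparkSession, as in the pyspark shell. In a standalone script the equivalent setup could look like the following sketch (assuming Spark 2.x; the app name is arbitrary):

from pyspark.sql import SparkSession

# Hypothetical standalone setup, not part of the original question
spark = SparkSession.builder.appName("word-counts-by-month").getOrCreate()
df = spark.createDataFrame([
    ("1", "doc_1", "fruit is good for you"),
    ("2", "doc_2", "you should eat fruit and veggies"),
    ("2", "doc_3", "kids eat fruit but not veggies"),
], ["month", "doc_id", "text"])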

I would like to count the words by month. So far I have taken the CountVectorizer route:

tokenizer = Tokenizer().setInputCol("text").setOutputCol("words") 
tokenized = tokenizer.transform(df) 

# Fit a vocabulary over all documents; each word list becomes a sparse count vector
cvModel = CountVectorizer().setInputCol("words").setOutputCol("features").fit(tokenized) 
counted = cvModel.transform(tokenized) 

+-----+------+--------------------+--------------------+--------------------+ 
|month|doc_id|                text|               words|            features| 
+-----+------+--------------------+--------------------+--------------------+ 
|    1| doc_1|fruit is good for...|[fruit, is, good,...|(12,[0,3,4,7,8],[...| 
|    2| doc_2|you should eat fr...|[you, should, eat...|(12,[0,1,2,3,9,11...| 
|    2| doc_3|kids eat fruit bu...|[kids, eat, fruit...|(12,[0,1,2,5,6,10...| 
+-----+------+--------------------+--------------------+--------------------+ 
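The `features` column holds a SparseVector over the fitted vocabulary. As a quick sanity check (my own sketch, not part of the question), you can map one vector's indices back to words via `cvModel.vocabulary`:

# Decode the first row's sparse count vector into (word, count) pairs.
# SparseVector exposes .indices and .values; vocabulary maps index -> term.
vocab = cvModel.vocabulary
row = counted.first()
pairs = [(vocab[int(i)], int(v)) for i, v in zip(row.features.indices, row.features.values)]
print(pairs)  # e.g. [('fruit', 1), ('is', 1), ...] -- order depends on the fitted vocabulary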

Now I would like to group by month and return something that looks like this:

month  word   count 
1      fruit  1 
1      is     1 
... 
2      fruit  2 
2      kids   1 
2      eat    2 
... 

How can I do this?

Answer


There is no built-in mechanism for aggregating Vectors, but you don't need one here. Once you have the tokenized data you can just explode and aggregate:

from pyspark.sql.functions import explode 

(counted 
    .select("month", explode("words").alias("word"))  # one row per (month, word) occurrence
    .groupBy("month", "word") 
    .count()) 

If you prefer to limit the result to the vocabulary (for example, if minDF or vocabSize pruned some terms during fitting), just add a filter:

from pyspark.sql.functions import col 

(counted 
    .select("month", explode("words").alias("word")) 
    .where(col("word").isin(cvModel.vocabulary))  # drop tokens not in the fitted vocabulary
    .groupBy("month", "word") 
    .count()) 
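
For completeness, if you really did want to aggregate the sparse vectors themselves (say, to reuse the CountVectorizer output directly), a hand-rolled sketch via the RDD API could look like this. The `np.add` reduction and the dense conversion are my own assumptions, reasonable only for small vocabularies:

import numpy as np

# Sketch (not from the original answer): sum each month's count vectors
# element-wise, then translate indices back to words with the vocabulary.
vocab = cvModel.vocabulary

monthly = (counted
    .select("month", "features")
    .rdd
    .map(lambda row: (row.month, row.features.toArray()))  # densify each SparseVector
    .reduceByKey(np.add)                                   # element-wise sum per month
    .collect())

for month, totals in monthly:
    for i in totals.nonzero()[0]:
        print(month, vocab[int(i)], int(totals[i]))

Either way, the explode-based version above is simpler and stays entirely in the DataFrame API.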