
I want to go from a DataFrame containing a list of words to a DataFrame with each word in its own row (exploding in PySpark).

How do I explode on a column in a DataFrame?

Below are some of my attempts; you can uncomment each code line and get the error listed in the comment that follows it. I'm using PySpark with Spark 1.6.1 on Python 2.7.

from pyspark.sql.functions import split, explode 
DF = sqlContext.createDataFrame([('cat \n\n elephant rat \n rat cat',)], ['word']) 
print 'Dataset:' 
DF.show() 
print '\n\n Trying to do explode: \n' 
DFsplit_explode = (
DF 
.select(split(DF['word'], ' ')) 
# .select(explode(DF['word'])) # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;" 
# .map(explode) # AttributeError: 'PipelinedRDD' object has no attribute 'show' 
# .explode() # AttributeError: 'DataFrame' object has no attribute 'explode' 
).show() 

# Trying without split 
print '\n\n Only explode: \n' 

DFsplit_explode = (
DF 
.select(explode(DF['word'])) # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;" 
).show() 

Any pointers would be appreciated.

Answers


explode and split are SQL functions. Both operate on a SQL Column. split takes a Java regular expression as its second argument. If you want to separate the data on arbitrary whitespace, you need something like this:

from pyspark.sql.functions import col, explode, split

df = sqlContext.createDataFrame(
    [('cat \n\n elephant rat \n rat cat',)], ['word']
)

df.select(explode(split(col("word"), "\s+")).alias("word")).show()

## +--------+ 
## | word| 
## +--------+ 
## |  cat| 
## |elephant| 
## |  rat| 
## |  rat| 
## |  cat| 
## +--------+ 
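The `\s+` pattern is ordinary regex whitespace matching; a minimal plain-Python sketch with the standard `re` module (outside Spark, but using the same pattern on the same sample string) shows why runs of spaces and newlines collapse into a single split boundary:

```python
import re

# Same sample string as the Spark example above
text = 'cat \n\n elephant rat \n rat cat'

# r'\s+' matches one or more whitespace characters, so a run of
# spaces/newlines acts as a single separator
words = re.split(r'\s+', text)
print(words)  # ['cat', 'elephant', 'rat', 'rat', 'cat']
```

Because the string neither starts nor ends with whitespace, no empty tokens appear, matching the five-row output above.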

To split on whitespace and also remove blank lines, add a where clause.

DF = sqlContext.createDataFrame([('cat \n\n elephant rat \n rat cat\nmat\n',)], ['word'])

(DF.select(explode(split(DF.word, "\s")).alias("word"))
   .where('word != ""')
   .show())

+--------+ 
| word| 
+--------+ 
|  cat| 
|elephant| 
|  rat| 
|  rat| 
|  cat| 
|  mat| 
+--------+ 
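Why is the where clause needed here? With `\s` (no `+`), each single whitespace character is its own separator, so consecutive whitespace produces empty strings. A plain-Python sketch of the same pattern, outside Spark:

```python
import re

text = 'cat \n\n elephant rat \n rat cat\nmat\n'

# Single-character separator: consecutive whitespace yields '' tokens
tokens = re.split(r'\s', text)
print('' in tokens)  # True: empty strings sit between adjacent separators

# Dropping the empty strings mirrors the .where('word != ""') step
words = [t for t in tokens if t != '']
print(words)  # ['cat', 'elephant', 'rat', 'rat', 'cat', 'mat']
```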

Thanks for adding the where clause. – user1982118


For a slightly more complete solution, which generalizes to cases where multiple columns must be kept, use 'withColumn' instead of a plain 'select', i.e.: df.withColumn('word', explode('word')).show() This ensures that all the remaining columns in the DataFrame are still present in the output DataFrame after using explode. It is also much simpler than spelling out every column that needs to be selected, i.e.: df.select('col1', 'col2', ..., 'colN', explode('word')).show() –
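The column-preserving behavior described in the comment above can be sketched in plain Python (hypothetical 'id' column and list-valued 'word' column, not from the question): explode produces one output row per list element while every other column is carried through unchanged, which is what withColumn('word', explode('word')) does for a DataFrame.

```python
# Hypothetical rows: an extra 'id' column plus a list-valued 'word' column
rows = [
    {'id': 1, 'word': ['cat', 'rat']},
    {'id': 2, 'word': ['mat']},
]

# "Explode": one output row per list element; other columns copied through
exploded = [dict(r, word=w) for r in rows for w in r['word']]
print(exploded)
# [{'id': 1, 'word': 'cat'}, {'id': 1, 'word': 'rat'}, {'id': 2, 'word': 'mat'}]
```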