Explode in PySpark

Asked: 2016-07-05 18:45:39

Tags: python apache-spark pyspark apache-spark-sql

I want to go from a DataFrame that contains lists of words to a DataFrame in which each word is in its own row.

How can I explode a column of a DataFrame?

Below is an example with some of my attempts; you can uncomment each code line in turn and get the error listed in the comment that follows it. I am using PySpark with Spark 1.6.1 on Python 2.7.

from pyspark.sql.functions import split, explode

DF = sqlContext.createDataFrame([('cat \n\n elephant rat \n rat cat', )], ['word'])
print 'Dataset:'
DF.show()
print '\n\n Trying to do explode: \n'
DFsplit_explode = (
    DF
    .select(split(DF['word'], ' '))
#   .select(explode(DF['word']))  # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;"
#   .map(explode)                 # AttributeError: 'PipelinedRDD' object has no attribute 'show'
#   .explode()                    # AttributeError: 'DataFrame' object has no attribute 'explode'
).show()

# Trying without split
print '\n\n Only explode: \n'

DFsplit_explode = (
    DF
    .select(explode(DF['word']))  # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;"
).show()

Please advise.

2 Answers:

Answer 0 (Score: 25)

explode and split are SQL functions. Both operate on a SQL Column. split takes a Java regular expression as its second argument. If you want to separate data on arbitrary whitespace, you'll need something like this:

from pyspark.sql.functions import col, explode, split

df = sqlContext.createDataFrame(
    [('cat \n\n elephant rat \n rat cat', )], ['word']
)

df.select(explode(split(col("word"), "\s+")).alias("word")).show()

## +--------+
## |    word|
## +--------+
## |     cat|
## |elephant|
## |     rat|
## |     rat|
## |     cat|
## +--------+
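
For what it's worth, the .map(explode) attempt in the question hints at an RDD route. explode is a SQL function and cannot be mapped over an RDD, but flatMap achieves the same flattening. A minimal sketch (my addition, not part of the original answer), assuming the df defined above:

# Hypothetical RDD-based alternative, not from the original answer.
# str.split() with no argument splits on runs of whitespace and drops
# empty strings, so no regex or extra filtering is needed.
words_df = (
    df.rdd
      .flatMap(lambda row: row.word.split())  # one word per RDD element
      .map(lambda word: (word, ))             # wrap in a tuple for toDF
      .toDF(['word'])
)
words_df.show()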

Answer 1 (Score: 13)

To split on whitespace and also remove blank rows, add a where clause:

DF = sqlContext.createDataFrame([('cat \n\n elephant rat \n rat cat\nmat\n', )], ['word'])

(DF.select(explode(split(DF.word, "\s")).alias("word"))
   .where('word != ""')
   .show())

+--------+
|    word|
+--------+
|     cat|
|elephant|
|     rat|
|     rat|
|     cat|
|     mat|
+--------+
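
A small variant (my addition, not from the answer): the empty strings produced by splitting on single whitespace characters can also be filtered with the length function instead of a string predicate:

from pyspark.sql.functions import explode, length, split

(DF.select(explode(split(DF.word, "\s")).alias("word"))
   .where(length("word") > 0)  # keeps only non-empty tokens
   .show())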