Converting an RDD to a DataFrame in PySpark

Date: 2018-04-04 22:31:36

Tags: python apache-spark dataframe pyspark rdd

I am trying to convert my RDD into a DataFrame in PySpark.

My RDD:

[(['abc', '1,2'], 0), (['def', '4,6,7'], 1)]

I would like the RDD in the form of a DataFrame:

Index  Name  Number
0      abc   [1,2]
1      def   [4,6,7]

I tried:

rd2 = rd.map(lambda x, y: (y, x[0], x[1])).toDF(["Index", "Name", "Number"])

But I get the following error:

An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 62.0 failed 1 times, most recent failure: Lost task 0.0
in stage 62.0 (TID 88, localhost, executor driver):
org.apache.spark.api.python.PythonException: Traceback (most recent call last):

Could you let me know where I am going wrong?

Update

rd2 = rd.map(lambda x: (x[1], x[0][0], x[0][1]))

I now have an RDD of the form:

[(0, 'abc', '1,2'), (1, 'def', '4,6,7')]

To convert it to a DataFrame:

rd2.toDF(["Index", "Name" , "Number"])

It still gives me an error:

An error occurred while calling o2271.showString.
: java.lang.IllegalStateException: SparkContext has been shutdown
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2021)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050)
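This second error means the driver's SparkContext has already been stopped, often as fallout from an earlier fatal failure or an explicit stop, so every subsequent action fails regardless of the code. A minimal recovery sketch, assuming the standard Spark 2.x SparkSession API (the appName here is arbitrary):

from pyspark.sql import SparkSession

# getOrCreate() builds a fresh session once the previous SparkContext
# has been stopped; re-create the RDD from this new context before retrying
spark = SparkSession.builder.appName("rdd-to-df").getOrCreate()
sc = spark.sparkContext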

1 Answer:

Answer 0 (score: 1)

RDD.map takes a unary function:

rdd.map(lambda x: (x[1], x[0][0] , x[0][1])).toDF(["Index", "Name" , "Number"])

so you cannot pass a binary one.
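To see the failure mode concretely: map hands each RDD element to the function as a single argument, so a two-parameter lambda raises a TypeError on the executors, which is what surfaces as the PythonException in the first traceback. A plain-Python illustration using one element of the question's RDD:

f = lambda x, y: (y, x[0], x[1])
f((['abc', '1,2'], 0))
# TypeError: <lambda>() missing 1 required positional argument: 'y'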

If you want to split the Number string into an array:

rdd.map(lambda x: (x[1], x[0][0], x[0][1].split(","))).toDF(["Index", "Name", "Number"])
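Putting it together, a runnable end-to-end sketch, assuming a Spark 2.x SparkSession and the sample data from the question; the show() output is approximately:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data in the shape described in the question
rdd = spark.sparkContext.parallelize(
    [(['abc', '1,2'], 0), (['def', '4,6,7'], 1)]
)

df = rdd.map(lambda x: (x[1], x[0][0], x[0][1].split(","))) \
        .toDF(["Index", "Name", "Number"])
df.show()
# +-----+----+---------+
# |Index|Name|   Number|
# +-----+----+---------+
# |    0| abc|   [1, 2]|
# |    1| def|[4, 6, 7]|
# +-----+----+---------+

Note that split still yields strings; cast the elements (for example inside the lambda, or with a later column cast) if numeric values are needed.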