How can I convert a pyspark.rdd.PipelinedRDD to a DataFrame without using the collect() method in PySpark?

Date: 2018-01-05 09:53:53

Tags: python-3.x apache-spark pyspark apache-spark-sql spark-dataframe

I have a pyspark.rdd.PipelinedRDD (Rdd1). When I run Rdd1.collect(), it gives a result like the following:

 [(10, {3: 3.616726727464709, 4: 2.9996439803387602, 5: 1.6767412921625855}),
 (1, {3: 2.016527311459324, 4: -1.5271512313750577, 5: 1.9665475696370045}),
 (2, {3: 6.230272144805092, 4: 4.033642544526678, 5: 3.1517805604906313}),
 (3, {3: -0.3924680103722977, 4: 2.9757316477407443, 5: -1.5689126834176417})]

Now I want to convert the pyspark.rdd.PipelinedRDD to a DataFrame without using the collect() method.

My final DataFrame should look like the one below; df.show() should produce:

+----------+-------+-------------------+
|CId       |IID    |Score              |
+----------+-------+-------------------+
|10        |4      |2.9996439803387602 |
|10        |5      |1.6767412921625855 |
|10        |3      |3.616726727464709  |
|1         |4      |-1.5271512313750577|
|1         |5      |1.9665475696370045 |
|1         |3      |2.016527311459324  |
|2         |4      |4.033642544526678  |
|2         |5      |3.1517805604906313 |
|2         |3      |6.230272144805092  |
|3         |4      |2.9757316477407443 |
|3         |5      |-1.5689126834176417|
|3         |3      |-0.3924680103722977|
+----------+-------+-------------------+

I can achieve this by converting to an RDD, then applying collect(), iterating over the result, and finally building the DataFrame.

But now I want to convert the pyspark.rdd.PipelinedRDD (Rdd1) to a DataFrame without using any collect() method.

Please let me know how I can achieve this.

4 answers:

Answer 0 (score: 2)

You want to do two things here: 1. flatten your data, and 2. put it into a DataFrame.

One way to do this is as follows:

First, let's flatten the dictionary:

rdd2 = Rdd1.flatMapValues(lambda x : [ (k, x[k]) for k in x.keys()])

When you collect the data, you get something like this:

[(10, (3, 3.616726727464709)), (10, (4, 2.9996439803387602)), ...

We can then reformat the data and turn it into a DataFrame:

rdd2.map(lambda x : (x[0], x[1][0], x[1][1]))\
    .toDF(("CId", "IID", "Score"))\
    .show()

which gives you the DataFrame you asked for.
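As a side note, an equivalent way to write the flatten step is to iterate over the dictionary's items directly; this is just a minor variation on the answer above, not part of it:

rdd2 = Rdd1.flatMapValues(lambda d: list(d.items()))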

Answer 1 (score: 1)

Here is how you can do it in Scala:

  val Rdd1 = spark.sparkContext.parallelize(Seq(
    (10, Map(3 -> 3.616726727464709, 4 -> 2.9996439803387602, 5 -> 1.6767412921625855)),
    (1, Map(3 -> 2.016527311459324, 4 -> -1.5271512313750577, 5 -> 1.9665475696370045)),
    (2, Map(3 -> 6.230272144805092, 4 -> 4.033642544526678, 5 -> 3.1517805604906313)),
    (3, Map(3 -> -0.3924680103722977, 4 -> 2.9757316477407443, 5 -> -1.5689126834176417))
  ))

  val x = Rdd1.flatMap(x => (x._2.map(y => (x._1, y._1, y._2))))
         .toDF("CId", "IId", "score")

I hope you can translate this to PySpark.
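For reference, a rough PySpark translation of the Scala snippet above could look like this (a sketch only, assuming an active SparkSession named spark; not part of the original answer):

rdd1 = spark.sparkContext.parallelize([
    (10, {3: 3.616726727464709, 4: 2.9996439803387602, 5: 1.6767412921625855}),
    (1, {3: 2.016527311459324, 4: -1.5271512313750577, 5: 1.9665475696370045}),
    (2, {3: 6.230272144805092, 4: 4.033642544526678, 5: 3.1517805604906313}),
    (3, {3: -0.3924680103722977, 4: 2.9757316477407443, 5: -1.5689126834176417})
])

# flatten each (id, dict) pair into (id, key, value) tuples, then name the columns
df = rdd1.flatMap(lambda x: [(x[0], k, v) for k, v in x[1].items()]) \
         .toDF(["CId", "IId", "score"])
df.show()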

Answer 2 (score: 1)

There is a simpler, more elegant solution that avoids the Python lambda expressions used in @oli's answer: it relies on Spark DataFrame's explode, which does exactly what you want. It should also be faster, since there is no need to go through a Python lambda twice. See below:

from pyspark.sql.functions import explode

# dummy data
data = [(10, {3: 3.616726727464709, 4: 2.9996439803387602, 5: 1.6767412921625855}),
        (1, {3: 2.016527311459324, 4: -1.5271512313750577, 5: 1.9665475696370045}),
        (2, {3: 6.230272144805092, 4: 4.033642544526678, 5: 3.1517805604906313}),
        (3, {3: -0.3924680103722977, 4: 2.9757316477407443, 5: -1.5689126834176417})]

# create your rdd
rdd = sc.parallelize(data)

# convert to spark data frame
df = rdd.toDF(["CId", "Values"])

# use explode
df.select("CId", explode("Values").alias("IID", "Score")).show()

+---+---+-------------------+
|CId|IID|              Score|
+---+---+-------------------+
| 10|  3|  3.616726727464709|
| 10|  4| 2.9996439803387602|
| 10|  5| 1.6767412921625855|
|  1|  3|  2.016527311459324|
|  1|  4|-1.5271512313750577|
|  1|  5| 1.9665475696370045|
|  2|  3|  6.230272144805092|
|  2|  4|  4.033642544526678|
|  2|  5| 3.1517805604906313|
|  3|  3|-0.3924680103722977|
|  3|  4| 2.9757316477407443|
|  3|  5|-1.5689126834176417|
+---+---+-------------------+
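If you want the rows displayed in a deterministic order (which may not match the question's exact row order), you could add an explicit sort; this is an optional extra, not part of the original answer:

df.select("CId", explode("Values").alias("IID", "Score")) \
  .orderBy("CId", "IID") \
  .show()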

Answer 3 (score: 0)

Make sure you create a Spark session first:

from pyspark import SparkContext
from pyspark.sql import SparkSession
sc = SparkContext()
spark = SparkSession(sc)
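On PySpark 2.x and later, a common alternative (not mentioned in the original answer) is to build the session directly and take the context from it:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext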

I found this answer while trying to solve this exact problem:
'PipelinedRDD' object has no attribute 'toDF' in PySpark