Spark: How to convert multiple rows into a single row with multiple columns?

Asked: 2018-09-20 10:13:34

Tags: pyspark apache-spark-sql pyspark-sql

Note: this is just simple sample data and is not meant to resemble an actual cricket team.

I have a JSON file that looks like this:

{
  "someID": "a5cf4922f4e3f45",
  "payload": {
    "teamID": "1",
    "players": [
      {
        "type": "Batsman",
        "name": "Amar",
        "address": {
          "state": "Gujarat"
        }
      },
      {
        "type": "Bowler",
        "name": "Akbar",
        "address": {
          "state": "Telangana"
        }
      },
      {
        "type": "Fielder",
        "name": "Antony",
        "address": {
          "state": "Kerala"
        }
      }
    ]
  }
}

I exploded this using the code below:

from pyspark.sql.functions import explode

df_record = spark.read.json("path-to-file.json", multiLine=True)

df_player_dtls = df_record.select("payload.teamID", explode("payload.players").alias("xplayers")) \
                          .select("teamID",
                                  "xplayers.type",
                                  "xplayers.name",
                                  "xplayers.address.state")

df_player_dtls.createOrReplaceTempView("t_player_dtls")

spark.sql("SELECT * FROM t_player_dtls").show()

So currently the output looks like this:

+--------+---------+--------+------------+
| TeamID |  Type   |  Name  |   State    |
+--------+---------+--------+------------+
|      1 | Batsman | Amar   | Gujarat    |
|      1 | Bowler  | Akbar  | Telangana  |
|      1 | Fielder | Antony | Kerala     |
|      2 | Batsman | John   | Queensland |
|      2 | Bowler  | Smith  | Perth      |
+--------+---------+--------+------------+

I want to convert this into a single row per team. There is only one player of each type in a team, and a team can have at most four types of players (Batsman, Bowler, Fielder and Wicketkeeper), so each team has at most four players. The final table that holds this data therefore has nine columns: one for the team ID, plus a name column and a state column for each of the four player types.

Is it possible to do this in Spark? I am new to Spark, so an explanation of the steps would be greatly appreciated.

2 Answers:

Answer 0 (score: 2)

We can use pyspark's pivot function:

from pyspark.sql.functions import first

# One row per team; each Type value becomes a group of columns in the result
# (e.g. Batsman_Name, Batsman_State, Bowler_Name, ...).
df = df_player_dtls.groupBy("TeamID").pivot("Type").agg(
                            first("Name").alias("Name"),
                            first("State").alias("State"))
df.show(10, False)
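
If the player types are known up front (as they are here), they can also be passed to pivot explicitly; Spark then skips the extra job it would otherwise run to collect the distinct Type values, and the column order of the result is fixed. A small sketch based on the code above (the player_types list is an assumption taken from the question's description):

from pyspark.sql.functions import first

player_types = ["Batsman", "Bowler", "Fielder", "Wicketkeeper"]

# Listing the pivot values avoids a separate pass over the data to discover them.
df = df_player_dtls.groupBy("TeamID").pivot("Type", player_types).agg(
                            first("Name").alias("Name"),
                            first("State").alias("State"))
df.show(10, False)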

Answer 1 (score: 1)

It is possible with SQL, which is not the most efficient way (a UDF would be), but it works. And apologies that it is Scala-ish.

val res = spark.sql(
        """select teamID
          |, Batsman.name as `Batsman.name`, Batsman.state as `Batsman.state`
          |, Bowler.name as `Bowler.name`, Bowler.state as `Bowler.state`
          |, Fielder.name as `Fielder.name`, Fielder.state as `Fielder.state`
          |from (
          |   select teamID,
          |     max(case type when 'Batsman' then info end) as Batsman
          |     , max(case type when 'Bowler' then info end) as Bowler
          |     , max(case type when 'Fielder' then info end) as Fielder
          |     from (select teamID, type, struct(name, state) as info from t_player_dtls) group by teamID
          |)""".stripMargin)

I use group by to pivot the data around the teamID column; max picks the one value that is not null, and the case statement lets only one record per type reach max. To simplify the max/case combination I use the struct function, which creates a composite column info holding the payload (name and state) that we later want to lift back into a flat schema.

A UDF would have been more efficient, but I am not familiar with Python.
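
For pyspark users, the same conditional aggregation can also be sketched with the DataFrame API. This is only a rough sketch, assuming the df_player_dtls DataFrame built in the question and the player types from the sample:

from pyspark.sql import functions as F

player_types = ["Batsman", "Bowler", "Fielder"]  # add "Wicketkeeper" if needed

# Keep a struct(name, state) only on the row whose type matches; max() then
# picks the single non-null struct per team and type.
info = F.struct("name", "state")
df_wide = (df_player_dtls
           .groupBy("teamID")
           .agg(*[F.max(F.when(F.col("type") == t, info)).alias(t)
                  for t in player_types]))

# Lift the nested fields back into a flat schema.
flat_cols = [F.col("teamID")] + [F.col(t + "." + f).alias(t + "_" + f)
                                 for t in player_types
                                 for f in ("name", "state")]
df_wide.select(*flat_cols).show(truncate=False)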

UPDATE: Both solutions (SQL and pivot) use an explode plus groupBy combination; @Anshuman's is much easier to code. They have the following execution plans:

SQL

== Physical Plan ==
SortAggregate(key=[teamID#10], functions=[max(CASE WHEN (type#16 = Batsman) THEN info#31 END), max(CASE WHEN (type#16 = Bowler) THEN info#31 END), max(CASE WHEN (type#16 = Fielder) THEN info#31 END)])
+- *Sort [teamID#10 ASC NULLS FIRST], false, 0
   +- Exchange hashpartitioning(teamID#10, 200)
      +- SortAggregate(key=[teamID#10], functions=[partial_max(CASE WHEN (type#16 = Batsman) THEN info#31 END), partial_max(CASE WHEN (type#16 = Bowler) THEN info#31 END), partial_max(CASE WHEN (type#16 = Fielder) THEN info#31 END)])
         +- *Sort [teamID#10 ASC NULLS FIRST], false, 0
            +- *Project [payload#4.teamID AS teamID#10, xplayers#12.type AS type#16, named_struct(name, xplayers#12.name, state, xplayers#12.address.state) AS info#31]
               +- Generate explode(payload#4.players), true, false, [xplayers#12]
                  +- *Project [payload#4]
                     +- Scan ExistingRDD[payload#4,someID#5]

PIVOT

== Physical Plan ==
SortAggregate(key=[TeamID#10], functions=[first(if ((Type#16 <=> Batsman)) Name#17 else null, true), first(if ((Type#16 <=> Batsman)) State#18 else null, true), first(if ((Type#16 <=> Bowler)) Name#17 else null, true), first(if ((Type#16 <=> Bowler)) State#18 else null, true), first(if ((Type#16 <=> Fielder)) Name#17 else null, true), first(if ((Type#16 <=> Fielder)) State#18 else null, true)])
+- *Sort [TeamID#10 ASC NULLS FIRST], false, 0
   +- Exchange hashpartitioning(TeamID#10, 200)
      +- SortAggregate(key=[TeamID#10], functions=[partial_first(if ((Type#16 <=> Batsman)) Name#17 else null, true), partial_first(if ((Type#16 <=> Batsman)) State#18 else null, true), partial_first(if ((Type#16 <=> Bowler)) Name#17 else null, true), partial_first(if ((Type#16 <=> Bowler)) State#18 else null, true), partial_first(if ((Type#16 <=> Fielder)) Name#17 else null, true), partial_first(if ((Type#16 <=> Fielder)) State#18 else null, true)])
         +- *Sort [TeamID#10 ASC NULLS FIRST], false, 0
            +- *Project [payload#4.teamID AS teamID#10, xplayers#12.type AS type#16, xplayers#12.name AS name#17, xplayers#12.address.state AS state#18]
               +- Generate explode(payload#4.players), true, false, [xplayers#12]
                  +- *Project [payload#4]
                     +- Scan ExistingRDD[payload#4,someID#5]

Both of them cause a shuffle (Exchange hashpartitioning(TeamID#10, 200)).

If performance is your concern, then you can use this Scala approach (I do not know Python):

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

import spark.implicits._ // for .toDS and the $"..." column syntax

// row_1 and row_2 hold the sample JSON documents as strings (not shown here)
val df_record = spark.read.json(Seq(row_1, row_2).toDS)

// Define your custom player types, as many as needed
val playerTypes = Seq("Batsman", "Bowler", "Fielder")

// Return type for the UDF: a Name and a State field per player type
val returnType = StructType(playerTypes.flatMap(t =>
  Seq(StructField(s"$t.Name", StringType), StructField(s"$t.State", StringType))))

val unpackPlayersUDF = udf((players: Seq[Row]) => {
  // Index the players of one team by their type
  val playerValues: Map[String, Row] = players.map(p => (p.getAs[String]("type"), p)).toMap
  val arrangedValues = playerTypes.flatMap { t =>
    val playerRow = playerValues.get(t) // if the type does not exist, the value will be None, i.e. null
    Seq(
      playerRow.map(_.getAs[String]("name")),
      playerRow.map(_.getAs[Row]("address").getAs[String]("state"))
    )
  }
  Row(arrangedValues: _*)
}, returnType)

val udfRes = df_record
  .withColumn("xplayers", unpackPlayersUDF($"payload.players"))
  .select("payload.teamID", "xplayers.*")

udfRes.show(false)
udfRes.explain()

Output:

+------+------------+-------------+-----------+------------+------------+-------------+
|teamID|Batsman.Name|Batsman.State|Bowler.Name|Bowler.State|Fielder.Name|Fielder.State|
+------+------------+-------------+-----------+------------+------------+-------------+
|1     |Amar        |Gujarat      |Akbar      |Telangana   |Antony      |Kerala       |
|1     |John        |Queensland   |Smith      |Perth       |null        |null         |
+------+------------+-------------+-----------+------------+------------+-------------+

with the following execution plan:

== Physical Plan ==
*Project [payload#4.teamID AS teamID#46, UDF(payload#4.players).Batsman.Name AS Batsman.Name#40, UDF(payload#4.players).Batsman.State AS Batsman.State#41, UDF(payload#4.players).Bowler.Name AS Bowler.Name#42, UDF(payload#4.players).Bowler.State AS Bowler.State#43, UDF(payload#4.players).Fielder.Name AS Fielder.Name#44, UDF(payload#4.players).Fielder.State AS Fielder.State#45]
+- Scan ExistingRDD[payload#4,someID#5]

No shuffle is involved. If you want to boost performance even further, you can add an explicit read schema with spark.read.schema(SCHEMA).json(...); that helps as well, because the reader does not have to infer the schema, which saves time.
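
As an illustration, here is a minimal PySpark sketch of such an explicit schema for the sample JSON above (field names are taken from the sample; adjust to your real data):

from pyspark.sql.types import ArrayType, StringType, StructField, StructType

# Mirrors the structure of the sample JSON document.
schema = StructType([
    StructField("someID", StringType()),
    StructField("payload", StructType([
        StructField("teamID", StringType()),
        StructField("players", ArrayType(StructType([
            StructField("type", StringType()),
            StructField("name", StringType()),
            StructField("address", StructType([
                StructField("state", StringType())
            ]))
        ])))
    ]))
])

# With an explicit schema the reader skips schema inference entirely.
df_record = spark.read.schema(schema).json("path-to-file.json", multiLine=True)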