How to add an index grouped by ID in a Spark DataFrame

Date: 2019-06-25 17:01:11

Tags: scala apache-spark dataframe apache-spark-sql

I have this DataFrame:

+---------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+
|_id      |details__line_items                                                                                                                                                  |searchable_tags|
+---------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+
|307131663|[[line_item_1345678912345678M, {}, {},, loan, 1,, 116000], [line_item_123456789, {}, {},, Test, 1,, 1234567], [line_item_2kZgNnPXvEgnKCAaM, {}, {},, loan, 1,, 1234]]|[]             |
|040013496|[[line_item_1345678912345678M, {}, {},, loan, 1,, 116000], [line_item_123456789, {}, {},, Test, 1,, 1234567], [line_item_2kZgNnPXvEgnKCAaM, {}, {},, loan, 1,, 1234]]|[]             |
+---------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+

I am using this function to explode the details__line_items column:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.explode
import org.apache.spark.sql.types.ArrayType
import scala.collection.mutable.ListBuffer

def getArrayDataFrame(df: DataFrame): ListBuffer[DataFrame] = {
  df.schema
    .filter(field => field.dataType.typeName == "array")
    .map { field =>
      // Explode the array column into an "items" column and keep _id next to it
      val explodeColumn = (colsName: String) =>
        df.withColumn("items", explode(df.col(s"${field.name}")))
          .select("_id", colsName)

      field.dataType match {
        case arrayType: ArrayType =>
          arrayType.elementType.typeName match {
            // An array of structs is flattened into the struct's fields
            case "struct" => explodeColumn("items.*")
            // Any other element type keeps the original column
            case _        => explodeColumn(s"${field.name}")
          }
      }
    }
    .to[ListBuffer]
}
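
For reference, this is roughly how the function is invoked (a minimal sketch; it assumes an existing SparkSession and that df is the DataFrame shown above):

// Hypothetical usage: df is the DataFrame shown at the top of the question
val arrayDFs: ListBuffer[DataFrame] = getArrayDataFrame(df)
arrayDFs.foreach(_.show(false))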

I am getting this DataFrame:

+---------+---------------------------+--------------+---------------+-----------+----+--------+----+----------+
|_id      |_id                        |antifraud_info|contextual_data|description|name|quantity|sku |unit_price|
+---------+---------------------------+--------------+---------------+-----------+----+--------+----+----------+
|307131663|line_item_1345678912345678M|{}            |{}             |null       |loan|1       |null|116000    |
|307131663|line_item_123456789        |{}            |{}             |null       |Test|1       |null|1234567   |
|307131663|line_item_2kZgNnPXvEgnKCAaM|{}            |{}             |null       |loan|1       |null|1234      |
|040013496|line_item_1345678912345678M|{}            |{}             |null       |loan|1       |null|116000    |
|040013496|line_item_123456789        |{}            |{}             |null       |Test|1       |null|1234567   |
|040013496|line_item_2kZgNnPXvEgnKCAaM|{}            |{}             |null       |loan|1       |null|1234      |
+---------+---------------------------+--------------+---------------+-----------+----+--------+----+----------+

How can I get a new DataFrame like this?

+---------+-----+---------------------------+--------------+---------------+-----------+----+--------+----+----------+
|_id      |index|_id                        |antifraud_info|contextual_data|description|name|quantity|sku |unit_price|
+---------+-----+---------------------------+--------------+---------------+-----------+----+--------+----+----------+
|307131663|0    |line_item_1345678912345678M|{}            |{}             |null       |loan|1       |null|116000    |
|307131663|1    |line_item_123456789        |{}            |{}             |null       |Test|1       |null|1234567   |
|307131663|2    |line_item_2kZgNnPXvEgnKCAaM|{}            |{}             |null       |loan|1       |null|1234      |
|040013496|0    |line_item_1345678912345678M|{}            |{}             |null       |loan|1       |null|116000    |
|040013496|1    |line_item_123456789        |{}            |{}             |null       |Test|1       |null|1234567   |
|040013496|2    |line_item_2kZgNnPXvEgnKCAaM|{}            |{}             |null       |loan|1       |null|1234      |
+---------+-----+---------------------------+--------------+---------------+-----------+----+--------+----+----------+

I have already tried using posexplode, but it changes my DataFrame schema by adding col and pos columns, so I modified the function like this:

def getArrayDataFrame(df: DataFrame): ListBuffer[DataFrame] = {
  df.schema
    .filter(field => field.dataType.typeName == "array")
    .map { field =>
      println(s"This is the name of the field ${field.name}")

      // posexplode produces "pos" and "col" columns instead of the struct's fields
      val testDF = df.select($"_id", posexplode(df.col(s"${field.name}")))
      testDF.printSchema()

      // flattenSchema is a helper (not shown) that expands nested struct columns
      val newDF = testDF.select(flattenSchema(testDF.schema): _*)
      newDF.printSchema()
      newDF
    }
    .to[ListBuffer]
}

So, how can I get the index of the exploded column without changing the DataFrame schema?

1 Answer:

Answer 0 (score: 1)

To add an index column per group, use a Window with partitionBy() together with the row_number() function from the Spark SQL functions.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

df.withColumn("index", row_number(Window.partitionBy("col_to_groupBy").orderBy("some_col")))

The row_number() function marks each row with an incrementing index. To make this happen per group, the DF is partitioned with the Window function partitionBy() on the grouping column (col_to_groupBy). Each group also needs an internal ordering, so orderBy() sorts it by some column (some_col). In your example the order doesn't matter, so you can pick any column you like.
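
Applied to the flattened DataFrame from the question, it looks roughly like this (a minimal sketch: explodedDF stands for the exploded DataFrame shown above, with the struct's inner _id assumed to be renamed to line_item_id so that "_id" is unambiguous; row_number() is 1-based, so 1 is subtracted to match the 0-based index in the desired output):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Group rows by the outer _id; the order inside each group is arbitrary here,
// so any column can be used in orderBy.
val w = Window.partitionBy("_id").orderBy("line_item_id")

// row_number() numbers rows 1, 2, 3, ... per partition; subtract 1 for a 0-based index.
val indexedDF = explodedDF.withColumn("index", row_number().over(w) - 1)
indexedDF.show(false)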