Operating within a group and populating other columns

Asked: 2017-12-10 20:29:56

Tags: scala apache-spark-sql

I have a dataframe like the following:

+------+------+---+------+
|field1|field2|id |Amount|
+------+------+---+------+
|A     |B     |002|10.0  |
|A     |B     |003|12.0  |
|A     |B     |005|15.0  |
|C     |B     |002|20.0  |
|C     |B     |003|22.0  |
|C     |B     |005|25.0  |
+------+------+---+------+
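
For reference, this sample dataframe can be built as follows (a minimal sketch, not part of the original question, assuming a SparkSession named spark is in scope):

import spark.implicits._

// hypothetical construction of the sample data shown above
val df = Seq(
  ("A", "B", "002", 10.0),
  ("A", "B", "003", 12.0),
  ("A", "B", "005", 15.0),
  ("C", "B", "002", 20.0),
  ("C", "B", "003", 22.0),
  ("C", "B", "005", 25.0)
).toDF("field1", "field2", "id", "Amount")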

I need to transform it into:

+------+------+---+-------+---+-------+---+-------+
|field1|field2|002|002_Amt|003|003_Amt|005|005_Amt|
+------+------+---+-------+---+-------+---+-------+
|A     |B     |002|10.0   |003|12.0   |005|15.0   |
|C     |B     |002|20.0   |003|22.0   |005|25.0   |
+------+------+---+-------+---+-------+---+-------+

Any advice would be appreciated!

1 answer:

Answer 0 (score: 1)

The columns of your final dataframe depend on the id column, so you first need to collect the distinct ids into a separate array.

import scala.collection.mutable
import org.apache.spark.sql.functions._
import spark.implicits._   // for the $"..." column syntax used below

// collect every id into a single array on the driver and keep only the distinct values
val distinctIds = df.select(collect_list("id")).rdd.first().get(0).asInstanceOf[mutable.WrappedArray[String]].distinct
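
As a side note, the same list can be obtained without dropping to the RDD API; a drop-in alternative sketch, not from the original answer (the sort is only there to make the resulting column order deterministic):

// alternative: select the distinct ids directly and collect them as strings
val distinctIds = df.select("id").distinct().collect().map(_.getString(0)).sorted.toSeq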

The next step is to filter the dataframe once for each of the distinctIds and join the filtered results together:

val first = distinctIds.head
// start with the slice for the first id, renaming its columns
var finalDF = df.filter($"id" === first).withColumnRenamed("id", first).withColumnRenamed("Amount", first + "_Amt")
// join the slice for each remaining id onto the accumulated result
for (str <- distinctIds.tail) {
  val tempDF = df.filter($"id" === str).withColumnRenamed("id", str).withColumnRenamed("Amount", str + "_Amt")
  finalDF = finalDF.join(tempDF, Seq("field1", "field2"), "left")
}
finalDF.show(false)

This should give you the desired output:

+------+------+---+-------+---+-------+---+-------+
|field1|field2|002|002_Amt|003|003_Amt|005|005_Amt|
+------+------+---+-------+---+-------+---+-------+
|A     |B     |002|10.0   |003|12.0   |005|15.0   |
|C     |B     |002|20.0   |003|22.0   |005|25.0   |
+------+------+---+-------+---+-------+---+-------+
Using var is never recommended in Scala, so you can instead express the logic above as a recursive function, as follows:

import org.apache.spark.sql.DataFrame

// recursively take one id at a time, rename its columns, and join it onto the accumulator tdf
def getFinalDF(first: Boolean, array: List[String], df: DataFrame, tdf: DataFrame): DataFrame = array match {
  case head :: tail =>
    if (first) {
      // first id: its filtered slice becomes the initial accumulator
      getFinalDF(false, tail, df, df.filter($"id" === head).withColumnRenamed("id", head).withColumnRenamed("Amount", head + "_Amt"))
    } else {
      val tempDF = df.filter($"id" === head).withColumnRenamed("id", head).withColumnRenamed("Amount", head + "_Amt")
      getFinalDF(false, tail, df, tdf.join(tempDF, Seq("field1", "field2"), "left"))
    }
  case Nil => tdf  // no ids left: return the accumulated dataframe
}

and call the recursive function as

getFinalDF(true, distinctIds.toList, df, df).show(false)

and you should get the same output.
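
A further note, not part of the original answer: the same chain of joins can also be written with a foldLeft, which avoids both var and the explicit first flag. A minimal sketch (pivotById is a hypothetical name; it assumes distinctIds is non-empty and spark.implicits._ is imported):

// fold the remaining ids over the slice for the first id, joining one slice per step
def pivotById(ids: List[String], df: DataFrame): DataFrame = {
  def slice(id: String): DataFrame =
    df.filter($"id" === id).withColumnRenamed("id", id).withColumnRenamed("Amount", id + "_Amt")
  ids.tail.foldLeft(slice(ids.head)) { (acc, id) =>
    acc.join(slice(id), Seq("field1", "field2"), "left")
  }
}

pivotById(distinctIds.toList, df).show(false)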