How to compute the cumulative sum of multiple float columns?

Asked: 2020-01-31 12:43:50

Tags: scala apache-spark apache-spark-sql

I have a DataFrame with 100 float columns, ordered by date.

ID   Date         C1     C2     ...  C100
1    02/06/2019   32.09  45.06        99
1    02/04/2019   32.09  45.06        99
2    02/03/2019   32.09  45.06        99
2    05/07/2019   32.09  45.06        99

I need to compute the cumulative sum of C1 through C100, partitioned by ID and ordered by Date.

The target DataFrame should look like this:

ID   Date         C1     C2     ...  C100
1    02/04/2019   32.09  45.06        99
1    02/06/2019   64.18  90.12       198
2    02/03/2019   32.09  45.06        99
2    05/07/2019   64.18  90.12       198

I want to achieve this without looping over C1 to C100.

My initial code for a single column:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, sum}

val DF1 = DF.withColumn("CumSum_c1", sum("C1").over(
  Window.partitionBy("ID")
    .orderBy(col("Date").asc)))
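
Side note: if Date is stored as a MM/dd/yyyy string, as the sample suggests, orderBy sorts it lexicographically rather than chronologically. A minimal sketch of parsing it first, assuming that format (DFParsed is just an illustrative name):

import org.apache.spark.sql.functions.to_date

// Convert the MM/dd/yyyy strings to DateType so orderBy is chronological.
val DFParsed = DF.withColumn("Date", to_date(col("Date"), "MM/dd/yyyy"))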

I found a similar question here, but there it was done manually for two columns: Cumulative sum in Spark
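
For contrast, that manual pattern written out per column would look roughly like this (a sketch; w is the same window as in my initial code), which clearly does not scale to 100 columns:

val w = Window.partitionBy("ID").orderBy(col("Date").asc)

val manual = DF
  .withColumn("CumSum_c1", sum("C1").over(w))
  .withColumn("CumSum_c2", sum("C2").over(w))
  // ...98 more nearly identical lines for C3 through C100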

2 Answers:

Answer 0 (score: 5)

This is a classic use case for foldLeft. Let's generate some data first:

import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._
import spark.implicits._ // enables the 'colName symbol syntax below

val df = spark.range(1000)
  .withColumn("c1", 'id + 3)
  .withColumn("c2", 'id % 2 + 1)
  .withColumn("date", monotonically_increasing_id)
  .withColumn("id", 'id % 10 + 1)

// We will select the columns we want to compute the cumulative sum of.       
val columns = df.drop("id", "date").columns

val w = Window.partitionBy(col("id")).orderBy(col("date").asc) 

val results = columns.foldLeft(df) { (tmp, column) =>
  tmp.withColumn(s"cum_sum_$column", sum(column).over(w))
}

results.orderBy("id", "date").show
// +---+---+---+-----------+----------+----------+
// | id| c1| c2|       date|cum_sum_c1|cum_sum_c2|
// +---+---+---+-----------+----------+----------+
// |  1|  3|  1|          0|         3|         1|
// |  1| 13|  1|         10|        16|         2|
// |  1| 23|  1|         20|        39|         3|
// |  1| 33|  1|         30|        72|         4|
// |  1| 43|  1|         40|       115|         5|
// |  1| 53|  1| 8589934592|       168|         6|
// |  1| 63|  1| 8589934602|       231|         7|
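
Applied to the question's DataFrame, the same fold can reuse each original column name so the result matches the target layout (a sketch; DF, ID and Date are the names from the question):

val w2 = Window.partitionBy(col("ID")).orderBy(col("Date").asc)
val toSum = DF.columns.filterNot(Set("ID", "Date"))

// Overwrite C1..C100 in place rather than adding cum_sum_* columns.
val cumulative = toSum.foldLeft(DF) { (tmp, c) =>
  tmp.withColumn(c, sum(col(c)).over(w2))
}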

Answer 1 (score: 1)

Here is another way, using a simple select expression:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, sum}
import spark.implicits._ // enables the $"colName" syntax below

val w = Window.partitionBy($"ID").orderBy($"Date".asc)
  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

// get the columns you want to sum
val columnsToSum = df.drop("ID", "Date").columns

// map over those columns and create the new sum columns
val selectExpr = Seq(col("ID"), col("Date")) ++ columnsToSum.map(c => sum(col(c)).over(w).alias(c))

df.select(selectExpr: _*).show()

Which gives:

+---+----------+-----+-----+----+                                               
| ID|      Date|   C1|   C2|C100|
+---+----------+-----+-----+----+
|  1|02/04/2019|32.09|45.06|  99|
|  1|02/06/2019|64.18|90.12| 198|
|  2|02/03/2019|32.09|45.06|  99|
|  2|05/07/2019|64.18|90.12| 198|
+---+----------+-----+-----+----+
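
One detail worth noting: with an orderBy but no explicit frame (as in the first answer), Spark defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so rows that tie on Date all receive the sum of the whole tie group, whereas the explicit rowsBetween frame here advances one row at a time. A sketch comparing the two (column names from the question):

val rangeW = Window.partitionBy($"ID").orderBy($"Date") // default RANGE frame
val rowsW  = rangeW.rowsBetween(Window.unboundedPreceding, Window.currentRow)

// On duplicate dates, range_sum repeats the tie-group total while
// rows_sum increases row by row.
df.select($"ID", $"Date",
  sum($"C1").over(rangeW).alias("range_sum"),
  sum($"C1").over(rowsW).alias("rows_sum")).show()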