I want to group by one column and sum the values of another specific column, then add the result back to the same DataFrame for further calculations.
I achieved this as follows:
from pyspark.sql.functions import sum

travelGroup = travel.groupBy("day").agg(sum("action").cast("int").alias("dayCount"))
travel = travel.join(travelGroup, ["day"], "left_outer").na.fill(0)
However, I want to check whether this is the right approach, or whether there is a more optimized way to do it.
travel - DataFrame
+---+------+
|day|action|
+---+------+
|TUE|     5|
|WED|     7|
|TUE|     2|
|FRI|     1|
|TUE|     6|
|FRI|     3|
+---+------+
Result - DataFrame
+---+------+--------+
|day|action|dayCount|
+---+------+--------+
|TUE|     5|      13|
|WED|     7|       7|
|TUE|     2|      13|
|FRI|     1|       4|
|TUE|     6|      13|
|FRI|     3|       4|
+---+------+--------+
Answer (score: 1)
You can adapt a window function to do this. One example found on the internet:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.avg
import spark.implicits._  // needed for .toDS outside of spark-shell

case class Salary(depName: String, empNo: Long, salary: Long)

val empsalary = Seq(
  Salary("sales", 1, 5000),
  Salary("personnel", 2, 3900),
  Salary("sales", 3, 4800),
  Salary("sales", 4, 4800),
  Salary("personnel", 5, 3500),
  Salary("develop", 7, 4200),
  Salary("develop", 8, 6000),
  Salary("develop", 9, 4500),
  Salary("develop", 10, 5200),
  Salary("develop", 11, 5200)).toDS

// Partition the rows by department and compute the average salary
// over each partition, attached to every row as a new column.
val byDepName = Window.partitionBy('depName)
empsalary.withColumn("avg", avg('salary) over byDepName).show
https://spark.apache.org/docs/2.2.0/api/java/org/apache/spark/sql/expressions/Window.html
PySpark window functions - https://www.arundhaj.com/blog/calculate-difference-with-previous-row-in-pyspark.html
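Applied to the travel DataFrame from the question, a minimal PySpark sketch of the same window approach (assuming an existing SparkSession named spark) could look like this; it adds dayCount directly, without the groupBy-plus-join round trip:

from pyspark.sql import Window
from pyspark.sql import functions as F

# Sample data matching the question's travel DataFrame.
travel = spark.createDataFrame(
    [("TUE", 5), ("WED", 7), ("TUE", 2), ("FRI", 1), ("TUE", 6), ("FRI", 3)],
    ["day", "action"])

# Sum `action` over a per-day partition; every row in the same
# partition receives the same total, so no join is required.
byDay = Window.partitionBy("day")
travel = travel.withColumn("dayCount", F.sum("action").over(byDay).cast("int"))
travel.show()

Whether this outperforms the groupBy-plus-join version depends on your data; comparing both physical plans with explain() is a reasonable way to check.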