Update a dataframe based on specific conditions

Date: 2019-09-03 04:26:47

Tags: pyspark user-defined-functions pyspark-sql

My dataframe looks like this:

RankNumber  Value   Dept Number
  5          200    5
  4          200    5
  3          205    5
  2          198    5
  1          197    5
  5          200    6
  4          202    6
  3          205    6
  2          198    6
  1          194    6

I want to update certain cells in the Value column of the dataframe. If the current Value is greater than the previous value, it should be updated to the previous value. If the Value is equal to or less than the previous value, it should be left alone. The data is grouped by Dept Number.

I am trying to do this in PySpark but cannot find a way to implement it. Can anyone help?

The expected result is as follows:

RankNumber  Value  Dept Number
  5         200     5
  4         200     5
  3         200     5 (record updated)
  2         198     5
  1         197     5
  5         200     6
  4         200     6 (record updated)
  3         200     6 (record updated)
  2         198     6
  1         194     6
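For reference, a minimal sketch to reproduce this sample data in PySpark (this assumes a SparkSession named spark; the column is named DeptNumber, without the space, to match the answer's code below):

listOfTuples = [(5, 200, 5), (4, 200, 5), (3, 205, 5), (2, 198, 5), (1, 197, 5),
                (5, 200, 6), (4, 202, 6), (3, 205, 6), (2, 198, 6), (1, 194, 6)]

df = spark.createDataFrame(listOfTuples, ["RankNumber", "Value", "DeptNumber"])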


1 Answer:

Answer 0 (score: 3)

I believe your 8th row would be updated to '3 202 6 (record updated)' rather than '3 200 6 (record updated)', because its previous value is '202' and the current value '205' is greater than that previous '202'.

from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.functions import desc, when

# Window per department, ordered by descending rank (highest rank first)
w = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber"))

# previous_value = the previous row's Value; for the first row in each
# department, fall back to the row's own Value
df = df.withColumn('previous_value', F.coalesce(F.lag(df['Value'], 1).over(w), df['Value']))

If Value is greater than the previous value, the code below replaces it with the previous value:

newdf = df.select(df.RankNumber, df.DeptNumber, df.Value, df.previous_value,
                  when(df.Value <= df.previous_value, df.Value)
                  .otherwise(df.previous_value).alias('newValue'))

>>> newdf.show()
+----------+----------+-----+--------------+--------+
|RankNumber|DeptNumber|Value|previous_value|newValue|
+----------+----------+-----+--------------+--------+
|         5|         6|  200|           200|     200|
|         4|         6|  202|           200|     200|
|         3|         6|  205|           202|     202|
|         2|         6|  198|           205|     198|
|         1|         6|  194|           198|     194|
|         5|         5|  200|           200|     200|
|         4|         5|  200|           200|     200|
|         3|         5|  205|           200|     200|
|         2|         5|  198|           205|     198|
|         1|         5|  197|           198|     197|
+----------+----------+-----+--------------+--------+

The code below takes the minimum of the previous values as the new value:

from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.functions import desc, when

w = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber"))

df = df.withColumn('previous_value', F.coalesce(F.lag(df['Value'], 1).over(w), df['Value']))

# If the previous row's previous_value was already <= the current previous_value,
# take the first previous_value in the window instead of the immediate one
newdf = df.select(df.RankNumber, df.DeptNumber, df.Value, df.previous_value,
                  when(df.Value <= df.previous_value, df.Value)
                  .when(F.lag(df['previous_value'], 1).over(w) <= df.previous_value,
                        F.first(df.previous_value).over(w))
                  .otherwise(df.previous_value).alias('newValue'))


>>> newdf.show()
+----------+----------+-----+--------------+--------+
|RankNumber|DeptNumber|Value|previous_value|newValue|
+----------+----------+-----+--------------+--------+
|         5|         6|  200|           200|     200|
|         4|         6|  202|           200|     200|
|         3|         6|  205|           202|     200|
|         2|         6|  198|           205|     198|
|         1|         6|  194|           198|     194|
|         5|         5|  200|           200|     200|
|         4|         5|  200|           200|     200|
|         3|         5|  205|           200|     200|
|         2|         5|  198|           205|     198|
|         1|         5|  197|           198|     197|
+----------+----------+-----+--------------+--------+

If what you are looking for is instead the previous_value of the row just above within the group, you need to change the code like this:

newdf = df.select(df.RankNumber, df.DeptNumber, df.Value, df.previous_value,
                  when(df.Value <= df.previous_value, df.Value)
                  .when(F.lag(df['previous_value'], 1).over(w) <= df.previous_value,
                        F.lag(df['previous_value'], 1).over(w))
                  .otherwise(df.previous_value).alias('newValue'))

This results in:

>>> newdf.show()
+----------+----------+-----+--------------+--------+
|RankNumber|DeptNumber|Value|previous_value|newValue|
+----------+----------+-----+--------------+--------+
|         5|     Dept2|  100|           100|     100|
|         4|     Dept2|  102|           100|     100|
|         3|     Dept2|  105|           102|     100|
|         2|     Dept2|  198|           105|     102|
|         1|     Dept2|  194|           198|     194|
|         5|     Dept1|  200|           200|     200|
|         4|     Dept1|  202|           200|     200|
|         3|     Dept1|  205|           202|     200|
|         2|     Dept1|  198|           205|     198|
|         1|     Dept1|  194|           198|     194|
+----------+----------+-----+--------------+--------+

Update: Now, creating a new dataframe as described in the comments section below:

listOfTuples = [(5, 200, "Dept1"), (4, 202, "Dept1"), (3, 205, "Dept1"), (2, 198, "Dept1"), (1, 194, "Dept1"),
                (5, 100, "Dept2"), (4, 102, "Dept2"), (3, 105, "Dept2"), (2, 198, "Dept2"), (1, 194, "Dept2")]

df = spark.createDataFrame(listOfTuples, ["RankNumber", "Value", "DeptNumber"])


>>> df.show()
+----------+-----+----------+
|RankNumber|Value|DeptNumber|
+----------+-----+----------+
|         5|  200|     Dept1|
|         4|  202|     Dept1|
|         3|  205|     Dept1|
|         2|  198|     Dept1|
|         1|  194|     Dept1|
|         5|  100|     Dept2|
|         4|  102|     Dept2|
|         3|  105|     Dept2|
|         2|  198|     Dept2|
|         1|  194|     Dept2|
+----------+-----+----------+

I believe your intention is to look at the range between the current row and the preceding rows, and pick the minimum value whenever the first condition is met, i.e. when Value is greater than the previous value.

w1 = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber"))
# w2 adds an explicit frame from the start of the partition up to the current row,
# so running aggregates like F.min can be computed over it
w2 = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber")).rowsBetween(Window.unboundedPreceding, Window.currentRow)

df = df.withColumn('previous_value', F.coalesce(F.lag(df['Value'], 1).over(w1), df['Value']))

Here is your code:

newdf = df.select(df.RankNumber, df.DeptNumber, df.Value, df.previous_value,
                  when(df.Value <= df.previous_value, df.Value)
                  .otherwise(F.min(df.previous_value).over(w2)).alias('newValue'))

>>> newdf.show()
+----------+----------+-----+--------------+--------+
|RankNumber|DeptNumber|Value|previous_value|newValue|
+----------+----------+-----+--------------+--------+
|         5|     Dept2|  100|           100|     100|
|         4|     Dept2|  102|           100|     100|
|         3|     Dept2|  105|           102|     100|
|         2|     Dept2|  198|           105|     100|
|         1|     Dept2|  194|           198|     194|
|         5|     Dept1|  200|           200|     200|
|         4|     Dept1|  202|           200|     200|
|         3|     Dept1|  205|           202|     200|
|         2|     Dept1|  198|           205|     198|
|         1|     Dept1|  194|           198|     194|
+----------+----------+-----+--------------+--------+
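As an aside, for the asker's original sample data (departments 5 and 6) the expected output happens to equal the running minimum of Value within each department. If that simpler semantics is actually what is wanted, a shorter sketch is possible; this is an assumption, and it diverges from the output above for rows like Dept2 rank 1, where 194 is kept because it is below its immediate predecessor 198 even though it exceeds the earlier minimum 100:

from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.functions import desc

# Running minimum of Value from the top-ranked row down to the current row;
# assumes the goal is a per-department running minimum, not the per-row rule
w2 = Window.partitionBy("DeptNumber").orderBy(desc("RankNumber")) \
           .rowsBetween(Window.unboundedPreceding, Window.currentRow)

newdf = df.withColumn('newValue', F.min('Value').over(w2))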