Spark DataFrame - how to fill null values with consecutive integers?

Asked: 2016-05-06 14:29:41

Tags: python apache-spark dataframe pyspark spark-dataframe

Suppose I have a PySpark DataFrame like this:

KEY    VALUE
---    -----
623    "cat"
245    "dog"
null   "horse"
null   "pig"
331    "narwhal"
null   "snake"

How can I transform this DataFrame so that every null value in the KEY column is replaced by an integer from a sequence starting at 1? The desired result is:

KEY    VALUE
---    -----
623    "cat"
245    "dog"
1      "horse"
2      "pig"
331    "narwhal"
3      "snake"

1 Answer:

Answer 0 (score: 6)

I know you asked about Python, but perhaps the Scala equivalent will help. Essentially, you want to use the window function rank together with the function coalesce. First we define some test data:

// toDF on a Seq needs the session's implicits in scope:
// import spark.implicits._
val df = Seq(
  (Option(623), "cat"),
  (Option(245), "dog"),
  (None, "horse"),
  (None, "pig"),
  (Option(331), "narwhal"),
  (None, "snake")
).toDF("key", "value")

Then we rank the rows within each key. Because the window is partitioned by key, all of the null keys fall into the same partition and receive ranks 1, 2, 3, … (ordered by value). We then use coalesce to keep the original key where it exists and fall back to the new rank where it is null, and finally drop the rank column we created to clean up:

import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._

val window = Window.partitionBy(col("key")).orderBy(col("value"))
df.withColumn("rank", rank().over(window))
  .withColumn("key", coalesce(col("key"), col("rank")))
  .drop("rank")
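To make the intended transformation concrete without a Spark session, here is a minimal plain-Python sketch of the same idea: null keys are numbered 1, 2, 3, … while non-null keys are kept, which is what the coalesce(key, rank) step achieves. The rows mirror the question's test data; everything else here is illustrative, not Spark API.

```python
# Test data from the question: (key, value) pairs with some null keys.
rows = [
    (623, "cat"),
    (245, "dog"),
    (None, "horse"),
    (None, "pig"),
    (331, "narwhal"),
    (None, "snake"),
]

next_key = 0
filled = []
for key, value in rows:
    if key is None:
        next_key += 1        # emit 1, 2, 3, ... for each null encountered
        key = next_key       # "coalesce": fall back to the generated number
    filled.append((key, value))

print(filled)
# -> [(623, 'cat'), (245, 'dog'), (1, 'horse'), (2, 'pig'), (331, 'narwhal'), (3, 'snake')]
```

Note that the Scala answer numbers the nulls by value order within their partition ("horse", "pig", "snake"), which for this data coincides with row order, so this sketch produces the same result as the window-based approach.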