For example, say I have a df:
from pyspark.sql import Row
row = Row("v", "x", "y", "z")
df = sc.parallelize([
    row("p", 1, 2, 3.0), row("NULL", 3, "NULL", 5.0),
    row("NA", None, 6, 7.0), row("NaN", 8, "NULL", float("NaN"))
]).toDF()
Now I want to replace the strings NULL, NA, and NaN with PySpark's null (None) value. How can I do this across multiple columns?
from pyspark.sql.functions import when, lit, col
def replace(column, value):
    # Keep the value when it differs from the sentinel, otherwise null.
    return when(column != value, column).otherwise(lit(None))

df = df.withColumn("v", replace(col("v"), "NULL"))
df = df.withColumn("v", replace(col("v"), "NA"))
df = df.withColumn("v", replace(col("v"), "NaN"))
I'm trying to avoid writing this out for every column, since my dataframe can contain an arbitrary number of columns.
Thanks for your help!
Answer 0 (score: 2)
Loop over the columns, building a column expression for each one that replaces the specific strings with null, and then select those columns:
df.show()
+----+----+----+---+
| v| x| y| z|
+----+----+----+---+
| p| 1| 2|3.0|
|NULL| 3|null|5.0|
| NA|null| 6|7.0|
| NaN| 8|null|NaN|
+----+----+----+---+
import pyspark.sql.functions as F

# Keep each column's value when it is not one of the sentinel strings;
# when() without an otherwise() falls through to null for non-matches.
cols = [F.when(~F.col(x).isin("NULL", "NA", "NaN"), F.col(x)).alias(x) for x in df.columns]
df.select(*cols).show()
+----+----+----+----+
| v| x| y| z|
+----+----+----+----+
| p| 1| 2| 3.0|
|null| 3|null| 5.0|
|null|null| 6| 7.0|
|null| 8|null|null|
+----+----+----+----+
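As a side note, recent Spark versions can also do the string part in a single call with DataFrame.replace (replacing values with None is supported from roughly Spark 2.3 onward). Here is a minimal sketch under that assumption; note that replace only touches columns whose type matches the values being replaced, so the float NaN in z needs a separate pass with isnan:

import pyspark.sql.functions as F

# Replace the sentinel strings with null; this only affects string columns.
cleaned = df.replace(["NULL", "NA", "NaN"], None)

# Replace any remaining float/double NaNs (e.g. in column z) with null too.
cleaned = cleaned.select([
    F.when(F.isnan(F.col(c)), None).otherwise(F.col(c)).alias(c)
    if t in ("float", "double") else F.col(c)
    for c, t in cleaned.dtypes
])
cleaned.show()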