I want to replace all `n/a` values in the following DataFrame with `unknown`. The value can appear in a scalar column or inside a complex nested column. If it were a simple StructField column, I could iterate over the columns and replace `n/a` using `withColumn`. However, I'd like to do this in a generic way by inspecting the schema types, because I don't want to list the column names explicitly; in my case there are around 100 of them.
case class Bar(x: Int, y: String, z: String)
case class Foo(id: Int, name: String, status: String, bar: Seq[Bar])
val df = spark.sparkContext.parallelize(
  Seq(
    Foo(123, "Amy", "Active", Seq(Bar(1, "first", "n/a"))),
    Foo(234, "Rick", "n/a", Seq(Bar(2, "second", "fifth"), Bar(22, "second", "n/a"))),
    Foo(567, "Tom", "null", Seq(Bar(3, "second", "sixth")))
  )).toDF
df.printSchema
df.show(20, false)
Result:
+---+----+------+---------------------------------------+
|id |name|status|bar |
+---+----+------+---------------------------------------+
|123|Amy |Active|[[1, first, n/a]] |
|234|Rick|n/a |[[2, second, fifth], [22, second, n/a]]|
|567|Tom |null |[[3, second, sixth]] |
+---+----+------+---------------------------------------+
Expected output:
+---+----+----------+---------------------------------------------------+
|id |name|status |bar |
+---+----+----------+---------------------------------------------------+
|123|Amy |Active |[[1, first, unknown]] |
|234|Rick|unknown |[[2, second, fifth], [22, second, unknown]] |
|567|Tom |null |[[3, second, sixth]] |
+---+----+----------+---------------------------------------------------+
Any suggestions on how to do this?
Answer 0 (score: 1)
If you don't mind working with RDDs, here is a simple, generic solution:
val naToUnknown = { r: Row =>
  // Recursively walk nested Rows and Seqs, replacing "n/a" strings.
  def rec(v: Any): Any = v match {
    case row: Row                => Row.fromSeq(row.toSeq.map(rec))
    case seq: Seq[_]             => seq.map(rec)
    case s: String if s == "n/a" => "unknown"
    case _                       => v
  }
  Row.fromSeq(r.toSeq.map(rec))
}
val newDF = spark.createDataFrame(df.rdd.map{naToUnknown}, df.schema)
newDF.show(false)
Output:
+---+----+-------+-------------------------------------------+
|id |name|status |bar |
+---+----+-------+-------------------------------------------+
|123|Amy |Active |[[1, first, unknown]] |
|234|Rick|unknown|[[2, second, fifth], [22, second, unknown]]|
|567|Tom |null |[[3, second, sixth]] |
+---+----+-------+-------------------------------------------+
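The recursion itself does not depend on Spark. A minimal plain-Scala sketch, using nested `Seq[Any]` values as a stand-in for Spark `Row`s (an assumption for illustration only, not the Spark API), shows how `n/a` gets replaced at any nesting depth:

```scala
// Sketch: walk an arbitrarily nested structure and replace "n/a".
// Seq[Any] stands in for Spark's Row; the cases mirror naToUnknown above.
def rec(v: Any): Any = v match {
  case seq: Seq[_]             => seq.map(rec)
  case s: String if s == "n/a" => "unknown"
  case other                   => other
}

val row = Seq(234, "Rick", "n/a",
  Seq(Seq(2, "second", "fifth"), Seq(22, "second", "n/a")))
val cleaned = rec(row)
// cleaned == Seq(234, "Rick", "unknown",
//   Seq(Seq(2, "second", "fifth"), Seq(22, "second", "unknown")))
```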
Answer 1 (score: 0)
You can define a UDF that processes each item of the array and replaces the target value:
val replaceNA = udf((x: Row) => {
  val z = x.getString(2)
  if (z == "n/a")
    Bar(x.getInt(0), x.getString(1), "unknown")
  else
    Bar(x.getInt(0), x.getString(1), x.getString(2))
})
Once you have the UDF, explode the DataFrame so that each item of `bar` gets its own row:
val explodedDF = df.withColumn("exploded", explode($"bar"))
+---+----+------+--------------------+------------------+
| id|name|status| bar| exploded|
+---+----+------+--------------------+------------------+
|123| Amy|Active| [[1, first, n/a]]| [1, first, n/a]|
|234|Rick| n/a|[[2, second, fift...|[2, second, fifth]|
|234|Rick| n/a|[[2, second, fift...| [22, second, n/a]|
|567| Tom| null|[[3, second, sixth]]|[3, second, sixth]|
+---+----+------+--------------------+------------------+
Then apply the UDF defined above to replace the items:
val replacedDF = explodedDF.withColumn("exploded", replaceNA($"exploded"))
+---+----+------+--------------------+--------------------+
| id|name|status|                 bar|            exploded|
+---+----+------+--------------------+--------------------+
|123| Amy|Active|  [[1, first, n/a]]| [1, first, unknown]|
|234|Rick|   n/a|[[2, second, fift...|  [2, second, fifth]|
|234|Rick|   n/a|[[2, second, fift...|[22, second, unkn...|
|567| Tom|  null|[[3, second, sixth]]|  [3, second, sixth]|
+---+----+------+--------------------+--------------------+
Finally, group by the remaining columns and use `collect_list` to restore the original shape:
val resultDF = replacedDF.groupBy("id", "name", "status")
  .agg(collect_list("exploded").as("bar"))
resultDF.show(false)
+---+----+------+-------------------------------------------+
|id |name|status|bar                                        |
+---+----+------+-------------------------------------------+
|234|Rick|n/a   |[[2, second, fifth], [22, second, unknown]]|
|567|Tom |null  |[[3, second, sixth]]                       |
|123|Amy |Active|[[1, first, unknown]]                      |
+---+----+------+-------------------------------------------+
Putting it all together:
import org.apache.spark.sql._
import org.apache.spark.sql.functions._

val replaceNA = udf((x: Row) => {
  val z = x.getString(2)
  if (z == "n/a")
    Bar(x.getInt(0), x.getString(1), "unknown")
  else
    Bar(x.getInt(0), x.getString(1), x.getString(2))
})
df.withColumn("exploded", explode($"bar"))
.withColumn("exploded", replaceNA($"exploded"))
.groupBy("id", "name", "status")
.agg(collect_list("exploded").as("bar"))
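The explode, replace, regroup pipeline can be sanity-checked without Spark. A small sketch on ordinary Scala collections, reusing the question's `Bar` case class, shows the per-item replacement step in isolation:

```scala
// Plain-collections analogue of the pipeline above: each Bar is one
// "exploded" row; the map plays the role of the UDF; the resulting
// Seq is the regrouped collect_list.
case class Bar(x: Int, y: String, z: String)

val bars = Seq(Bar(2, "second", "fifth"), Bar(22, "second", "n/a"))

val cleaned = bars.map(b => if (b.z == "n/a") b.copy(z = "unknown") else b)
// cleaned == Seq(Bar(2, "second", "fifth"), Bar(22, "second", "unknown"))
```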
Answer 2 (score: 0)
Replacing nested values is easy when you only have simple columns and structs. For array fields, you have to explode the structs before replacing them, or use UDFs / higher-order functions; see my other answer here.
You can define a generic function that traverses the DataFrame schema and applies a lambda function `func` to replace whatever you want:
def replaceNestedValues(schema: StructType, func: Column => Column, path: Option[String] = None): Seq[Column] = {
  schema.fields.map(f => {
    val p = path.fold(s"`${f.name}`")(c => s"$c.`${f.name}`")
    f.dataType match {
      case s: StructType => struct(replaceNestedValues(s, func, Some(p)): _*).alias(f.name)
      case _             => func(col(p)).alias(f.name)
    }
  })
}
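The only non-obvious part is how `path.fold` composes backtick-quoted column references as the recursion descends into structs. The string logic can be checked in isolation (a minimal sketch; `buildPath` is a hypothetical name extracted from the function above):

```scala
// Isolated sketch of the path builder used in replaceNestedValues:
// None means a top-level field; Some(p) means a field nested under p.
def buildPath(name: String, path: Option[String]): String =
  path.fold(s"`$name`")(p => s"$p.`$name`")

val top    = buildPath("status", None)      // "`status`"
val nested = buildPath("z", Some("`bar`"))  // "`bar`.`z`"
```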
Before using this function, explode the array of structs `bar` like this:
val df2 = df.select($"id", $"name", $"status", explode($"bar").alias("bar"))
Then define a lambda function that takes a column and, using `when/otherwise`, replaces its value with `unknown` when it equals `n/a`. Apply the transformation to the columns with the function defined above:
val replaceNaFunc: Column => Column = c => when(c === lit("n/a"), lit("unknown")).otherwise(c)
val replacedCols = replaceNestedValues(df2.schema, replaceNaFunc)
Select the new columns and `groupBy` to rebuild the `bar` array:
df2.select(replacedCols: _*).groupBy($"id", $"name", $"status").agg(collect_list($"bar").alias("bar")).show(false)
Gives:
+---+----+-------+-------------------------------------------+
|id |name|status |bar |
+---+----+-------+-------------------------------------------+
|234|Rick|unknown|[[2, second, fifth], [22, second, unknown]]|
|123|Amy |Active |[[1, first, unknown]] |
|567|Tom |null |[[3, second, sixth]] |
+---+----+-------+-------------------------------------------+