How to update a Spark DataFrame column containing an array using a UDF

Asked: 2019-10-29 06:06:43

Tags: scala apache-spark apache-spark-sql user-defined-functions

I have a DataFrame:

+--------------------+------+
|people              |person|
+--------------------+------+
|[[jack, jill, hero]]|joker |
+--------------------+------+

Its schema:

root
 |-- people: struct (nullable = true)
 |    |-- person: array (nullable = true)
 |    |    |-- element: string (containsNull = true)
 |-- person: string (nullable = true)

Here, root -- person is a string, so I can update that field with a UDF:

import org.apache.spark.sql.functions.{col, udf}

def updateString = udf((s: String) => {
    "Mr. " + s
})
df.withColumn("person", updateString(col("person"))).select("person").show(false)

Output:

+---------+
|person   |
+---------+
|Mr. joker|
+---------+

I want to do the same for the root -- people -- person column, which contains an array of persons. How can I achieve this with a UDF?

import org.apache.spark.sql.Row

def updateArray = udf((arr: Seq[Row]) => ???)
df.withColumn("people", updateArray(col("people.person"))).select("people").show(false)

Expected:

+------------------------------+
|people                        |
+------------------------------+
|[Mr. hero, Mr. jack, Mr. jill]|
+------------------------------+

Edit: I also want to preserve the schema of root -- people -- person after updating it.

Expected schema of people:

df.select("people").printSchema()

root
 |-- people: struct (nullable = false)
 |    |-- person: array (nullable = true)
 |    |    |-- element: string (containsNull = true)

Thanks

3 Answers:

Answer 0 (score: 1)

Since you only need to update the function, everything else stays the same. Here is the code snippet.

scala> df2.show
+------+------------------+
|people|            person|
+------+------------------+
| joker|[jack, jill, hero]|
+------+------------------+
// just the column order is changed
I just updated your function: instead of Row, I am using Seq[String] here:

scala> def updateArray = udf((arr: Seq[String]) => arr.map(x=>"Mr."+x))
scala> df2.withColumn("test",updateArray($"person")).show(false)
+------+------------------+---------------------------+
|people|person            |test                       |
+------+------------------+---------------------------+
|joker |[jack, jill, hero]|[Mr.jack, Mr.jill, Mr.hero]|
+------+------------------+---------------------------+
// kept all the columns for testing purposes; you can drop them if you don't want them

Let me know if you'd like to explore this further.
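
Note that this answer works on a flat array column, whereas in the question the array is nested inside the people struct. A minimal sketch of adapting the same Seq[String] UDF to the asker's original nested schema (fixed and updatePersons are hypothetical names) is to update people.person and rebuild the struct explicitly:

import org.apache.spark.sql.functions.{col, struct, udf}

// extract the nested array, prefix each element, and rewrap it in a
// struct so that people.person keeps its field name
def updatePersons = udf((arr: Seq[String]) => arr.map("Mr. " + _))
val fixed = df.withColumn("people",
  struct(updatePersons(col("people.person")).as("person")))
fixed.select("people").printSchema()

A struct built with struct() is reported as non-nullable, which also matches the expected schema in the question's edit.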

Answer 1 (score: 1)

The problem here is that people is a struct with only one field. In your UDF, you need to return a Tuple1, and then cast the UDF's output so the field name stays correct:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf
// $"..." column syntax also needs: import spark.implicits._

def updateArray = udf((r: Row) => Tuple1(r.getAs[Seq[String]](0).map(x => "Mr." + x)))

val newDF = df
  .withColumn("people", updateArray($"people").cast("struct<person:array<string>>"))

newDF.printSchema()
newDF.show()

which gives:

root
 |-- people: struct (nullable = true)
 |    |-- person: array (nullable = true)
 |    |    |-- element: string (containsNull = true)
 |-- person: string (nullable = true)


+--------------------+------+
|              people|person|
+--------------------+------+
|[[Mr.jack, Mr.jil...| joker|
+--------------------+------+
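
Why the cast is needed: a Scala UDF that returns a Tuple1 produces a struct column whose single field is named _1, and the cast renames it to person. A quick way to see this (same pipeline as above, with the cast removed) is roughly:

df.withColumn("people", updateArray($"people")).select("people").printSchema()
// root
//  |-- people: struct (nullable = true)
//  |    |-- _1: array (nullable = true)
//  |    |    |-- element: string (containsNull = true)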

Answer 2 (score: 0)

Let's create some data for testing:

scala> val data = Seq((List(Array("ja", "ji", "he")), "person")).toDF("people", "person")
data: org.apache.spark.sql.DataFrame = [people: array<array<string>>, person: string]

scala> data.printSchema
root
 |-- people: array (nullable = true)
 |    |-- element: array (containsNull = true)
 |    |    |-- element: string (containsNull = true)
 |-- person: string (nullable = true)

Create a UDF that meets our requirement:

scala> def arrayConcat(array:Seq[Seq[String]], str: String) = array.map(_.map(str + _))
arrayConcat: (array: Seq[Seq[String]], str: String)Seq[Seq[String]]

scala> val arrayConcatUDF = udf(arrayConcat _)
arrayConcatUDF: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function2>,ArrayType(ArrayType(StringType,true),true),Some(List(ArrayType(ArrayType(StringType,true),true), StringType)))

Apply the UDF:

scala> data.withColumn("dasd", arrayConcatUDF($"people", lit("Mr."))).show(false)
+--------------------------+------+-----------------------------------+
|people                    |person|dasd                               |
+--------------------------+------+-----------------------------------+
|[WrappedArray(ja, ji, he)]|person|[WrappedArray(Mr.ja, Mr.ji, Mr.he)]|
+--------------------------+------+-----------------------------------+

You may need some fine-tuning (I think hardly any), but this covers most of what you need to solve the problem.
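
As a side note, on Spark 2.4 or later the same per-element update can be written without a UDF, using the transform higher-order function; a minimal sketch against the data DataFrame above:

import org.apache.spark.sql.functions.expr

// transform maps a SQL lambda over each element of the nested arrays,
// avoiding the serialization overhead of a Scala UDF (Spark 2.4+ only)
data.withColumn("dasd",
  expr("transform(people, a -> transform(a, x -> concat('Mr.', x)))"))
  .show(false)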