I have a DataFrame df1 with the following format:
+--------------------------+
|DateInfos |
+--------------------------+
|[[3, A, 111], [4, B, 222]]|
|[[1, C, 333], [2, D, 444]]|
|[[5, E, 555]] |
+--------------------------+
I want to concatenate the second and third elements of each Tuple3 with the separator "-" to get df2:
+------------------------+
|DateInfos |
+------------------------+
|[[3, A-111], [4, B-222]]|
|[[1, C-333], [2, D-444]]|
|[[5, E-555]] |
+------------------------+
I printed the schema of df1:
root
 |-- DateInfos: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- _1: integer (nullable = false)
 |    |    |-- _2: string (nullable = true)
 |    |    |-- _3: string (nullable = true)
I assume I have to create a UDF that uses a function with the following signature:
def concatDF1(array: Array[(Int, String, String)]): Array[(Int, String)] = {
  val res = array.map(elem => (elem._1, elem._2 + "-" + elem._3))
  res
}
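For what it's worth, a quick check on a plain Scala array (outside Spark, with hypothetical sample data) behaves as expected:

// Hypothetical quick check on plain Scala tuples, outside Spark:
// the function itself works, the failure only appears inside the UDF.
val sample = Array((3, "A", "111"), (4, "B", "222"))
concatDF1(sample).foreach(println)  // prints (3,A-111) and (4,B-222)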
I execute the method like this:
val concat_udf = sqlContext.udf.register("concat_udf", concatDF1 _)
val df2_temp = df1.withColumn("DateInfos_temp", concat_udf(df1("DateInfos")))
val df2 = df2_temp.drop("DateInfos").withColumnRenamed("DateInfos_temp", "DateInfos")
I get this error:
Caused by: org.apache.spark.SparkException: Failed to execute user defined function(anonfun$4: (array<struct<_1:int,_2:string,_3:string>>) => array<struct<_1:int,_2:string>>)
Do you have any ideas?
Answer 0 (score: 1)
This should do the job:
import org.apache.spark.sql._
import org.apache.spark.sql.functions._

val sparkSession = ...
import sparkSession.implicits._

val input = sparkSession.sparkContext.parallelize(Seq(
  Seq((3, "A", 111), (4, "B", 222)),
  Seq((1, "C", 333), (2, "D", 444)),
  Seq((5, "E", 555))
)).toDF("DateInfos")

// The elements of an array<struct<...>> column arrive in a UDF as Rows,
// not as Scala tuples, so the UDF pattern-matches on Row.
val concatElems = udf { seq: Seq[Row] =>
  seq.map { case Row(x: Int, y: String, z: Int) =>
    (x, s"$y-$z")
  }
}

val output = input.select(concatElems($"DateInfos").as("DateInfos"))
output.show(truncate = false)
which outputs:
+----------------------+
|DateInfos |
+----------------------+
|[[3,A-111], [4,B-222]]|
|[[1,C-333], [2,D-444]]|
|[[5,E-555]] |
+----------------------+
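Note that the sample data above uses an Int for the third struct field, while in the question's schema `_3` is a string. A minimal variant for that case (the names concatElemsStr and dfFixed are placeholders) would be:

// Sketch assuming the question's schema, where _3 is a string rather than an Int.
val concatElemsStr = udf { seq: Seq[Row] =>
  seq.map { case Row(x: Int, y: String, z: String) =>
    (x, s"$y-$z")
  }
}

val dfFixed = df1.select(concatElemsStr($"DateInfos").as("DateInfos"))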