I want to transpose multiple columns in a Spark SQL table.
I only found this solution for two columns; I want to know how to do it with three columns varA, varB and varC.
import org.apache.spark.sql.functions.{udf, explode}
val zip = udf((xs: Seq[Long], ys: Seq[Long]) => xs.zip(ys))
df.withColumn("vars", explode(zip($"varA", $"varB"))).select(
$"userId", $"someString",
$"vars._1".alias("varA"), $"vars._2".alias("varB")).show
Here is my DataFrame schema:
root
 |-- owningcustomerid: string (nullable = true)
 |-- event_stoptime: string (nullable = true)
 |-- balancename: string (nullable = false)
 |-- chargedvalue: string (nullable = false)
 |-- newbalance: string (nullable = false)
I tried the following code:
val zip = udf((xs: Seq[String], ys: Seq[String], zs: Seq[String]) => (xs, ys, zs).zipped.toSeq)
df.printSchema
val df4 = df.withColumn("vars", explode(zip($"balancename", $"chargedvalue", $"newbalance"))).select(
  $"owningcustomerid", $"event_stoptime",
  $"vars._1".alias("balancename"), $"vars._2".alias("chargedvalue"), $"vars._3".alias("newbalance"))
I got this error:
cannot resolve 'UDF(balancename, chargedvalue, newbalance)' due to data type mismatch: argument 1 requires array<string> type, however, '`balancename`' is of string type. argument 2 requires array<string> type, however, '`chargedvalue`' is of string type. argument 3 requires array<string> type, however, '`newbalance`' is of string type.;;
'Project [owningcustomerid#1085, event_stoptime#1086, balancename#1159, chargedvalue#1160, newbalance#1161, explode(UDF(balancename#1159, chargedvalue#1160, newbalance#1161)) AS vars#1167]
Answer 0 (score: 1)
In general, in Scala, you can use Tuple3.zipped:
val zip = udf((xs: Seq[Long], ys: Seq[Long], zs: Seq[Long]) =>
(xs, ys, zs).zipped.toSeq)
zip($"varA", $"varB", $"varC")
In Spark SQL specifically (>= 2.4), you can use the arrays_zip function:
import org.apache.spark.sql.functions.arrays_zip
arrays_zip($"varA", $"varB", $"varC")
However, note that your data does not contain array&lt;string&gt; columns but plain strings, so Spark will not let you apply arrays_zip or explode to them; you should parse your data first.
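For instance, if each of those string columns held a delimiter-separated list (a comma delimiter is assumed here purely for illustration), one possible way to parse and then transpose would be:

import org.apache.spark.sql.functions.{split, arrays_zip, explode}

// parse the plain strings into array<string> columns first (the delimiter is an assumption)
val parsed = df
  .withColumn("balancename", split($"balancename", ","))
  .withColumn("chargedvalue", split($"chargedvalue", ","))
  .withColumn("newbalance", split($"newbalance", ","))

// now arrays_zip and explode apply
val result = parsed
  .withColumn("vars", explode(arrays_zip($"balancename", $"chargedvalue", $"newbalance")))
  .select($"owningcustomerid", $"event_stoptime", $"vars.*")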