Dataframe schema:
| main_id|         id|     createdBy|
+--------+-----------+--------------+
|       1| [10,20,30]| [999,888,777]|
|       2|       [30]|         [666]|
Expected dataframe schema:
| main_id| id| createdBy|
+--------+---+----------+
|       1| 10|       999|
|       1| 20|       888|
|       1| 30|       777|
|       2| 30|       666|
Code_1 attempt:
df.select($"main_id", explode($"id"), $"createdBy").select($"main_id", $"id", explode($"createdBy"))
This leads to incorrect pairing and duplicate rows (each element of one array gets combined with every element of the other). Any suggestion on what I should adjust to get the desired output?
I also tried using multiple explodes in the first select statement, which throws an error.
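For reference, Spark only allows one generator (such as explode) per select clause, which is why the multiple-explode attempt fails. One way around this, pairing elements by their position without a UDF, is posexplode (available since Spark 2.1). A minimal sketch, assuming the column names and schema from the question:

```scala
import org.apache.spark.sql.functions.posexplode

// Explode each array together with its element position,
// then join on (main_id, pos) so elements are paired by index.
val ids = df.select($"main_id", posexplode($"id").as(Seq("pos", "id")))
val creators = df.select($"main_id", posexplode($"createdBy").as(Seq("pos", "createdBy")))
val result = ids.join(creators, Seq("main_id", "pos")).drop("pos")
```

This assumes both arrays in a row have the same length; rows where the lengths differ would lose the unmatched tail elements with an inner join.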
Code_2 attempt:
import org.apache.spark.sql.functions.{udf, explode}

val zip = udf((xs: Seq[String], ys: Seq[String]) => xs.zip(ys))

df.withColumn("vars", explode(zip($"id", $"createdBy"))).select(
  $"main_id",
  $"vars._1".alias("varA"), $"vars._2".alias("varB")).show(1)
Warning and error:
warning: there was one deprecation warning; re-run with -deprecation for details
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
in stage 564.0 failed 4 times, most recent failure: Lost task 0.3 in
stage 564.0 (TID 11570, ma4-csxp-ldn1015.corp.apple.com, executor 288)
Yes, I have already asked the same question; it was marked as a duplicate pointing to another solution, which is what I tried in code snippet 2. That did not work either. Any suggestions would be very helpful.
Answer (score: 1):
Perhaps the following can help:
import org.apache.spark.sql.functions.{explode, monotonically_increasing_id}

// Explode each array column separately, keeping main_id alongside it
val x = someDF.withColumn("createdByExploded", explode(someDF("createdBy"))).select("createdByExploded", "main_id")
val y = someDF.withColumn("idExploded", explode(someDF("id"))).select("idExploded", "main_id")

// Attach a surrogate row index to each half, then join the halves on it
val xInd = x.withColumn("index", monotonically_increasing_id)
val yInd = y.withColumn("index", monotonically_increasing_id)

val joined = xInd.join(yInd, xInd("index") === yInd("index"), "outer").drop("index")
https://forums.databricks.com/questions/8180/how-to-merge-two-data-frames-column-wise-in-apache.html
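If Spark 2.4 or later is available, the built-in arrays_zip function achieves the same pairing without a UDF or an index join. A hedged sketch, assuming the dataframe and column names from the question (the struct field names produced by arrays_zip follow the input column names):

```scala
import org.apache.spark.sql.functions.{arrays_zip, explode}

// Zip the two arrays element-by-element into an array of structs,
// explode once, then pull the struct fields out as plain columns.
val result = df
  .withColumn("pair", explode(arrays_zip($"id", $"createdBy")))
  .select($"main_id", $"pair.id".as("id"), $"pair.createdBy".as("createdBy"))
```

Unlike the monotonically_increasing_id approach above, this keeps each zipped pair tied to its original row, so it does not depend on row ordering across partitions.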