How to append a string column to an array-of-strings column in Scala Spark without using a UDF?

Time: 2019-05-18 15:35:22

Tags: scala apache-spark

I have a table with a column containing an array, like this:

Student_ID | Subject_List        | New_Subject
1          | [Mat, Phy, Eng]     | Chem

I want to append the new subject to the subject list and get the new list.

Creating the dataframe:

val df = sc.parallelize(Seq((1, Array("Mat", "Phy", "Eng"), "Chem"))).toDF("Student_ID","Subject_List","New_Subject")

I have tried it with a UDF as follows:

import org.apache.spark.sql.functions.udf

def append_list = (arr: Seq[String], s: String) => {
  arr :+ s
}

val append_list_UDF = udf(append_list)

val df_new = df.withColumn("New_List", append_list_UDF($"Subject_List", $"New_Subject"))

With the UDF, I get the desired output:

Student_ID | Subject_List        | New_Subject | New_List
1          | [Mat, Phy, Eng]     | Chem        | [Mat, Phy, Eng, Chem]

Can we do this without a UDF? Thanks.

2 Answers:

Answer 0 (score: 2)

In Spark 2.4 or later, a combination of array and concat should do the trick:

import org.apache.spark.sql.functions.{array, concat}
import org.apache.spark.sql.Column

def append(arr: Column, col: Column) = concat(arr, array(col))

df.withColumn("New_List", append($"Subject_List",$"New_Subject")).show
+----------+---------------+-----------+--------------------+                   
|Student_ID|   Subject_List|New_Subject|            New_List|
+----------+---------------+-----------+--------------------+
|         1|[Mat, Phy, Eng]|       Chem|[Mat, Phy, Eng, C...|
+----------+---------------+-----------+--------------------+

But I wouldn't expect any noticeable performance gain here.
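For intuition, the concat/array combination above is just a per-row list append. A minimal plain-Scala sketch of the same operation (hypothetical values, no SparkSession needed):

```scala
// What concat(Subject_List, array(New_Subject)) computes for a single row,
// expressed over ordinary Scala collections (values are hypothetical).
val subjectList = Seq("Mat", "Phy", "Eng")
val newSubject  = "Chem"

// array(col) wraps the scalar in a one-element array; concat joins the two.
val newList = subjectList ++ Seq(newSubject)

println(newList)  // List(Mat, Phy, Eng, Chem)
```

Spark applies this element-wise logic natively per row, which is why no UDF (and no serialization of rows into the JVM closure) is needed.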

Answer 1 (score: -1)

import org.apache.spark.sql.functions.{collect_list, explode}

val df = Seq((1, Array("Mat", "Phy", "Eng"), "Chem"),
  (2, Array("Hindi", "Bio", "Eng"), "IoT"),
  (3, Array("Python", "R", "scala"), "C")).toDF("Student_ID", "Subject_List", "New_Subject")
df.show(false)

val final_df = df.withColumn("exploded", explode($"Subject_List"))
  .select($"Student_ID", $"exploded")
  .union(df.select($"Student_ID", $"New_Subject"))
  .groupBy($"Student_ID")
  .agg(collect_list($"exploded") as "Your_New_List")
final_df.show(false)
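The explode/union/groupBy pipeline above can be sketched with ordinary Scala collections to see what it does per student (rows are hypothetical; note that, unlike this local sketch, Spark's collect_list after a shuffle gives no ordering guarantee, so the rebuilt list may not preserve the original element order):

```scala
// Spark-free sketch of the explode + union + groupBy approach.
// Rows are (Student_ID, subject); values are hypothetical.
val exploded = Seq((1, "Mat"), (1, "Phy"), (1, "Eng"))  // explode(Subject_List)
val newRows  = Seq((1, "Chem"))                         // New_Subject as extra rows
val combined = exploded ++ newRows                      // union

// Group by student and collect the subjects back into one list per student.
val grouped = combined.groupBy(_._1).map { case (id, rows) => id -> rows.map(_._2) }

println(grouped(1))  // List(Mat, Phy, Eng, Chem)
```

This also shows why the first answer is preferable: appending in place avoids the explode/shuffle round trip and keeps element order deterministic.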