How to call a UDF with multiple parameter lists (currying) in Spark SQL?

Asked: 2018-06-18 04:26:55

Tags: scala apache-spark

How do I call the UDF below, which takes multiple parameter lists (currying), on a Spark DataFrame?

Read the file and collect a List[String]:

// Note: textFile already yields one record per line, so the flatMap/split("\n") is effectively a no-op.
val data = sc.textFile("file.csv").flatMap(line => line.split("\n")).collect.toList

Register the UDF:

val getValue = udf(Udfnc.getVal(_: Int, _: String, _: String)(_: List[String]))
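
For context, Udfnc.getVal is not shown in the question; the registration above assumes a curried method roughly of this shape (the body here is purely illustrative):

object Udfnc {
  // Hypothetical body; only the curried signature matters for the question.
  def getVal(id: Int, s1: String, s2: String)(lookup: List[String]): String =
    s"${id}_${s1}_${s2}_${lookup.mkString("_")}"
}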

Call the UDF on the DataFrame:

df.withColumn("value",
     getValue(df("id"),
        df("string1"),
        df("string2"))).show()

The List[String] argument is missing here, and I am not sure how to pass it.
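
For reference, the usual fix (the first answer below uses the same idea) is to bind the List[String] on the driver before creating the UDF, so the UDF itself takes only column arguments. A minimal sketch, assuming data fits in driver memory:

// Close over the driver-side list; udf then wraps a plain 3-argument function.
val getValue = udf { (id: Int, s1: String, s2: String) =>
  Udfnc.getVal(id, s1, s2)(data)
}

df.withColumn("value", getValue(df("id"), df("string1"), df("string2"))).show()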

2 Answers:

Answer 0 (score: 1)

Based on your question, I can make the following assumptions about your requirements:

a] The UDF should accept parameters other than DataFrame columns

b] The UDF should take multiple columns as parameters

Assuming you want to concatenate the values from all the columns together with the supplied parameter, here is how you can do it:

import org.apache.spark.sql.functions._

def uDF(strList: List[String]) = udf[String, Int, String, String] {
  (value1: Int, value2: String, value3: String) =>
    value1.toString + "_" + value2 + "_" + value3 + "_" + strList.mkString("_")
}

val df = spark.sparkContext.parallelize(Seq((1,"r1c1","r1c2"),(2,"r2c1","r2c2"))).toDF("id","str1","str2")

scala> df.show
+---+----+----+
| id|str1|str2|
+---+----+----+
|  1|r1c1|r1c2|
|  2|r2c1|r2c2|
+---+----+----+

val dummyList = List("dummy1","dummy2")
val result = df.withColumn("new_col", uDF(dummyList)(df("id"),df("str1"),df("str2")))

scala> result.show(2, false)
+---+----+----+-------------------------+
|id |str1|str2|new_col                  |
+---+----+----+-------------------------+
|1  |r1c1|r1c2|1_r1c1_r1c2_dummy1_dummy2|
|2  |r2c1|r2c2|2_r2c1_r2c2_dummy1_dummy2|
+---+----+----+-------------------------+
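
If you would rather pass the list through Spark itself instead of closing over it, a literal array column via typedLit should also work (Spark 2.2+). A sketch; note the UDF then receives the list as a Seq[String]:

import org.apache.spark.sql.functions.typedLit

// Variant: the list travels as an array column, so the UDF sees a Seq[String].
val uDF4 = udf((value1: Int, value2: String, value3: String, strList: Seq[String]) =>
  s"${value1}_${value2}_${value3}_${strList.mkString("_")}")

df.withColumn("new_col", uDF4(df("id"), df("str1"), df("str2"), typedLit(dummyList))).show(false)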

Answer 1 (score: 1)

Define a UDF with multiple parameters:

val enrichUDF: UserDefinedFunction = udf((jsonData: String, id: Long) => {
  // Patch the string at its last '}', replacing the tail with an injected
  // site_refresh_stats_id field.
  val lastOccurrence = jsonData.lastIndexOf('}')
  val sid = ",\"site_refresh_stats_id\":" + id + " }]"
  jsonData.patch(lastOccurrence, sid, sid.length)
})

Call the UDF on an existing DataFrame:

val enrichedDF = EXISTING_DF
  .withColumn("enriched_column",
    enrichUDF(col("jsonData"), col("id")))

The following imports are also required:

import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions.{col, udf}
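
If the UDF also needs to be callable from SQL text (as the question title suggests), it can additionally be registered by name. A sketch, assuming Spark 2.2+, where register accepts a UserDefinedFunction; the names "enrich" and "events" here are arbitrary:

// Register the UDF under a name usable in SQL strings.
spark.udf.register("enrich", enrichUDF)

EXISTING_DF.createOrReplaceTempView("events")
spark.sql("SELECT enrich(jsonData, id) AS enriched_column FROM events").show()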