Spark UDF with non-Column parameters

Asked: 2016-12-20 12:58:58

Tags: scala apache-spark apache-spark-sql user-defined-functions udf

I want to pass a variable, rather than a column, to a UDF in Spark.

The map has the format described in Spark dataframe to nested map:

val joinUDF = udf((replacementLookup: Map[String, Double], newValue: String) => {
  replacementLookup.get(newValue) match {
    case Some(tt) => tt
    case None => 0.0
  }
})

which should be applied over the columns like this:

(columnsMap).foldLeft(df) {
  (currentDF, colName) => {
    println(colName._1)
    println(colName._2)
    currentDF
      .withColumn("myColumn_" + colName._1, joinUDF(colName._2, col(colName._1)))
  }
}

but this throws:

type mismatch;
[error]  found   : Map
[error]  required: org.apache.spark.sql.Column
[error]           .withColumn("myColumn_" + colName._1, joinUDF(colName._2, col(colName._1)))

2 Answers:

Answer 0 (score: 3)

If you want to pass a literal to a UDF, use org.apache.spark.sql.functions.lit,

i.e. use joinUDF(lit(colName._2), col(colName._1)).

However, Maps are not supported as literals, so you have to rewrite your code, e.g. by applying the Map argument before creating the udf:

val joinFunction = (replacementLookup: Map[String, Double], newValue: String) => {
  replacementLookup.get(newValue) match {
    case Some(tt) => tt
    case None => 0.0
  }
}

(columnsMap).foldLeft(df) {
  (currentDF, colName) => {
    val joinUDF = udf(joinFunction(colName._2, _: String))
    currentDF
      .withColumn("myColumn_" + colName._1, joinUDF(col(colName._1)))
  }
}
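The key step is the partial application joinFunction(colName._2, _: String), which fixes the Map argument and leaves a one-argument String => Double for udf to wrap. That mechanism can be seen in plain Scala without Spark; the lookup table below is illustrative, not from the question:

```scala
// Two-argument function, as in the answer above.
val joinFunction = (replacementLookup: Map[String, Double], newValue: String) =>
  replacementLookup.get(newValue) match {
    case Some(tt) => tt
    case None => 0.0
  }

val lookupTable = Map("x" -> 2.0) // illustrative data

// Partially apply the Map: what remains is a one-argument function,
// which is the shape udf expects.
val oneArg: String => Double = joinFunction(lookupTable, _: String)

println(oneArg("x")) // 2.0
println(oneArg("y")) // 0.0 (missing key falls back to the default)
```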

Answer 1 (score: 3)

You can use currying:

import org.apache.spark.sql.functions._
val df = Seq(("a", 1), ("b", 2)).toDF("StringColumn", "IntColumn")

def joinUDF(replacementLookup: Map[String, Double]) = udf((newValue: String) => {
  replacementLookup.get(newValue) match {
    case Some(tt) => tt
    case None => 0.0
  }
})

val myMap = Map("a" -> 1.5, "b" -> 3.0)

df.select(joinUDF(myMap)($"StringColumn")).show()
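The currying itself is independent of Spark: the first parameter list fixes the Map and returns a function of the remaining String argument, which udf then wraps. A minimal stdlib sketch (names are illustrative):

```scala
// Curried definition: applying the first parameter list yields
// a String => Double for the remaining argument.
def joinFn(replacementLookup: Map[String, Double])(newValue: String): Double =
  replacementLookup.get(newValue) match {
    case Some(tt) => tt
    case None => 0.0
  }

val myMap = Map("a" -> 1.5, "b" -> 3.0)
val forMyMap: String => Double = joinFn(myMap) // Map applied once, reusable

println(forMyMap("a")) // 1.5
println(forMyMap("c")) // 0.0
```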

Alternatively, you can try using a broadcast variable:

import org.apache.spark.sql.functions._
val df = Seq(("a", 1), ("b", 2)).toDF("StringColumn", "IntColumn")

val myMap = Map("a" -> 1.5, "b" -> 3.0)
val broadcastedMap = sc.broadcast(myMap)

def joinUDF = udf((newValue: String) => {
  broadcastedMap.value.get(newValue) match {
    case Some(tt) => tt
    case None => 0.0
  }
})

df.select(joinUDF($"StringColumn")).show()
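Incidentally, the Option match used throughout these snippets is equivalent to Map.getOrElse; a plain-Scala sketch with illustrative data:

```scala
val replacementLookup = Map("a" -> 1.5, "b" -> 3.0) // illustrative data

// The pattern match used in the answers above...
def viaMatch(newValue: String): Double =
  replacementLookup.get(newValue) match {
    case Some(tt) => tt
    case None => 0.0
  }

// ...is equivalent to getOrElse with a default.
def viaGetOrElse(newValue: String): Double =
  replacementLookup.getOrElse(newValue, 0.0)

println(viaMatch("a"))       // 1.5
println(viaGetOrElse("zzz")) // 0.0
```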