How to pass an Array[Seq[String]] to an Apache Spark UDF? (Error: Not Applicable)

Asked: 2016-05-19 18:06:03

Tags: scala apache-spark

I have the following Apache Spark UDF in Scala:

val myFunc = udf {
  (userBias: Float, otherBiases: Map[Long, Float],
    userFactors: Seq[Float], context: Seq[String]) => 
    var result = Float.NaN

    if (userFactors != null) {
      var contexBias = 0f

    for (cc <- context) {
      contexBias += otherBiases(contextMapping(cc))
    }

      // definition of result
      // ...
    }
    result
}

Now I want to pass arguments to this function, but because of the context argument I always get the message "Not Applicable". I know that a user-defined function takes its input row by row, and the function runs if I remove context... How can I fix this? Why doesn't it read rows from the Array[Seq[String]], i.e. from context? Alternatively, passing context as a DataFrame or something similar would also be acceptable.

// context is Array[Seq[String]]
val a = sc.parallelize(Seq((1,2),(3,4))).toDF("a", "b")
val context = a.collect.map(_.toSeq.map(_.toString))

// userBias("bias"), otherBias("biases") and userFactors("features")
// have a type Column, while userBias... are DataFrames
myDataframe.select(dataset("*"),
                   myFunc(userBias("bias"),
                          otherBias("biases"),
                          userFactors("features"),
                          context)
                   .as($(newCol)))

Update

I tried the solution indicated in zero323's answer, but there is still a small issue with context: Array[Seq[String]]. In particular, the problem is looping over this array with for (cc <- context) { contexBias += otherBiases(contextMapping(cc)) }: I should pass a String to contextMapping, not a Seq[String]:

  def myFunc(context: Array[Seq[String]]) = udf {
    (userBias: Float, otherBiases: Map[Long, Float],
     userFactors: Seq[Float]) =>
      var result = Float.NaN

      if (userFactors != null) {
        var contexBias = 0f
        for (cc <- context) {
          contexBias += otherBiases(contextMapping(cc))
        }

        // estimation of result

      }
      result
  }
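The loop problem itself is independent of Spark: flatten the Array[Seq[String]] (or nest the loops) so that each cc is a single String before it reaches contextMapping. Below is a minimal pure-Scala sketch; the values of contextMapping and otherBiases are made-up stand-ins, since the real ones come from the surrounding code:

```scala
object ContextBiasSketch {
  // stand-ins: the real contextMapping and otherBiases come from the question's code
  val contextMapping: Map[String, Long] = Map("morning" -> 1L, "home" -> 2L, "mobile" -> 3L)
  val otherBiases: Map[Long, Float] = Map(1L -> 0.1f, 2L -> 0.2f, 3L -> 0.3f)

  def contextBias(context: Array[Seq[String]]): Float = {
    var bias = 0f
    // flatten collapses Array[Seq[String]] into a single sequence of Strings,
    // so contextMapping receives a String, not a Seq[String]
    for (cc <- context.flatten) {
      bias += otherBiases(contextMapping(cc))
    }
    bias
  }
}
```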

Now I call it like this:

myDataframe.select(dataset("*"),
                   myFunc(context)(userBias("bias"),
                                   otherBias("biases"),
                                   userFactors("features"))
           .as($(newCol)))

1 answer:

Answer 0 (score: 1)

Spark 2.2+

You can use the typedLit function:

import org.apache.spark.sql.functions.typedLit

myFunc(..., typedLit(context))
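A minimal end-to-end sketch of the typedLit route (this assumes Spark 2.2+ on the classpath; the object name, column names, UDF body, and local-mode session are illustrative, not from the original post):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{typedLit, udf}

object TypedLitSketch {
  def run(): Float = {
    val spark = SparkSession.builder()
      .master("local[*]").appName("typedLit-demo").getOrCreate()
    import spark.implicits._

    val df = Seq(("u1", 0.5f)).toDF("user", "bias")

    // the constant Array[Seq[String]] we want to see inside the UDF
    val context: Array[Seq[String]] = Array(Seq("a", "b"), Seq("c"))

    // inside the UDF the literal arrives as an ordinary Seq[Seq[String]]
    val countCtx = udf { (bias: Float, ctx: Seq[Seq[String]]) => bias + ctx.flatten.size }

    val out = df.select(countCtx($"bias", typedLit(context)).as("r")).head.getFloat(0)
    spark.stop()
    out
  }
}
```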

Spark < 2.2

Any argument passed directly to a UDF has to be a Column, so if you want to pass a constant array, you have to convert it to a column literal:

import org.apache.spark.sql.functions.{array, lit}

val myFunc: org.apache.spark.sql.UserDefinedFunction = ???

myFunc(
  userBias("bias"),
  otherBias("biases"),
  userFactors("features"),
  // org.apache.spark.sql.Column
  array(context.map(xs => array(xs.map(lit _): _*)): _*)  
)

Non-Column objects can only be passed indirectly, using a closure, for example:

def myFunc(context: Array[Seq[String]]) = udf {
  (userBias: Float, otherBiases: Map[Long, Float],  userFactors: Seq[Float]) => 
    ???
}

myFunc(context)(userBias("bias"), otherBias("biases"), userFactors("features"))
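Putting the closure variant together as a runnable sketch (again with an illustrative contextMapping stub, made-up column values, and a local-mode session; only the myFunc(context)(...) call pattern comes from the answer itself):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object ClosureUdfSketch {
  // stand-in for the question's contextMapping
  val contextMapping: Map[String, Long] = Map("a" -> 1L, "b" -> 2L)

  // context is captured by the closure, so it never has to become a Column
  def myFunc(context: Array[Seq[String]]) = udf {
    (userBias: Float, otherBiases: Map[Long, Float]) =>
      userBias + context.flatten.map(cc => otherBiases(contextMapping(cc))).sum
  }

  def run(): Float = {
    val spark = SparkSession.builder()
      .master("local[*]").appName("closure-demo").getOrCreate()
    import spark.implicits._

    val df = Seq((0.5f, Map(1L -> 0.1f, 2L -> 0.2f))).toDF("bias", "biases")
    val out = df.select(myFunc(Array(Seq("a"), Seq("b")))($"bias", $"biases").as("r"))
      .head.getFloat(0)
    spark.stop()
    out
  }
}
```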