Scala Spark udf java.lang.UnsupportedOperationException

Asked: 2018-06-12 15:36:30

Tags: scala apache-spark

I created this curried function to check for null values of endDateStr inside a udf, code below (the type of column x is ArrayType[TimestampType]):

    import java.sql.Timestamp
    import org.apache.spark.sql.functions.{col, lit, udf}

    // Count every timestamp in the array; Option guards against a null input
    def _getCountAll(dates: Seq[Timestamp]) = Option(dates).map(_.length)
    // Count only the timestamps that are not after endDate
    def _getCountFiltered(endDate: Timestamp)(dates: Seq[Timestamp]) = Option(dates).map(_.count(!_.after(endDate)))

    val getCountUDF = udf((endDateStr: Option[String]) => {
      endDateStr match {
        case None => _getCountAll _
        case Some(value) => _getCountFiltered(Timestamp.valueOf(value + " 23:59:59")) _
      }
    })
    df.withColumn("distinct_dx_count", getCountUDF(lit("2009-09-10"))(col("x")))

But I get this exception when I run it:

    java.lang.UnsupportedOperationException: Schema for type Seq[java.sql.Timestamp] => Option[Int] is not supported

Can someone help me find my mistake?

1 Answer:

Answer 0 (score: 1)

You cannot curry a udf like this. The udf body above returns a function of type Seq[java.sql.Timestamp] => Option[Int] rather than a plain value, and Spark cannot derive a column schema for a function type, hence the exception. If you want curry-like behavior, you should return the udf from an outer function:

def getCountUDF(endDateStr: Option[String]) = udf {
  // the match is evaluated once, when the udf is built, so the udf wraps
  // a plain Seq[Timestamp] => Option[Int] function that Spark can encode
  endDateStr match {
    case None => _getCountAll _
    case Some(value) => 
      _getCountFiltered(Timestamp.valueOf(value + " 23:59:59")) _
  }
}

df.withColumn("distinct_dx_count", getCountUDF(Some("2009-09-10"))(col("x")))
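
The None branch then counts everything in the array; a sketch of that call (the output column name here is just illustrative):

df.withColumn("total_dx_count", getCountUDF(None)(col("x")))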

Otherwise just drop the currying and pass both arguments at once:

val getCountUDF = udf((endDateStr: String, dates: Seq[Timestamp]) => 
  endDateStr match {
    case null => _getCountAll(dates)
    case _ => 
      _getCountFiltered(Timestamp.valueOf(endDateStr + " 23:59:59"))(dates)
  }
)

df.withColumn("distinct_dx_count", getCountUDF(lit("2009-09-10"), col("x")))
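
To hit the case null branch in this variant, you would pass a typed null literal instead of a date string; a sketch (column name again illustrative):

df.withColumn("total_dx_count", getCountUDF(lit(null).cast("string"), col("x")))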