Spark UDAF generic type mismatch

Asked: 2017-09-28 16:59:54

Tags: scala apache-spark user-defined-aggregate

I am trying to create a UDAF on Spark (2.0.1, Scala 2.11) as shown below. It basically aggregates (key, value) tuples and outputs a Map.

import org.apache.spark.sql.expressions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.{Row, Column}

class mySumToMap[K, V](keyType: DataType, valueType: DataType) extends UserDefinedAggregateFunction {
  override def inputSchema = new StructType()
    .add("a_key", keyType)
    .add("a_value", valueType)

  override def bufferSchema = new StructType()
    .add("buffer_map", MapType(keyType, valueType))

  override def dataType = MapType(keyType, valueType)

  override def deterministic = true 

  override def initialize(buffer: MutableAggregationBuffer) = {
    buffer(0) = Map[K, V]()
  }

  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {

    // input :: 0 = a_key (k), 1 = a_value
    if ( !(input.isNullAt(0)) ) {

      val a_map = buffer(0).asInstanceOf[Map[K, V]]
      val k = input.getAs[K](0)  // get the value at position 0 of the input as type K (a_key)

      // I've split these on purpose to show that return values are all of type V
      val new_v1: V = a_map.getOrElse(k, 0.asInstanceOf[V])
      val new_v2: V = input.getAs[V](1)
      val new_v: V = new_v1 + new_v2

      buffer(0) = if (new_v != 0) a_map + (k -> new_v) else a_map - k
    }
  }

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
    val map1: Map[K, V] = buffer1(0).asInstanceOf[Map[K, V]]
    val map2: Map[K, V] = buffer2(0).asInstanceOf[Map[K, V]]

    buffer1(0) = map1 ++ map2.map{ case (k,v) => k -> (v + map1.getOrElse(k, 0.asInstanceOf[V])) }
  }

  override def evaluate(buffer: Row) = buffer(0).asInstanceOf[Map[K, V]]

}

But when I compile it, I see the following errors:

<console>:74: error: type mismatch;
 found   : V
 required: String
             val new_v: V = new_v1 + new_v2
                                     ^
<console>:84: error: type mismatch;
 found   : V
 required: String
           buffer1(0) = map1 ++ map2.map{ case (k,v) => k -> (v + map1.getOrElse(k, 0.asInstanceOf[V])) }

What am I doing wrong?

EDIT: To those who marked this as a duplicate of Spark UDAF - using generics as input type? - this is not a duplicate of that question, because that question does not deal with the Map data type. The code above is quite specific and complete about the problem faced when using the Map data type.

1 Answer:

Answer 0 (score: 2):

Restrict the types to those having a Numeric[_]. (With an unbounded V, the only + the compiler can find for new_v1 + new_v2 is String concatenation via Predef's implicit any2stringadd, which is why the errors say "required: String".)

class mySumToMap[K, V: Numeric](keyType: DataType, valueType: DataType) 
  extends UserDefinedAggregateFunction { ...

Use implicitly to get the Numeric instance at runtime:

val n = implicitly[Numeric[V]]

and use its plus method in place of + and its zero in place of 0:

buffer1(0) = map1 ++ map2.map { case (k, v) =>
  k -> n.plus(v, map1.getOrElse(k, n.zero))
}
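For reference, here is a minimal, self-contained sketch of the Numeric-based version assembled from the snippets above, plus a hypothetical usage example. The DataFrame df, the grouping column id, and the output column name key_value_map are illustrative assumptions, not part of the question:

import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

class mySumToMap[K, V: Numeric](keyType: DataType, valueType: DataType)
    extends UserDefinedAggregateFunction {

  // The Numeric instance supplies plus and zero for the otherwise unbounded V.
  private val n = implicitly[Numeric[V]]

  override def inputSchema = new StructType()
    .add("a_key", keyType)
    .add("a_value", valueType)

  override def bufferSchema = new StructType()
    .add("buffer_map", MapType(keyType, valueType))

  override def dataType = MapType(keyType, valueType)

  override def deterministic = true

  override def initialize(buffer: MutableAggregationBuffer) = {
    buffer(0) = Map[K, V]()
  }

  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if (!input.isNullAt(0)) {
      val a_map = buffer(0).asInstanceOf[Map[K, V]]
      val k = input.getAs[K](0)
      // n.plus and n.zero replace the + and 0 that failed to compile.
      val new_v = n.plus(a_map.getOrElse(k, n.zero), input.getAs[V](1))
      buffer(0) = if (new_v != n.zero) a_map + (k -> new_v) else a_map - k
    }
  }

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
    val map1 = buffer1(0).asInstanceOf[Map[K, V]]
    val map2 = buffer2(0).asInstanceOf[Map[K, V]]
    buffer1(0) = map1 ++ map2.map { case (k, v) =>
      k -> n.plus(v, map1.getOrElse(k, n.zero))
    }
  }

  override def evaluate(buffer: Row) = buffer(0).asInstanceOf[Map[K, V]]
}

// Hypothetical usage: collect a_value sums per a_key into one Map per group.
val sumToMap = new mySumToMap[String, Double](StringType, DoubleType)
df.groupBy(col("id"))
  .agg(sumToMap(col("a_key"), col("a_value")).as("key_value_map"))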

To support a wider set of types, you can use Monoid from cats and adjust the code:

import cats._
import cats.implicits._

and later:

class mySumToMap[K, V: Monoid](keyType: DataType, valueType: DataType) 
  extends UserDefinedAggregateFunction {
    ...
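With the Monoid bound, cats' combine (or the |+| operator brought in by cats.implicits) would then stand in for n.plus, and Monoid[V].empty for n.zero. A sketch of the adjusted merge line under that assumption:

buffer1(0) = map1 ++ map2.map { case (k, v) =>
  // |+| and Monoid[V].empty are the cats counterparts of n.plus and n.zero
  k -> (v |+| map1.getOrElse(k, Monoid[V].empty))
}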