如何编写一个简单的行集合的Spark UDAF?

时间:2017-02-21 01:15:53

标签: scala apache-spark apache-spark-sql

根据我的具体要求,我想编写一个UDAF,它只收集所有输入行。

输入是两列行,Double Type;

中级架构,"我想",是ArrayList(如果我错了就纠正我)

返回的数据类型是ArrayList

我写了一个"想法"我的UDAF,但我希望有人帮我完成它。

class CollectorUDAF() extends UserDefinedAggregateFunction {

  // Input Data Type Schema
  def inputSchema: StructType = StructType(Array(StructField("value", DoubleType), StructField("y", DoubleType)))

  // Intermediate Schema
  def bufferSchema = util.ArrayList[Array(StructField("value", DoubleType), StructField("y", DoubleType)]

  // Returned Data Type .
  def dataType: DataType = util.ArrayList[Array(StructField("value", DoubleType), StructField("y", DoubleType)]

  // Self-explaining
  def deterministic = true

  // This function is called whenever key changes
  def initialize(buffer: MutableAggregationBuffer) = {

  }

  // Iterate over each entry of a group
  def update(buffer: MutableAggregationBuffer, input: Row) = {


  }

  // Called after all the entries are exhausted.
  def evaluate(buffer: Row) = {

  }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {

  }

}

1 个答案:

答案 0 :(得分:5)

如果我理解你的问题是正确的,那么以下是你的解决方案:

ArrayList val=new ArrayList(); //Supported at 1.4