Unable to execute a user-defined function when aggregating dataframe groups by user

Date: 2018-05-17 15:06:09

Tags: scala apache-spark dataframe apache-spark-sql user-defined-functions

I have a dataframe like the one below, and I am trying to get the maximum (of the summed values) for each name group.

+-----+-----------------------------+
|name |nt_set                       |
+-----+-----------------------------+
|Bob  |[av:27.0, bcd:29.0, abc:25.0]|
|Alice|[abc:95.0, bcd:55.0]         |
|Bob  |[abc:95.0, bcd:70.0]         |
|Alice|[abc:125.0, bcd:90.0]        |
+-----+-----------------------------+

Below is the UDF I am using to get the maximum (sum) per user:

val maxfunc = udf((arr: Array[String]) => {
  val step1 = arr.map(x => (x.split(":", -1)(0), x.split(":", -1)(1)))
    .groupBy(_._1).mapValues(arr => arr.map(_._2.toInt).sum).maxBy(_._2)
  val result = step1._1 + ":" + step1._2
  result
})

When I run the UDF, it throws the following error:

 val c6 = c5.withColumn("max_nt", maxfunc(col("nt_set"))).show(false)
  

Error: Failed to execute user defined function ($anonfun$1: (array<string>) => string)

How can I implement this in a better way, since I need to do this on a much larger dataset?

The expected result is:

expected result:
+-----+-----------------------------+
|name |max_nt                       |
+-----+-----------------------------+
|Bob  |abc:120.0                    |
|Alice|abc:220.0                    |
+-----+-----------------------------+

2 answers:

Answer 0 (score: 1)

From my understanding of what you are trying to do, your example is wrong: Alice's bcd fields sum to 145, while her abc fields sum to 220, so abc should be selected for her as well. If that is not the case, then I have misunderstood your question.
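
For reference, summing the sample rows per name and key gives:

Bob:   av = 27.0,  bcd = 29.0 + 70.0 = 99.0,   abc = 25.0 + 95.0  = 120.0
Alice: bcd = 55.0 + 90.0 = 145.0,              abc = 95.0 + 125.0 = 220.0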

In any case, you do not need a UDF to do what you want. Let's generate your data:

val df = sc.parallelize(Seq(
    ("Bob", Array("av:27.0", "bcd:29.0", "abc:25.0")), 
    ("Alice", Array("abc:95.0", "bcd:55.0")), 
    ("Bob", Array("abc:95.0", "bcd:70.0")), 
    ("Alice", Array("abc:125.0", "bcd:90.0"))) )
        .toDF("name", "nt_set")

Then, one way is to explode nt_set into a column nt that contains only one string/value pair per row:

df.withColumn("nt", explode('nt_set))
  //then we split the string and the value
  .withColumn("nt_string", split('nt, ":")(0))
  .withColumn("nt_value", split('nt, ":")(1).cast("int"))
  //then we sum the values by name and "string"
  .groupBy("name", "nt_string")
  .agg(sum('nt_value) as "nt_value")
  /* then we build a struct with the value first to be able to select
     the nt field with max value while keeping the corresponding string */
  .withColumn("nt", struct('nt_value, 'nt_string))
  .groupBy("name")
  .agg(max('nt) as "nt")
  // And we rebuild the "nt" column.
  .withColumn("max_nt", concat_ws(":", $"nt.nt_string", $"nt.nt_value"))
  .drop("nt").show(false)

+-----+-------+
|name |max_nt |
+-----+-------+
|Bob  |abc:120|
|Alice|abc:220|
+-----+-------+
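
Note that the values print as 120 and 220 rather than the 120.0 and 220.0 of the expected output, because nt_value was cast to int. If the decimal part matters, casting to double instead should be enough:

.withColumn("nt_value", split('nt, ":")(1).cast("double"))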

Answer 1 (score: 1)

The core logic of your maxfunc works, but it should handle the post-groupBy array column, which is a nested Seq collection:

val df = Seq(
  ("Bob", Seq("av:27.0", "bcd:29.0", "abc:25.0")),
  ("Alice", Seq("abc:95.0", "bcd:55.0")),
  ("Zack", Seq()),
  ("Bob", Seq("abc:50.0", null)),
  ("Bob", Seq("abc:95.0", "bcd:70.0")),
  ("Alice", Seq("abc:125.0", "bcd:90.0"))
).toDF("name", "nt_set")

import org.apache.spark.sql.functions._

val maxfunc = udf( (ss: Seq[Seq[String]]) => {
  // Flatten the collected nested Seq, skip nulls, split each "key:value"
  // string, then sum the values per key.
  val groupedSeq: Map[String, Double] = ss.flatMap(identity).
    collect{ case x if x != null => (x.split(":")(0), x.split(":")(1)) }.
    groupBy(_._1).mapValues(_.map(_._2.toDouble).sum)

  // Names whose collected sets are all empty (e.g. Zack) get a sentinel pair
  // instead of failing on maxBy of an empty Map.
  groupedSeq match {
    case x if x == Map.empty[String, Double] => ("", -999.0)
    case _ => groupedSeq.maxBy(_._2)
  }
} )

df.groupBy("name").agg(collect_list("nt_set").as("arr_nt")).
  withColumn("max_nt", maxfunc($"arr_nt")).
  select($"name", $"max_nt._1".as("max_key"), $"max_nt._2".as("max_val")).
  show
// +-----+-------+-------+
// | name|max_key|max_val|
// +-----+-------+-------+
// | Zack|       | -999.0|
// |  Bob|    abc|  170.0|
// |Alice|    abc|  220.0|
// +-----+-------+-------+
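
If you prefer the single key:value string of the expected output over two columns, the tuple returned by the UDF (exposed as a struct with fields _1 and _2) can be glued back together with concat_ws; a sketch along the same lines as the select above:

df.groupBy("name").agg(collect_list("nt_set").as("arr_nt")).
  withColumn("max_nt", maxfunc($"arr_nt")).
  select($"name", concat_ws(":", $"max_nt._1", $"max_nt._2").as("max_nt")).
  show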