How do I get the most frequent non-null value in a column?

Asked: 2018-09-07 09:25:37

Tags: scala apache-spark apache-spark-sql

I have the following DataFrame df:

+-----+-------------+--------+--------------------+
|   id|         name|    type|                 url|
+-----+-------------+--------+--------------------+
|    1|      NT Note|    aaaa|                null|
|    1|      NT Note|    aaaa|http://www.teleab...|
|    1|      NT Note|    aaaa|http://www.teleab...|
|    1|      NT Note|    aaaa|                null|
|    1|      NT Note|    aaaa|                null|
|    2|          ABC|    bbbb|                null|
|    2|          ABC|    bbbb|                null|
|    2|          ABC|    bbbb|                null|
|    2|          ABC|    bbbb|                null|
+-----+-------------+--------+--------------------+
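
For reference, the sample data can be rebuilt roughly like this (a sketch; the truncated urls are stand-ins for the real values, and spark is assumed to be an active SparkSession):

import spark.implicits._

// Build the sample DataFrame; nulls stand in for missing urls
val df = Seq[(Int, String, String, String)](
  (1, "NT Note", "aaaa", null),
  (1, "NT Note", "aaaa", "http://www.teleab..."),
  (1, "NT Note", "aaaa", "http://www.teleab..."),
  (1, "NT Note", "aaaa", null),
  (1, "NT Note", "aaaa", null),
  (2, "ABC", "bbbb", null),
  (2, "ABC", "bbbb", null),
  (2, "ABC", "bbbb", null),
  (2, "ABC", "bbbb", null)
).toDF("id", "name", "type", "url")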

I am assigning the most frequent url and type values to each id:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._

def windowSpec = Window.partitionBy("id", "url", "type")

val result = df.withColumn("count", count("url").over(windowSpec))
  .orderBy($"count".desc)
  .groupBy("id")
  .agg(
    first("url").as("URL"),
    first("type").as("Typel")
  )

But I actually need to prioritize the most frequent non-null url, to get the following result:

+-----+-------------+--------+--------------------+
|   id|         name|    type|                 url|
+-----+-------------+--------+--------------------+
|    1|      NT Note|    aaaa|http://www.teleab...|
|    2|          ABC|    bbbb|                null|
+-----+-------------+--------+--------------------+

Right now, however, I get null as the url for id 1 as well, because the null url records are more frequent for that id.

1 Answer:

Answer 0 (score: 1)

You can do this with a UDF:

import org.apache.spark.sql.functions._
import scala.collection.mutable.WrappedArray

// Function to return the most frequent non-null url (or null if all urls are null)

def mfnURL(arr: WrappedArray[String]): String = {
  val filterArr = arr.filterNot(_ == null)
  if (filterArr.isEmpty) null
  else filterArr.groupBy(identity).maxBy(_._2.size)._1
}
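
As a quick sanity check on its own (hypothetical inputs; the Array arguments are implicitly wrapped to WrappedArray):

mfnURL(Array("a", null, "a", "b"))   // returns "a"
mfnURL(Array[String](null, null))    // returns null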

// Wrapping mfnURL as a UDF

val mfnURLUDF = udf(mfnURL _)

// Applying groupBy, agg and the UDF

df.groupBy("id", "name", "type").agg(mfnURLUDF(collect_list("url")).alias("url")).show

//Sample output

+---+-------+----+--------------------+
| id|   name|type|                 url|
+---+-------+----+--------------------+
|  2|    ABC|bbbb|                null|
|  1|NT Note|aaaa|http://www.teleab...|
+---+-------+----+--------------------+
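
As a side note, the same result should be reachable without a UDF. Below is a minimal sketch of my own (not part of the original answer) that counts url frequencies per group and ranks non-null urls ahead of nulls:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._

// Count occurrences of each (id, name, type, url) combination
val counted = df.groupBy("id", "name", "type", "url").agg(count(lit(1)).as("cnt"))

// Per id, order rows so non-null urls come first, most frequent first
// (ascending on isNull puts false, i.e. non-null, before true)
val w = Window.partitionBy("id").orderBy($"url".isNull.asc, $"cnt".desc)

// Keep only the top-ranked row per id
counted.withColumn("rn", row_number().over(w))
  .filter($"rn" === 1)
  .drop("rn", "cnt")
  .show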