Spark DataFrame / Dataset: efficiently find the most common value for each key

Time: 2017-11-14 16:26:48

Tags: scala apache-spark apache-spark-sql apache-spark-dataset

Question: I have a problem mapping each key to its most common value in Spark (using Scala). I have done it with an RDD, but I don't know how to do it efficiently with DF / DS (SparkSQL).

The dataset looks like

key1 = value_a
key1 = value_b
key1 = value_b
key2 = value_a
key2 = value_c
key2 = value_c
key3 = value_a

After the Spark transformations and action, the output should be the most common value for each key

Output

key1 = value_b
key2 = value_c
key3 = value_a

What I have tried:

RDD

I have tried mapping to ((key, value), count) pairs and reducing by key in an RDD, and the logic works, but I am unable to translate it into SparkSQL (DataFrame / Dataset), as I want minimal shuffling across the network.

Here is my RDD code

import org.apache.spark.{SparkConf, SparkContext}

val data = List(
  "key1,value_a",
  "key1,value_b",
  "key1,value_b",
  "key2,value_a",
  "key2,value_c",
  "key2,value_c",
  "key3,value_a"
)

val sparkConf = new SparkConf().setMaster("local").setAppName("example")
val sc = new SparkContext(sparkConf)

val lineRDD = sc.parallelize(data)

// split each comma-separated line into a (key, value) pair
val pairedRDD = lineRDD.map { line =>
  val fields = line.split(",")
  (fields(0), fields(1))
}

// pair each (key, value) with a count of 1 so occurrences can be summed
val flatPairsRDD = pairedRDD.map {
  case (key, value) => ((key, value), 1)
}

val SumRDD = flatPairsRDD.reduceByKey((a, b) => a + b)

// regroup by key and keep the value with the highest count
val resultsRDD = SumRDD.map {
  case ((key, value), count) => (key, (value, count))
}.groupByKey.map {
  case (key, valueList) => (key, valueList.toList.sortBy(_._2).reverse.head)
}

resultsRDD.collect().foreach(println)
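
On the sample data this should print something like the following (the ordering of the keys may vary):

(key1,(value_b,2))
(key2,(value_c,2))
(key3,(value_a,1))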

DataFrame, using Window: I am trying to use Window.partitionBy("key", "value") to aggregate the count over the window, and then sorting and agg() respectively.
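
For reference, a minimal sketch of what I am aiming for might look like this (assuming a DataFrame df with key and value columns; the row_number ranking is just one possible way to pick the top value, not necessarily the most shuffle-efficient):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val countWindow = Window.partitionBy("key", "value")               // count per (key, value) pair
val rankWindow  = Window.partitionBy("key").orderBy(desc("count")) // rank the values of a key by that count

val mostCommon = df
  .withColumn("count", count("value").over(countWindow))
  .withColumn("rank", row_number().over(rankWindow))
  .filter(col("rank") === 1)                                       // keep only the top-ranked value per key
  .select("key", "value")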

2 answers:

Answer 0: (score: 2)

From what I understood from your question, here is what you can do.

First, you have to read the data and convert it into a dataframe

val df = sc.textFile("path to the data file")    // read the file line by line
  .map(line => line.split("="))                  // split each line on "="
  .map(array => (array(0).trim, array(1).trim))  // build a (key, value) tuple
  .toDF("key", "value")                          // convert the RDD to a DataFrame; requires import sqlContext.implicits._

which should give

+----+-------+
|key |value  |
+----+-------+
|key1|value_a|
|key1|value_b|
|key1|value_b|
|key2|value_a|
|key2|value_c|
|key2|value_c|
|key3|value_a|
+----+-------+
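
If you keep the in-memory List from your question instead of reading a file, an equivalent sketch (assuming a SparkSession named spark is in scope for the implicits) would be:

import spark.implicits._

val df = data                                    // the comma-separated List from the question
  .map(_.split(","))
  .map(arr => (arr(0).trim, arr(1).trim))
  .toDF("key", "value")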

The next step is to count the repetitions of each value for every key and select the value with the highest repetition for each key, which can be done with a Window function and aggregations as below

import org.apache.spark.sql.expressions._                   // import the Window library
def windowSpec = Window.partitionBy("key", "value")         // define a window frame for the aggregation
import org.apache.spark.sql.functions._                     // import the built-in functions
df.withColumn("count", count("value").over(windowSpec))     // count the repetitions of each (key, value) pair in a new "count" column
  .orderBy($"count".desc)                                   // order the dataframe by count in descending order
  .groupBy("key")                                           // group by key
  .agg(first("value").as("value"))                          // take the first value of each key, i.e. the one with the highest count

Thus the final output should be

+----+-------+
|key |value  |
+----+-------+
|key3|value_a|
|key1|value_b|
|key2|value_c|
+----+-------+ 

Answer 1: (score: 0)

What about using groupBy?

val maxFreq = udf((values: Seq[String]) => {
  values.groupBy(identity).mapValues(_.size).maxBy(_._2)._1
})

df.groupBy("key")
  .agg(collect_list("value") as "valueList")
  .withColumn("mostFrequentValue", maxFreq(col("valueList")))