Sum by key based on RDD

Time: 2018-09-18 05:41:43

Tags: scala apache-spark reduce

I have a crime dataset covering 2001 to the present. I want to compute the number of crimes (no_of_crimes) for each year. The code I have tried is:

val inp = SparkConfig.spark.sparkContext.textFile("file:\\C:\\Users\\M1047320\\Desktop\\Crimes_-_2001_to_present.csv")
val header = inp.first()
val data   = inp.filter( line => line(0) != header(0))

val splitRDD = data.map( line =>{
  val temp = line.split(",(?![^\\(\\[]*[\\]\\)])")
  (temp(0),temp(1),temp(2),temp(3),temp(4),temp(5),
  temp(6),temp(7),temp(8),temp(9),temp(10),temp(11),
  temp(12),temp(13),temp(14),temp(15),temp(16),temp(17))
})

val crimesPerYear = splitRDD.map( line => (line._18,1)).reduceByKey(_+_) // line._18 represents the year column
crimesPerYear.take(20).foreach(println)

The expected result is:

(2001,54)
(2002,100)
(2003,24) and so on

But the result I am actually getting is:

 (1175860,1)
 (1176964,4)
 (1178665,123)
 (1171273,3)
 (1938926,1)
 (1141621,8)
 (1136278,2)

I am completely confused about what I am doing wrong. Why is the sum not grouped by year? Please help.

0 Answers:

No answers yet.
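
Since there is no answer yet, here is one plausible diagnosis, offered as a sketch rather than a confirmed fix. Two things stand out in the code above. First, line(0) != header(0) compares only the first character of each line with the first character of the header, so it does not reliably remove the header row. Second, the regex split assumes every row yields the same number of fields; on rows where a field contains a comma that is not inside parentheses or brackets, the columns shift, and temp(17) picks up a neighbouring column instead of Year. The values in the output (roughly 1.1 to 1.9 million) are consistent with the X/Y Coordinate columns of the Chicago "Crimes - 2001 to present" dataset, which suggests exactly such a shift. A more defensive version, assuming the standard 22-column layout of that dataset with Year at index 17, might look like this:

val inp    = SparkConfig.spark.sparkContext.textFile("file:\\C:\\Users\\M1047320\\Desktop\\Crimes_-_2001_to_present.csv")
val header = inp.first()

val crimesPerYear = inp
  .filter(_ != header)                                     // drop the header by whole-line comparison
  .map(_.split(",(?![^\\(\\[]*[\\]\\)])", -1))             // limit -1 keeps trailing empty fields
  .filter(f => f.length == 22 && f(17).matches("\\d{4}"))  // keep only rows whose field 17 is a 4-digit year
  .map(f => (f(17), 1))
  .reduceByKey(_ + _)

crimesPerYear.take(20).foreach(println)

Alternatively, since SparkConfig.spark looks like a SparkSession, letting Spark's CSV reader handle the quoting avoids the hand-written regex entirely: SparkConfig.spark.read.option("header", "true").csv(path).groupBy("Year").count() should produce the same per-year counts.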