Find the sum of duplicates in Spark

Date: 2016-09-13 05:28:29

Tags: scala apache-spark

Input:

Name1 Name2
arjun deshwal
nikhil choubey
anshul pandyal
arjun deshwal
arjun deshwal
deshwal arjun

Code used in Scala:

import org.apache.spark.sql.functions.{count, lit}

// read the CSV (spark-csv package), treating the first row as a header
val df = sqlContext.read.format("com.databricks.spark.csv")
                   .option("header", "true")
                   .load(FILE_PATH)
// count how many times each exact (Name1, Name2) pair occurs
val result = df.groupBy("Name1", "Name2")
               .agg(count(lit(1)).alias("cnt"))

Output obtained:

nikhil choubey 1
anshul pandyal 1
deshwal arjun 1
arjun deshwal 3

Required output (either of the following):

nikhil choubey 1
anshul pandyal 1
deshwal arjun 4

nikhil choubey 1
anshul pandyal 1
arjun deshwal 4

1 answer:

Answer 0 (score: 2)

I would handle this with a Set: a Set carries no ordering, so (arjun, deshwal) and (deshwal, arjun) build the same key, and only the contents are compared:

scala> val data = Array(
 |     ("arjun",   "deshwal"),
 |     ("nikhil",  "choubey"),
 |     ("anshul",  "pandyal"),
 |     ("arjun",   "deshwal"),
 |     ("arjun",   "deshwal"),
 |     ("deshwal", "arjun")
 | )
data: Array[(String, String)] = Array((arjun,deshwal), (nikhil,choubey), (anshul,pandyal), (arjun,deshwal), (arjun,deshwal), (deshwal,arjun))

scala> val distData = sc.parallelize(data)
distData: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[0] at parallelize at <console>:29

scala> val distDataSets = distData.map(tup => (Set(tup._1, tup._2), 1)).countByKey()
distDataSets: scala.collection.Map[scala.collection.immutable.Set[String],Long] = Map(Set(nikhil, choubey) -> 1, Set(arjun, deshwal) -> 4, Set(anshul, pandyal) -> 1)
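countByKey() returns a plain Map on the driver, so printing it in the required "name name count" form is an ordinary Scala loop:

distDataSets.foreach { case (names, cnt) => println(names.mkString(" ") + " " + cnt) }

Not part of the answer itself, but a sketch of an equivalent approach staying in the DataFrame API the question starts from: least and greatest (both in org.apache.spark.sql.functions since Spark 1.5) put the two names into a canonical order, so both spellings of a pair fall into the same group. It assumes the df loaded in the question above.

import org.apache.spark.sql.functions.{col, count, greatest, least, lit}

// canonical key: lexicographically smaller name first, so
// (arjun, deshwal) and (deshwal, arjun) land in the same group
val result = df.withColumn("a", least(col("Name1"), col("Name2")))
               .withColumn("b", greatest(col("Name1"), col("Name2")))
               .groupBy("a", "b")
               .agg(count(lit(1)).alias("cnt"))

result.show() should then report the arjun/deshwal pair with a count of 4, matching the required output.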

Hope this helps.