AggregateByKey method not working in Spark RDD

Asked: 2018-10-06 15:19:13

Tags: scala apache-spark apache-spark-sql rdd

Below is my sample data:

1,Siddhesh,43,32000
1,Siddhesh,12,4300
2,Devil,10,1000
2,Devil,10,3000
2,Devil,11,2000

I created a pair RDD to perform combineByKey and aggregateByKey operations. Below is my code:

val rd=sc.textFile("file:///home/cloudera/Desktop/details.txt").map(line=>line.split(",")).map(p=>((p(0).toString,p(1).toString),(p(3).toLong,p(2).toString.toInt)))  

Above, I paired the first two columns as the key and the remaining two columns as the value. Now I only want distinct values for the 3rd column on the value side of the tuple, which I was able to handle with combineByKey. Below is my code:

val reduced = rd.combineByKey(
      (x:(Long,Int))=>{(x._1,Set(x._2))},
      (x:(Long,Set[Int]),y:(Long,Int))=>(x._1+y._1,x._2+y._2),
      (x:(Long,Set[Int]),y:(Long,Set[Int]))=>{(x._1+y._1,x._2++y._2)}
      )  
scala> reduced.foreach(println)
((1,Siddhesh),(36300,Set(43, 12)))
((2,Devil),(6000,Set(10, 11)))

Now I map over it so that I can get the sum of the values along with the count of unique distinct values for each key.

scala> val newRdd=reduced.map(p=>(p._1._1,p._1._2,p._2._1,p._2._2.size))

scala> newRdd.foreach(println)
(1,Siddhesh,36300,2)
(2,Devil,6000,2)

Here, for 'Devil' the last value is 2 because the dataset has two records with the value 10 for 'Devil', and since I used a Set the duplicates were eliminated. So now I tried the same thing with aggregateByKey. Below is my code along with the error:

val rd=sc.textFile("file:///home/cloudera/Desktop/details.txt").map(line=>line.split(",")).map(p=>((p(0).toString,p(1).toString),(p(3).toString.toInt,p(2).toString.toInt)))    

I converted the value column from Long to Int because it was throwing an error on the '0' during initialization.

scala> val reducedByAggKey = rd.aggregateByKey((0,0))(
     |        (x:(Int,Set[Int]),y:(Int,Int))=>(x._1+y._1,x._2+y._2),
     |       (x:(Int,Set[Int]),y:(Int,Set[Int]))=>{(x._1+y._1,x._2++y._2)}
     | )
<console>:36: error: type mismatch;
 found   : scala.collection.immutable.Set[Int]
 required: Int
              (x:(Int,Set[Int]),y:(Int,Int))=>(x._1+y._1,x._2+y._2),
                                                             ^
<console>:37: error: type mismatch;
 found   : scala.collection.immutable.Set[Int]
 required: Int
             (x:(Int,Set[Int]),y:(Int,Set[Int]))=>{(x._1+y._1,x._2++y._2)}
                                                                  ^  

And as suggested by Leo, below is my code along with the error it gives:

    scala> val reduced = rdd.aggregateByKey((0, Set.empty[Int]))(
     |   (x: (Int, Set[Int]), y: (Int, Int)) => (x._1 + y._1, y._2+x._2),
     |   (x: (Int, Set[Int]), y: (Int, Set[Int])) => (x._1 + y._1, y._2++ x._2)
     | )
<console>:36: error: overloaded method value + with alternatives:
  (x: Double)Double <and>
  (x: Float)Float <and>
  (x: Long)Long <and>
  (x: Int)Int <and>
  (x: Char)Int <and>
  (x: Short)Int <and>
  (x: Byte)Int <and>
  (x: String)String
 cannot be applied to (Set[Int])
         (x: (Int, Set[Int]), y: (Int, Int)) => (x._1 + y._1, y._2+x._2),
                                                                  ^

So where am I messing up? Please correct me.

1 Answer:

Answer 0 (score: 1)

If I understand your requirement correctly, to get the total count rather than the distinct count, aggregate with a List instead of a Set. As for the problem with your aggregateByKey, it comes from the zeroValue having the wrong type: it should be (0, List.empty[Int]) (or (0, Set.empty[Int]) if you insist on using a Set):

// zeroValue: (running sum, values collected so far) for each key
val reduced = rdd.aggregateByKey((0, List.empty[Int]))(
  // seqOp: fold one (sum-column, value-column) pair into the accumulator
  (x: (Int, List[Int]), y: (Int, Int)) => (x._1 + y._1, y._2 :: x._2),
  // combOp: merge accumulators built on different partitions
  (x: (Int, List[Int]), y: (Int, List[Int])) => (x._1 + y._1, y._2 ::: x._2)
)

reduced.collect
// res1: Array[((String, String), (Int, List[Int]))] =
//   Array(((2,Devil),(6000,List(11, 10, 10))), ((1,Siddhesh),(36300,List(12, 43))))

val newRdd = reduced.map(p => (p._1._1, p._1._2, p._2._1, p._2._2.size))

newRdd.collect
// res2: Array[(String, String, Int, Int)] =
//   Array((2,Devil,6000,3), (1,Siddhesh,36300,2))
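
As a side note, the error you mentioned getting on the '0' when the sum column was still a Long is presumably the same kind of zeroValue mismatch: the literal 0 is an Int. A rough sketch that keeps the Long sum (assuming your original rd with (Long, Int) values) would just write the zero as 0L:

val reducedLong = rd.aggregateByKey((0L, List.empty[Int]))(
  // 0L makes the running sum a Long, matching the Long sum column in the values
  (x: (Long, List[Int]), y: (Long, Int)) => (x._1 + y._1, y._2 :: x._2),
  (x: (Long, List[Int]), y: (Long, List[Int])) => (x._1 + y._1, y._2 ::: x._2)
)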

Note that if you want the total count rather than the distinct count, the same change from Set to List would apply to your combineByKey code as well, as sketched below.
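
Roughly, again assuming your original rd with (Long, Int) values so the sum stays a Long:

val combined = rd.combineByKey(
  // createCombiner: start the (sum, values) accumulator from the first value of a key
  (v: (Long, Int)) => (v._1, List(v._2)),
  // mergeValue: fold another value of the same key into the accumulator
  (acc: (Long, List[Int]), v: (Long, Int)) => (acc._1 + v._1, v._2 :: acc._2),
  // mergeCombiners: merge accumulators built on different partitions
  (a: (Long, List[Int]), b: (Long, List[Int])) => (a._1 + b._1, a._2 ::: b._2)
)
// the list size is then the total count (3 for Devil) rather than the distinct count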

[UPDATE]

To get the distinct count, as discussed in the comments, simply stick with Set and keep the zeroValue as (0, Set.empty[Int]):

val reduced = rdd.aggregateByKey((0, Set.empty[Int]))(
  (x: (Int, Set[Int]), y: (Int, Int)) => (x._1 + y._1, x._2 + y._2),
  (x: (Int, Set[Int]), y: (Int, Set[Int])) => (x._1 + y._1, x._2 ++ y._2)
)

reduced.collect
// res3: Array[((String, String), (Int, scala.collection.immutable.Set[Int]))] =
//   Array(((2,Devil),(6000,Set(10, 11))), ((1,Siddhesh),(36300,Set(43, 12))))
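
Applying the same map as in your question then gives the distinct count per key; it should produce something like:

val newRdd = reduced.map(p => (p._1._1, p._1._2, p._2._1, p._2._2.size))

newRdd.collect
// expected: Array((2,Devil,6000,2), (1,Siddhesh,36300,2))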