Reducing a list of case classes to a count of case classes

Asked: 2016-10-11 14:53:28

Tags: scala apache-spark

I currently have a grouped RDD of the form ((id, code), (list of events keyed by id and code)). As shown below, the id is 000406106-01, the code is 496, and each event is a Diagnostic case class. What I would like to end up with is an RDD of the form ((id, code), count of events); essentially, I want to fold each CompactBuffer of Diagnostic objects down to the number of events it contains. Any suggestions?

    ID         CODE               EVENT1                                                     EVENT2
((000406106-01,496),CompactBuffer(Diagnostic(000406106-01,Sun Apr 16 02:24:00 UTC 2006,496), Diagnostic(000406106-01,Fri Jul 20 15:30:00 UTC 2012,496), Diagnostic(000406106-01,Tue Dec 23 17:00:00 UTC 2014,496), Diagnostic(000406106-01,Wed Jan 06 20:45:00 UTC 2010,496), Diagnostic(000406106-01,Fri Mar 04 16:30:00 UTC 2011,496), Diagnostic(000406106-01,Sun Aug 04 04:51:00 UTC 2013,496), Diagnostic(000406106-01,Fri Mar 11 16:00:00 UTC 2011,496), Diagnostic(000406106-01,Tue Jul 10 13:45:00 UTC 2012,496), Diagnostic(000406106-01,Wed Jun 15 20:00:00 UTC 2005,496), Diagnostic(000406106-01,Tue Dec 29 13:30:00 UTC 2009,496), Diagnostic(000406106-01,Fri Jul 13 13:30:00 UTC 2012,496), Diagnostic(000406106-01,Thu Jul 26 03:40:00 UTC 2007,496), Diagnostic(000406106-01,Mon Jun 13 14:45:00 UTC 2005,496), Diagnostic(000406106-01,Wed Dec 24 18:00:00 UTC 2014,496), Diagnostic(000406106-01,Thu Mar 03 15:45:00 UTC 2011,496), Diagnostic(000406106-01,Wed Dec 31 15:00:00 UTC 2014,496), Diagnostic(000406106-01,Sat Jul 26 04:39:00 UTC 2008,496), Diagnostic(000406106-01,Thu Dec 31 20:30:00 UTC 2009,496)))

What I'm looking for:

     ID        CODE COUNT
((000406106-01,496), 20)

Edit: For clarity, here is how the RDD above is generated:

    val grpDiag = diagnostic.groupBy(diag => (diag.id, diag.code))

diagnostic is the ungrouped RDD of the data shown above.
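For reference, a minimal sketch of the assumed setup (the Diagnostic field names and types are inferred from the printed output above, not confirmed):

    import java.util.Date
    import org.apache.spark.rdd.RDD

    // Assumed shape of the case class, inferred from the output:
    // an id, an event timestamp and a diagnosis code.
    case class Diagnostic(id: String, date: Date, code: String)

    // The ungrouped RDD of Diagnostic events, loaded elsewhere.
    val diagnostic: RDD[Diagnostic] = ???

    // groupBy yields RDD[((String, String), Iterable[Diagnostic])];
    // each Iterable is materialized as the CompactBuffer shown above.
    val grpDiag: RDD[((String, String), Iterable[Diagnostic])] =
      diagnostic.groupBy(diag => (diag.id, diag.code))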

1 answer:

Answer 0 (score: 2):

If the second element of the tuple is a CompactBuffer and all you need is its length, mapValues with _.size should give you the result you want:

    rdd.mapValues(_.size)
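Applied to the grouped RDD from the question (reusing the grpDiag and Diagnostic sketched above, which are assumptions about the actual types), that would look roughly like:

    // Each value is an Iterable[Diagnostic] backed by a CompactBuffer,
    // so .size is the number of events for that (id, code) pair.
    val eventCounts: RDD[((String, String), Int)] = grpDiag.mapValues(_.size)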

In general, though, you should avoid grouping just to compute a count: groupBy shuffles every record to build the per-key collections, whereas reduceByKey combines partial counts within each partition before the shuffle. Use it instead:

    val diagnostics: RDD[Diagnostic] = ???
    diagnostics.map(d => ((d.id, d.code), 1L)).reduceByKey(_ + _)
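One further option, assuming the number of distinct (id, code) pairs is small enough to hold in driver memory (the question doesn't say): countByValue performs the same map-and-sum in a single call and returns a plain Scala Map on the driver instead of an RDD.

    // Driver-side shortcut; only appropriate when the distinct (id, code)
    // key set is modest, since the whole result is collected to the driver.
    val countsOnDriver: scala.collection.Map[(String, String), Long] =
      diagnostics.map(d => (d.id, d.code)).countByValue()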