I'm new to Stack Overflow and to Spark; basically I'm doing RDD transformations.
My input data:
278222631,2763985,10.02.12,01.01.53,Whatsup,NA,Email,Halter,wagen,28.06.12,313657794,VW,er,i,B,0,23.11.11,234
298106482,2780663,22.02.12,22.02.12,Whatsup,NA,WWW,Halter,wagen,26.06.12,284788860,VW,er,i,B,0,02.06.04,123
The format of my RDD:
val dateCov: RDD[(Long, Long, String, String, String, String, String, String, String, String, Long, String, String, String, String, String, String, Long)]
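For reference, here is a minimal sketch (not in the original post) of how such an RDD might be built from lines like the sample above; the file path and the parsing helper are assumptions:

// Hypothetical parsing helper: splits one CSV line into the 18-field tuple described above.
def parseLine(line: String) = {
  val f = line.split(",")
  (f(0).toLong, f(1).toLong, f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9),
   f(10).toLong, f(11), f(12), f(13), f(14), f(15), f(16), f(17).toLong)
}
val dateCov = sc.textFile("input.csv").map(parseLine) // "input.csv" is a placeholder path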
I perform a reduceByKey transformation, mapping to [(k, k), (v)] with columns 1 and 11 as the key and column 18 as the value, and then apply a function within reduceByKey.
Example:
val reducedSortedRDD = dateCov
  .map(r => ((r._1, r._11), r._18))
  .reduceByKey((x, y) => math.min(x, y)) // find minimum diff
  .map(r => (r._1._1, r._1._2, r._2))
  .sortBy(_._1, true)
Is it possible to get all of the other columns back after the reduceByKey function, i.e. the return type of reducedSortedRDD should be
reducedSortedRDD: RDD[(Long, Long, String, String, String, String, String, String, String, String, Long, String, String, String, String, String, String, Long)]
instead of reducedSortedRDD: RDD[(Long, Long, Long)], as it is in this case?
I'm using Spark 1.4.
Answer (score 2):
As far as I know, you either need to carry all of the columns along in the reduceByKey function (keep in mind the overhead of shuffling that extra data), or you can join the reducedSortedRDD back to your original data.
To carry all the columns along, it could look something like this:
val reducedSortedRDD = dateCov
  .map(r => ((r._1, r._11), (r._18, r._2, r._3, ..., r._17)))            // carry every column along in the value
  .reduceByKey((value1, value2) => if (value1._1 < value2._1) value1 else value2) // keep the value tuple with the minimum diff
  .map { case (key, value) => (key._1, key._2, value._2, value._3, ..., value._17, value._1) }
  .sortBy(_._1, true)
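As a concrete, runnable illustration of the same idea, here is a sketch using a cut-down, hypothetical 4-column schema (id, group, label, diff) standing in for the 18 real columns:

val small = sc.parallelize(Seq(
  (278222631L, 313657794L, "VW", 234L),
  (278222631L, 313657794L, "BMW", 120L),
  (298106482L, 284788860L, "VW", 123L)
))

val reducedSmall = small
  .map { case (id, group, label, diff) => ((id, group), (diff, label)) } // key on the first two columns, keep the rest in the value
  .reduceByKey((a, b) => if (a._1 < b._1) a else b)                      // keep the whole value tuple that has the minimum diff
  .map { case ((id, group), (diff, label)) => (id, group, label, diff) } // flatten back into a single row
  .sortBy(_._1, true)

For the key (278222631, 313657794) this keeps the row with diff 120 and drops the one with 234, while the label column survives the reduce.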
For the join, it would look something like this:
val keyValuedDateCov = dateCov
  .map(r => ((r._1, r._11, r._18), (r._1, r._2, r._3, ..., r._17)))      // key on columns 1, 11 and 18

val reducedRDD = dateCov
  .map(r => ((r._1, r._11), r._18))
  .reduceByKey((x, y) => math.min(x, y))                                 // find minimum diff
  .map { case (key, value) => ((key._1, key._2, value), ()) }            // re-key on columns 1, 11 and the minimum; the value is only a placeholder

val reducedSortedRDD = reducedRDD
  .join(keyValuedDateCov)                                                // keeps only the original rows whose column 18 equals the minimum
  .map { case (key, (_, original)) => (key._1, key._2, original._1, original._2, original._3, ..., original._17, key._3) }
  .sortBy(_._1, true)
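And the join variant on the same hypothetical 4-column data, reusing the small RDD from the sketch above:

val keyedSmall = small
  .map { case (id, group, label, diff) => ((id, group, diff), label) }   // key on (id, group, diff)

val minDiffs = small
  .map { case (id, group, _, diff) => ((id, group), diff) }
  .reduceByKey((x, y) => math.min(x, y))                                 // minimum diff per (id, group)
  .map { case ((id, group), diff) => ((id, group, diff), ()) }           // re-key so the join only matches the minimum rows

val joinedSmall = minDiffs
  .join(keyedSmall)                                                      // keeps only the rows whose diff equals the minimum
  .map { case ((id, group, diff), (_, label)) => (id, group, label, diff) }
  .sortBy(_._1, true)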
The downside of the join version is that if several rows in the original data have exactly the same values in columns 1, 11 and 18, the final result will also contain several rows with those values, so it is not properly reduced. If the data is guaranteed never to have more than one row with the same values in those columns, there should be no problem.