I am new to Scala and Spark, and I'm working in the Spark shell. I need to group by and sort on the first three fields of this file, looking for duplicates. If I find duplicates within a group, I need to append a counter to the third field, starting at "1" and incrementing by "1" for each record in the duplicate group, resetting the counter back to "1" when a new group is read. If no duplicates are found, then just append a counter of "1".
The CSV file contains the following:
(" 00111"" 00111651",的" 4444" 下," PY"" MA&# 34)
(" 00111"" 00111651",的" 4444" 下," XX"" MA&# 34)
(" 00112"" 00112P11",的" 5555" 下," TA"" MA&# 34;)
val csv = sc.textFile("file.csv")
val recs = csv.map(line => line.split(","))
If I apply the logic correctly to the example above, the resulting RDD of recs would look like this:
(" 00111"" 00111651",的" 44441" 下," PY"" MA&# 34)
(" 00111"" 00111651",的" 44442" 下," XX"" MA&# 34)
(" 00112"" 00112P11",的" 55551" 下," TA"" MA&# 34)
Answer 0 (score: 3)
How about grouping the data, modifying it, and then flattening it back out:
val csv = sc.parallelize(List(
  "00111,00111651,4444,PY,MA",
  "00111,00111651,4444,XX,MA",
  "00112,00112P11,5555,TA,MA"
))
val recs = csv.map(_.split(","))

// Group on the first three fields
val grouped = recs.groupBy(line => (line(0), line(1), line(2)))

// Append the 1-based position within each group to the third field;
// String + Int concatenates, so "4444" + 1 == "44441"
val numbered = grouped.mapValues(dataList =>
  dataList.zipWithIndex.map { case (data, idx) =>
    data match {
      case Array(fst, scd, thd, rest @ _*) => Array(fst, scd, thd + (idx + 1)) ++ rest
    }
  })

// Drop the grouping key and flatten back to one record per line
val result = numbered.flatMap { case (_, values) => values }
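As a quick sanity check in the shell (a small addition on top of the answer's code), you can collect this small sample back to the driver and print it; note that RDD ordering is not guaranteed, so the rows may come back in any order, and collect() should be avoided on large datasets:

result.collect().foreach(arr => println(arr.mkString(",")))
// e.g. (order may vary):
// 00111,00111651,44441,PY,MA
// 00111,00111651,44442,XX,MA
// 00112,00112P11,55551,TA,MA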
Answer 1 (score: 2)
Same idea: group the data, modify it, then put it back together, this time on plain Scala collections of tuples.
val lists = List(("00111","00111651","4444","PY","MA"),
                 ("00111","00111651","4444","XX","MA"),
                 ("00112","00112P11","5555","TA","MA"))

// Group on the first three tuple elements
val grouped = lists.groupBy { case (a, b, c, d, e) => (a, b, c) }

// Append the 1-based position within each group to the third field
val indexed = grouped.mapValues(
  _.zipWithIndex
   .map { case ((a, b, c, d, e), idx) => (a, b, c + (idx + 1).toString, d, e) })

// Flatten the groups back into a single collection
val unwrapped = indexed.flatMap(_._2)
// List((00112,00112P11,55551,TA,MA),
//      (00111,00111651,44442,XX,MA),
//      (00111,00111651,44441,PY,MA))
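One version note, added here rather than taken from the original answer: on Scala 2.13+, Map.mapValues is deprecated in favour of going through a view, so an equivalent formulation would be:

val indexed = grouped.view
  .mapValues(_.zipWithIndex
    .map { case ((a, b, c, d, e), idx) => (a, b, c + (idx + 1).toString, d, e) })
  .toMap
val unwrapped = indexed.flatMap(_._2)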
A version that handles arrays (of any length >= 3):
val lists = List(Array("00111","00111651","4444","PY","MA"),
                 Array("00111","00111651","4444","XX","MA"),
                 Array("00112","00112P11","5555","TA","MA"))

// Arrays compare by reference, so convert the key to a List
// to get value-based equality in groupBy (see the note below)
val grouped = lists.groupBy(_.take(3).toList)

// Append the 1-based position within each group to the third field,
// passing any remaining fields through untouched
val indexed = grouped.mapValues(
  _.zipWithIndex
   .map { case (Array(a, b, c, rest @ _*), idx) => Array(a, b, c + (idx + 1).toString) ++ rest })

val unwrapped = indexed.flatMap(_._2)
// List(Array(00112, 00112P11, 55551, TA, MA),
//      Array(00111, 00111651, 44442, XX, MA),
//      Array(00111, 00111651, 44441, PY, MA))
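One caveat, added here as a note rather than part of the original answer: the .toList in the groupBy key above matters because Scala arrays compare by reference, not by contents. Grouping directly on _.take(3) would put every row in its own singleton group, so every counter would stay at "1". A quick illustration:

Array(1, 2) == Array(1, 2)   // false: arrays use reference equality
List(1, 2) == List(1, 2)     // true: lists compare element by element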