How to sum a String column in RDD format?

Asked: 2016-11-15 00:16:30

Tags: scala apache-spark rdd

I'm new to Spark. I've loaded a CSV file with sc.textFile, and I want to use reduceByKey to sum a column that has String type but contains numbers. When I try something like reduceByKey(_ + _), it just puts the numbers next to each other. What should I do? Should I convert the column?
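For illustration, a minimal sketch of the behavior described (the key and values here are hypothetical):

// On String values, + means concatenation, so reduceByKey(_ + _)
// glues the digits together instead of adding them.
val pairs = sc.parallelize(Seq(("a", "1"), ("a", "2")))
pairs.reduceByKey(_ + _).collect()   // Array((a,12)) -- the String "12", not 3.0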

1 answer:

Answer 0 (score: 3)

You need to parse the strings, for example:

scala> val rdd = sc.parallelize(Seq(("a", "1"), ("a", "2.7128"), ("b", "3.14"),
       ("b", "4"), ("b", "POTATO")))
rdd: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[57] at parallelize at <console>:27

scala> def parseDouble(s: String) = try { Some(s.toDouble) } catch { case _: NumberFormatException => None }
parseDouble: (s: String)Option[Double]

scala> val reduced = rdd.flatMapValues(parseDouble).reduceByKey(_+_)
reduced: org.apache.spark.rdd.RDD[(String, Double)] = ShuffledRDD[59] at reduceByKey at <console>:31

scala> reduced.collect.foreach{println}
(a,3.7128)
(b,7.140000000000001)
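To tie this back to the CSV loaded with sc.textFile, a sketch along the following lines should work; the file name, delimiter, and column positions are assumptions:

// Hypothetical sketch: assumes a comma-separated file where
// column 0 holds the key and column 1 holds the numeric string.
val sums = sc.textFile("data.csv")
  .map(_.split(","))
  .map(cols => (cols(0), cols(1)))
  .flatMapValues(parseDouble)   // rows that fail to parse are dropped
  .reduceByKey(_ + _)

Using flatMapValues with an Option means unparseable values (like "POTATO" above) are silently discarded rather than crashing the job.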