How to extract CSV file columns in a Scala Spark RDD

Asked: 2016-03-29 15:55:31

Tags: scala apache-spark

Suppose this is my CSV file:

21628000000;21650466094
21697098269;21653506459
21653000000;21624124815
21624124815;21650466094
21650466094;21650466094
21624124815;21697098269
21697098269;21628206459
21628000000;21624124815
21650466094;21628206459
21628000000;21628206459

I want to count the number of occurrences of each value in the first column, producing:

(21628000000,4)
(21697098269,2)
(21624124815,2)
(21650466094,2)

Here is what I tried:

import org.apache.spark.{SparkConf, SparkContext}

object CountOcc {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Word Count")
      .setMaster("local")

    val sc = new SparkContext(conf)

    // Load the text file into an RDD[String]
    val textFile = sc.textFile(args(0))

    // Read each line and split it into fields
    val words = textFile.flatMap(line => line.split(";"))
    val cols = words.map(_.trim)
    println(s"${cols(0)}") // error
    cols.foreach(println)
    sc.stop()
  }
}

I get the error: org.apache.spark.rdd.RDD[String] does not take parameters

So I cannot use cols(0) or cols(1). How can I extract just the first column so I can count its occurrences?
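For what it's worth, an RDD is not indexable like a local collection; individual elements come back to the driver only through actions. A small illustrative sketch using the cols RDD from the code above:

println(cols.first())         // returns the first element of the RDD
cols.take(2).foreach(println) // returns the first two elements as a local Array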

3 Answers:

Answer 0 (score: 3)

Try:

val words = textFile
  .map(line => line.split(";")(0)) // keep only the first field of each line
  .map(p => (p, 1))                // pair each value with a count of 1
  .reduceByKey(_ + _)              // sum the counts per distinct value
  .collect()
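Since collect() brings the result back to the driver as a local Array[(String, Int)], it can be printed directly. A short usage sketch (the descending sort is an extra touch, not part of the original answer):

words.sortBy(-_._2).foreach(println) // prints (value, count) pairs, highest count first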

Answer 1 (score: 0)

If I try:

val words = textFile.flatMap (line => line.split(";")(1))

I get:

2
1
6
5
0
4
6
6
0
9
4
2
1
6
5
3
5
0
6
4
5
9
2
1
6.....
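That output appears because flatMap flattens whatever the function returns, and a String is itself a sequence of Chars, so each character of the second field becomes its own RDD element. A minimal sketch contrasting map and flatMap on the same data (sc is the SparkContext from the question):

val lines = sc.parallelize(Seq("21628000000;21650466094"))

// map keeps one element per line: RDD[String]
lines.map(_.split(";")(1)).collect()     // Array("21650466094")

// flatMap flattens each String into its characters: RDD[Char]
lines.flatMap(_.split(";")(1)).collect() // Array('2', '1', '6', '5', ...)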

Answer 2 (score: 0)

This Scala job will print the first column of the CSV file.

import org.apache.spark.sql.SparkSession

object CountOcc {
  def main(args: Array[String]) {
    val spark = SparkSession.builder()
      .appName("Read CSV")
      .getOrCreate()

    // Needed for the implicit Encoder used by Dataset.map below
    import spark.implicits._

    // The sample data is semicolon-delimited, so override the default
    // comma separator
    val csvDF = spark.read
      .option("sep", ";")
      .csv(args(0))

    // Take the first column (named _c0 by default) as a Dataset[String]
    val firstColumnList = csvDF.map(x => x.getString(0))

    firstColumnList.foreach(println(_))

    spark.stop()
  }
}
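The question actually asks for per-value counts, which the DataFrame API expresses as a groupBy followed by count. A short sketch on the same csvDF (_c0 is the default name Spark assigns to the first column of a headerless CSV):

// Count how many times each value occurs in the first column
csvDF.groupBy("_c0").count().show(false)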

Hope it helps.