Looping Through a Map in Spark Scala

Asked: 2017-12-21 13:58:28

Tags: scala csv apache-spark twitter dataset

In this code we have two files: athletes.csv, which contains athlete names, and twitter.test, which contains tweet messages. For every line in twitter.test we want to find the athlete name it matches. We apply a map function to store the athlete names and want to iterate over all the names for every line in the test file.

import java.nio.charset.CodingErrorAction
import scala.io.{Codec, Source}
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkContext

object twitterAthlete {

  def loadAthleteNames(): Map[String, String] = {

    // Handle character encoding issues:
    implicit val codec = Codec("UTF-8")
    codec.onMalformedInput(CodingErrorAction.REPLACE)
    codec.onUnmappableCharacter(CodingErrorAction.REPLACE)

    // Build a Map from athlete name (column 1) to column 7, populated from athletes.csv
    var athleteInfo: Map[String, String] = Map()
    val lines = Source.fromFile("../athletes.csv").getLines()
    for (line <- lines) {
      val fields = line.split(',')
      if (fields.length > 1) {
        athleteInfo += (fields(1) -> fields(7))
      }
    }

    athleteInfo
  }

  def parseLine(line: String): String = {
    // Note: this reloads the whole athletes file on every call
    val athleteInfo = loadAthleteNames()
    var hello = ""
    for ((k, v) <- athleteInfo) {
      if (line.contains(k)) {
        hello = k
      }
    }
    hello
  }


  def main(args: Array[String]) {
    Logger.getLogger("org").setLevel(Level.ERROR)

    val sc = new SparkContext("local[*]", "twitterAthlete")

    val lines = sc.textFile("../twitter.test")
    val athleteInfo = loadAthleteNames()

    // Keep only the tweet text (third field) of well-formed lines no longer than 140 characters
    val splitting = lines.map(x => x.split(";")).map(x => if (x.length == 4 && x(2).length <= 140) x(2))

    val container = splitting.map(x => for ((key, value) <- athleteInfo) if (x.toString().contains(key)) { key }).cache

    container.collect().foreach(println)

    // val mapping = container.map(x => (x, 1)).reduceByKey(_ + _)
    // mapping.collect().foreach(println)
  }
}

The first file (athletes.csv) looks like:

id,name,nationality,sex,height........  
001,Michael,USA,male,1.96 ...
002,Json,GBR,male,1.76 ....
003,Martin,female,1.73 . ...

The second file (twitter.test) looks like:

time, id , tweet .....
12:00, 03043, some message that contain some athletes names  , .....
02:00, 03023, some message that contain some athletes names , .....
...something like that.

But after running this code I get empty results. Any suggestions would be greatly appreciated.

The result I get is empty:

()....
()...
()...

But the result I expect is something like:

(name,1)
(other name,1)

2 Answers:

Answer 0 (score: 1):

I think you should start with the simplest option first...

I would use DataFrames, so you can use the built-in CSV parsing and take advantage of Catalyst, Tungsten, etc.

Then you can use the built-in Tokenizer to split the tweets into words, explode, and do a simple join. Depending on the size of the athlete-name data, you will end up with a more optimized broadcast join and avoid a shuffle.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.ml.feature.Tokenizer

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._  // enables the 'colName column syntax below

// Assumes both CSVs are read with headers so the "tweet" and "name" columns resolve
val tweets = spark.read.format("csv").load(...)
val athletes = spark.read.format("csv").load(...)

val tokenizer = new Tokenizer()
tokenizer.setInputCol("tweet")
tokenizer.setOutputCol("words")

val tokenized = tokenizer.transform(tweets)

val exploded = tokenized.withColumn("word", explode('words))

val withAthlete = exploded.join(athletes, 'word === 'name)

withAthlete.select(exploded("id"), 'name).show()
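What the tokenize → explode → join pipeline above computes can be sketched in plain Scala (collections standing in for DataFrames; the `TokenizeExplodeJoin` object and sample rows are hypothetical). Note that Spark's Tokenizer lowercases its input, so the sketch compares lowercased words against lowercased names:

```scala
object TokenizeExplodeJoin {
  // Hypothetical sample rows: athletes as (name, nationality), tweets as (id, tweet)
  val athletes = Seq(("Michael", "USA"), ("Json", "GBR"))

  def matches(tweets: Seq[(String, String)]): Seq[(String, String)] =
    for {
      (id, tweet) <- tweets
      word <- tweet.toLowerCase.split("\\s+").toSeq // Tokenizer: lowercase, split on whitespace
      (name, _) <- athletes                          // then "explode" each word and join
      if word == name.toLowerCase                    // the 'word === 'name join condition
    } yield (id, name)

  def main(args: Array[String]): Unit =
    matches(Seq(("1", "great run by Michael"), ("2", "no names"))).foreach(println)
}
```

One tweet row fans out into one output row per matched athlete, just as the explode-then-join does on DataFrames.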

Answer 1 (score: 1):

You need to use yield to return a value from the for comprehension inside map:

 val container = splitting.map(x => for ((key, value) <- athleteInfo if x.toString().contains(key)) yield (key, 1)).cache
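To see the fixed logic end to end, here is a plain-Scala sketch (a `Seq` standing in for the RDD; the `FixedPipeline` object and sample names are hypothetical) that also carries out the commented-out `(name, 1)` counting step from the question:

```scala
object FixedPipeline {
  val athleteInfo = Map("Michael" -> "USA", "Json" -> "GBR")

  // flatMap plays the role of the RDD map + yield,
  // groupBy + sum the role of reduceByKey(_ + _)
  def countMatches(tweets: Seq[String]): Map[String, Int] =
    tweets
      .flatMap(t => for ((key, _) <- athleteInfo if t.contains(key)) yield (key, 1))
      .groupBy(_._1)
      .map { case (name, pairs) => name -> pairs.map(_._2).sum }

  def main(args: Array[String]): Unit =
    countMatches(Seq("Michael ran fast", "Json and Michael met")).foreach(println)
}
```

With `yield` in place, each tweet contributes one `(name, 1)` pair per matching athlete, and the aggregation produces the expected `(name, count)` output instead of `()`.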