Convert each record of an RDD to an Array[Map] using Scala and Spark

Asked: 2016-04-15 07:22:28

Tags: scala apache-spark

My RDD consists of \n-delimited records that look like this:

Single RDD

k1=v1,k2=v2,k3=v3
k1=v1,k2=v2,k3=v3
k1=v1,k2=v2,k3=v3

I want to convert it into an Array[Map[k,v]], where each element of the Array is a distinct Map[k,v] corresponding to one record. The Array will contain N such maps, depending on how many records the single RDD holds.
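
For example, for the three records above, the result I am after would look roughly like this:

Array(
  Map(k1 -> v1, k2 -> v2, k3 -> v3),
  Map(k1 -> v1, k2 -> v2, k3 -> v3),
  Map(k1 -> v1, k2 -> v2, k3 -> v3)
)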

I am new to Scala and Spark; any help with this conversion would be appreciated.

object SparkApp  extends Logging with App {


  override def main(args: Array[ String ]): Unit = {
    val myConfigFile = new File("../sparkconsumer/conf/spark.conf")
    val fileConfig = ConfigFactory.parseFile(myConfigFile).getConfig(GlobalConstants.CONFIG_ROOT_ELEMENT)
    val propConf = ConfigFactory.load(fileConfig)
    val topicsSet = propConf.getString(GlobalConstants.KAFKA_WHITE_LIST_TOPIC).split(",").toSet
    val kafkaParams = Map[ String, String ]("metadata.broker.list" -> propConf.getString(GlobalConstants.KAFKA_BROKERS))


    //logger.info(message = "Hello World , You are entering Spark!!!")
    val conf = new SparkConf().setMaster("local[2]").setAppName(propConf.getString(GlobalConstants.JOB_NAME))
    conf.set("HADOOP_HOME", "/usr/local/hadoop")
    conf.set("hadoop.home.dir", "/usr/local/hadoop")
    //Lookup

    // logger.info("Window of 5 Seconds Enabled")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("/tmp/checkpoint")

    val apiFile = ssc.sparkContext.textFile(propConf.getString(GlobalConstants.API_FILE))
    val arrayApi = ssc.sparkContext.broadcast(apiFile.distinct().collect())

    val nonApiFile = ssc.sparkContext.textFile(propConf.getString(GlobalConstants.NON_API_FILE))
    val arrayNonApi = ssc.sparkContext.broadcast(nonApiFile.distinct().collect())


    val messages = KafkaUtils.createDirectStream[ String, String, StringDecoder, StringDecoder ](ssc, kafkaParams, topicsSet)
    writeTOHDFS2(messages)
    ssc.start()
    ssc.awaitTermination()
  }



  def writeTOHDFS2(messages: DStream[ (String, String) ]): Unit = {
    val records = messages.window(Seconds(10), Seconds(10))
    val k = records.transform(rdd => rdd.map(r => r._2)).filter(x => filterNullImpressions(x))

    k.foreachRDD { singleRdd =>
      if (singleRdd.count() > 0) {


        val maps = singleRdd.map(line => line.split("\n")
          .flatMap(x => x.split(","))
          .flatMap(x => x.split("="))
          .foreach(x => new mutable.HashMap().put(x(0), x(1))))


        val r = scala.util.Random
        val sdf = new SimpleDateFormat("yyyy/MM/dd/HH/mm")
        maps.saveAsTextFile("hdfs://localhost:8001/user/hadoop/spark/" + sdf.format(new Date())+r.nextInt)
      }
    }

  }

}

1 Answer:

Answer 0 (score: 3):

Here is some code that should be fairly self-explanatory.

val lines = "k1=v1,k2=v2,k3=v3\nk1=v1,k2=v2\nk1=v1,k2=v2,k3=v3,k4=v4"

val maps = lines.split("\n")
  .map(line => line.split(",")
    .map(kvPairString => kvPairString.split("="))
    .map(kvPairArray => (kvPairArray(0), kvPairArray(1))))
  .map(_.toMap)

// maps is of type Array[Map[String, String]]

println(maps.mkString("\n"))

//  prints:
//  Map(k1 -> v1, k2 -> v2, k3 -> v3)
//  Map(k1 -> v1, k2 -> v2)
//  Map(k1 -> v1, k2 -> v2, k3 -> v3, k4 -> v4)
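
If you need this inside your streaming job rather than on a plain String, here is a minimal sketch (untested) that reuses the singleRdd from your foreachRDD block and assumes each RDD element holds one or more \n-delimited records, as in your code. The helper name toMap is mine:

// parse one "k1=v1,k2=v2,..." record into a Map
def toMap(line: String): Map[String, String] =
  line.split(",")
    .map(_.split("=", 2))
    .collect { case Array(k, v) => k -> v }   // keep only well-formed key=value pairs
    .toMap

// inside foreachRDD { singleRdd => ... }
val maps: Array[Map[String, String]] = singleRdd
  .flatMap(_.split("\n"))   // split multi-record messages into individual records
  .map(toMap)
  .collect()

Note that collect() pulls the whole window's data to the driver, so only do this if each window is small; otherwise keep working with the RDD[Map[String, String]] that map(toMap) gives you.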

A word of advice - SO is not a "write my code for me" platform. I know it is hard to dive into Scala and Spark, but next time please try to solve the problem yourself first and post what you have tried so far and what issues you ran into.