Splitting a Spark stream on a delimiter

Date: 2015-07-22 15:54:10

Tags: scala apache-spark spark-streaming

I'm trying to split my Spark stream on a delimiter and save each chunk to a new file.

Each of my RDDs appears to be partitioned according to the delimiter.

I'm having difficulty either getting one delimited message per RDD, or being able to save each partition individually to a new part-000... file.

Any help would be much appreciated. Thanks.

 import org.apache.spark.SparkConf
 import org.apache.spark.streaming.{Seconds, StreamingContext}
 import org.apache.spark.streaming.receiver.ActorHelper
 import akka.actor.{Actor, Props}
 import akka.camel.{CamelMessage, Consumer}

 val sparkConf = new SparkConf().setAppName("DataSink").setMaster("local[8]").set("spark.files.overwrite", "false")
 val ssc = new StreamingContext(sparkConf, Seconds(2))

 // Camel consumer actor that reads HL7 messages from RabbitMQ and
 // pushes each message body into the stream via ActorHelper.store.
 class RouteConsumer extends Actor with ActorHelper with Consumer {
    def endpointUri = "rabbitmq://server:5672/myexc?declare=false&queue=in_hl7_q"
    def receive = {
        case msg: CamelMessage =>
           val m = msg.withBodyAs[String]
           store(m.body)
     }
 }

 val dstream = ssc.actorStream[String](Props(new RouteConsumer()), "SparkReceiverActor")
 // NB: split() treats its argument as a regex, so "MSH|^~\\&" is parsed as
 // the alternation MSH|(^~\&), which effectively splits on the literal "MSH".
 val splitStream = dstream.flatMap(_.split("MSH|^~\\&"))
 splitStream.foreachRDD(rdd => rdd.saveAsTextFile("file:///home/user/spark/data"))

 ssc.start()
 ssc.awaitTermination()
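
As an aside, if the intent is to split on the entire literal HL7 header "MSH|^~\&" rather than on the regex alternation above, the delimiter has to be quoted so its metacharacters are matched literally. A minimal sketch, reusing the dstream above (splitStream2 is just an illustrative name):

 import java.util.regex.Pattern

 // Pattern.quote wraps the string in \Q...\E, so | and ^ are matched
 // literally; empty leading chunks from the split are dropped.
 val literalDelim = Pattern.quote("MSH|^~\\&")
 val splitStream2 = dstream.flatMap(_.split(literalDelim).filter(_.nonEmpty))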

1 Answer:

Answer (score: 2)

You can't control which part-NNNNN (partition) file gets which output, but you can write to different directories. The "easiest" way to do this kind of column split is with separate map statements (something like SELECT statements), along these lines, assuming you'll have n array elements after the split:

 ...
 val dstream2 = dstream.map(_.split("..."))  // like above, but with map instead of flatMap
 dstream2.cache()  // very important for what follows: this stream is read repeatedly
 val dstreams = new Array[DStream[String]](n)
 for (i <- 0 to n - 1) {
   dstreams(i) = dstream2.map(array => array(i) /* or similar */)
   dstreams(i).saveAsTextFiles(rootDir + "/" + i)
 }
 ssc.start()
 ssc.awaitTermination()
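
For concreteness, here is a minimal end-to-end sketch of that approach applied to the question's HL7 stream, assuming the fields of each message are separated by "|" and that n (the number of fields to extract) and rootDir are placeholders you define. Note that saveAsTextFiles(prefix) writes every batch to its own prefix-<TIME_IN_MS> directory, so column i ends up under rootDir/i-<timestamp>/part-00000 and so on:

 import org.apache.spark.streaming.dstream.DStream

 val n = 3                                     // assumed field count, for illustration only
 val rootDir = "file:///home/user/spark/data"  // reusing the path from the question

 // DStream of field arrays; cached because each column re-reads it.
 val fieldStream: DStream[Array[String]] = dstream.map(_.split("\\|"))
 fieldStream.cache()

 for (i <- 0 until n) {
   // Keep only messages that actually contain field i, then project it out.
   val column: DStream[String] = fieldStream.filter(_.length > i).map(_(i))
   column.saveAsTextFiles(rootDir + "/" + i)   // rootDir/<i>-<TIME_IN_MS>/part-00000...
 }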