How to save the top N of a sorted DStream to a text file in Java

Time: 2015-03-25 22:54:57

Tags: java apache-spark spark-streaming

I have a sorted DStream, which I can print as follows:

     sorted.foreach(
         new Function<JavaPairRDD<Double, String>, Void>() {
             public Void call(JavaPairRDD<Double, String> rdd) {
                 String out = "\n Top Values: \n";
                 for (Tuple2<Double, String> t : rdd.take(10)) {
                     out = out + t.toString() + "\n";
                 }
                 System.out.println(out);
                 return null;
             }
         });

However, instead of just printing the 10 values, I would like to save them to a text file. *Note that I want the text file to contain only the top ten values, not the entire DStream.

I would appreciate any help. Also, I am writing my code in Java, not Scala.

2 Answers:

Answer 0 (score: 1)

Assuming your input is already sorted, and done in Scala:

val location = "hdfs://..."
val target = 10
sorted.foreachRDD { (rdd, time) =>
  // Determine how many elements precede each partition (cumulative sizes).
  val partitionElemCounts = rdd.mapPartitions(items =>
    Iterator(items.size)).collect().scanLeft(0) { case (sum, e) => sum + e }
  // From each partition, take only as many elements as are still needed.
  val nRdd = rdd.mapPartitionsWithIndex { (partition, items) =>
    items.take(math.max(0, target - partitionElemCounts(partition)))
  }
  // Append the time to the path so each batch is written to a different directory.
  val out = location + time
  nRdd.saveAsTextFile(out)
}
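
Since the question asks for Java, here is a rough sketch of the same end result with the Spark 1.x Java API. It is not a literal translation of the partition-counting trick above: because the stream is already sorted, take(10) simply brings the top ten to the driver, and they are written back out as a tiny RDD. The output path is a placeholder, and sorted stands for the JavaPairDStream from the question.

    import java.util.List;

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function2;
    import org.apache.spark.streaming.Time;

    import scala.Tuple2;

    // ... inside the streaming setup, with sorted as in the question

    final String location = "hdfs://...";  // placeholder output path

    sorted.foreachRDD(new Function2<JavaPairRDD<Double, String>, Time, Void>() {
        public Void call(JavaPairRDD<Double, String> rdd, Time time) {
            // The stream is already sorted, so take(10) returns the top ten values to the driver.
            List<Tuple2<Double, String>> top = rdd.take(10);
            // Wrap them in a small RDD and write one output directory per batch.
            JavaSparkContext jsc = JavaSparkContext.fromSparkContext(rdd.context());
            jsc.parallelize(top).saveAsTextFile(location + time.milliseconds());
            return null;
        }
    });

Collecting ten elements to the driver per batch is cheap; the mapPartitionsWithIndex approach above only matters when N is large.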

Answer 1 (score: 0)

You can do it like this:

object DStreamTopN {

  def main(args: Array[String]) {

    StreamingExamples.setStreamingLogLevels()

    val sparkConf = new SparkConf().setAppName("DStreamTopN").setMaster("local[3]")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    ssc.checkpoint("/tmp/checkpoint")

    // UdpReceiver is a custom receiver that listens on UDP port 1514.
    val lines = ssc.receiverStream(new UdpReceiver(1514, "UTF-8"))

    // Word count per batch.
    val wc = lines.flatMap(_.split(" ")).map(_ -> 1).reduceByKey(_ + _)

    // For every batch, sort by count (descending), keep the top 3,
    // and turn the result back into an RDD.
    val sort = wc.transform((rdd: RDD[(String, Int)]) => {
      val topN = rdd.sortBy(_._2, false).take(3)
      rdd.sparkContext.makeRDD(topN)
    })

    sort.foreachRDD(_.foreach(println))

    ssc.start()
    ssc.awaitTermination()
  }
}
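
If, like the original poster, you need this in Java and want each batch's top N written to files rather than printed, a sketch of the same transform-based idea could look like the following (again against the Spark 1.x Java API; wordCounts, the output path, and the CountDesc class are placeholder names, not part of the original answer):

    import java.io.Serializable;
    import java.util.Comparator;
    import java.util.List;

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;
    import org.apache.spark.streaming.api.java.JavaPairDStream;

    import scala.Tuple2;

    // takeOrdered ships the comparator to the executors, so it must be serializable.
    class CountDesc implements Comparator<Tuple2<String, Integer>>, Serializable {
        public int compare(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
            return Integer.compare(b._2(), a._2());   // descending by count
        }
    }

    // ... inside the streaming job, with wordCounts: JavaPairDStream<String, Integer>

    JavaPairDStream<String, Integer> top3 = wordCounts.transformToPair(
        new Function<JavaPairRDD<String, Integer>, JavaPairRDD<String, Integer>>() {
            public JavaPairRDD<String, Integer> call(JavaPairRDD<String, Integer> rdd) {
                // Keep only the three pairs with the highest counts.
                List<Tuple2<String, Integer>> topN = rdd.takeOrdered(3, new CountDesc());
                return JavaSparkContext.fromSparkContext(rdd.context())
                        .parallelizePairs(topN);
            }
        });

    // Write each batch's top three to "<prefix>-<batch time>.txt" instead of printing.
    top3.dstream().saveAsTextFiles("hdfs://.../top", "txt");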