Spark streaming program gets stuck in a reduce transformation

Date: 2016-06-18 08:57:47

Tags: java spark-streaming

I'm struggling to find a solution to this problem.

Basically, I want to run a first computation on an RDD and, inside that computation, call a function that runs a second computation; the value it returns then feeds back into the first computation so it can continue.

The problem is that the program hangs whenever I try to pull a value out of the second computation with reduce or foreach. If I leave out both of those operations, the program runs fine.
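
Boiled down, the shape of it is this (a minimal sketch with throwaway RDDs, not my real code, which is further down):

JavaRDD<Integer> rddA = sc.parallelize(Arrays.asList(1, 2, 3));
JavaRDD<Integer> rddB = sc.parallelize(Arrays.asList(4, 5, 6));
rddA
        .map(x -> x + rddB.reduce(Integer::sum))  //second computation: an action on rddB from inside a transformation of rddA
        .foreach(System.out::println);            //the program hangs once foreach triggers the job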

Here is the log:

2016-06-18 08:04:51,310 [main] INFO  org.apache.spark.util.Utils - Successfully started service 'sparkDriver' on port 44493.
2016-06-18 08:04:51,622 [sparkDriverActorSystem-akka.actor.default-dispatcher-4] INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2016-06-18 08:04:51,668 [sparkDriverActorSystem-akka.actor.default-dispatcher-4] INFO  Remoting - Starting remoting
2016-06-18 08:04:51,831 [sparkDriverActorSystem-akka.actor.default-dispatcher-4] INFO  Remoting - Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@142.100.1.5:45617]
2016-06-18 08:04:51,841 [main] INFO  org.apache.spark.util.Utils - Successfully started service 'sparkDriverActorSystem' on port 45617.
2016-06-18 08:04:51,861 [main] INFO  org.apache.spark.SparkEnv - Registering MapOutputTracker
2016-06-18 08:04:51,886 [main] INFO  org.apache.spark.SparkEnv - Registering BlockManagerMaster
2016-06-18 08:04:51,999 [main] INFO  org.apache.spark.SparkEnv - Registering OutputCommitCoordinator
2016-06-18 08:04:52,230 [main] INFO  org.spark-project.jetty.server.Server - jetty-8.y.z-SNAPSHOT
2016-06-18 08:04:52,280 [main] INFO  org.spark-project.jetty.server.AbstractConnector - Started SelectChannelConnector@0.0.0.0:4040
2016-06-18 08:04:52,280 [main] INFO  org.apache.spark.util.Utils - Successfully started service 'SparkUI' on port 4040.
2016-06-18 08:04:52,282 [main] INFO  org.apache.spark.ui.SparkUI - Started SparkUI at http://142.100.1.5:4040
2016-06-18 08:04:52,384 [main] INFO  org.apache.spark.util.Utils - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37259.
2016-06-18 08:04:52,385 [main] INFO  org.apache.spark.network.netty.NettyBlockTransferService - Server created on 37259
2016-06-18 08:04:53,201 [main] INFO  org.apache.spark.SparkContext - Created broadcast 0 from textFile at HelloSparkWorld.java:31
2016-06-18 08:04:53,323 [main] INFO  org.apache.spark.SparkContext - Starting job: foreach at HelloSparkWorld.java:43
2016-06-18 08:04:53,422 [dag-scheduler-event-loop] INFO  org.apache.spark.SparkContext - Created broadcast 1 from broadcast at DAGScheduler.scala:1006
2016-06-18 08:04:53,908 [Executor task launch worker-0] INFO  org.apache.hadoop.mapred.FileInputFormat - Total input paths to process : 1
2016-06-18 08:04:53,945 [Executor task launch worker-0] INFO  org.apache.spark.SparkContext - Starting job: reduce at HelloSparkWorld.java:73
2016-06-18 08:04:53,962 [dag-scheduler-event-loop] INFO  org.apache.spark.SparkContext - Created broadcast 2 from broadcast at DAGScheduler.scala:1006
2016-06-18 08:04:53,971 [dag-scheduler-event-loop] INFO  org.apache.spark.SparkContext - Created broadcast 3 from broadcast at DAGScheduler.scala:1006

And here is the code (the "reduce at HelloSparkWorld.java:73" in the log is the reduce inside getNumber):

import java.util.ArrayList;
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class HelloSparkWorld {

    private static JavaSparkContext sc;
    private static JavaPairRDD<String, Float> doc;
    public static void main(String... argv) {

        //Create a spark config with a single local instance
        SparkConf config = new SparkConf().setAppName("HelloSparkWorld").setMaster("local[1]");

        //Create a context
        sc = new JavaSparkContext(config);

        //Load the join doc: each line is expected to be "key<TAB>float"
        doc = sc
                .textFile("src/main/resources/file.txt")
                .mapToPair(s -> new Tuple2<>(s.split("\\t+")[0], Float.parseFloat(s.split("\\t+")[1])));

        //First computation: calls getNumber on "line", which runs the second computation
        String line = "hello world";
        ArrayList<String> l = new ArrayList<String>();
        l.add(line);
        sc
                .parallelize(l)
                .flatMap(s -> Arrays.asList(s.split("\\s+")))
                .mapToPair(s -> new Tuple2<>(s, getNumber(line)))  //getNumber starts a second Spark job from inside this transformation
                .foreach(s -> System.out.println(s._1()));
    }

    public static float getNumber(String text) {
        ArrayList<String> l = new ArrayList<String>();
        l.add(text);


        //Second computation: the reduce below is the job the log shows starting but never finishing
        Tuple2<String, Tuple2<Integer, Float>> tuple = sc
                .parallelize(l)
                .flatMap(s -> Arrays.asList(s.split("\\s+")))
                .mapToPair(s -> new Tuple2<>(s, 1))
                .join(doc)
                .reduce( (a, b) -> new Tuple2<>(a._1(), new Tuple2<>(a._2()._1()+b._2()._1(), a._2()._2()+b._2()._2())));

        float avg = tuple._2()._2()/tuple._2()._1();
        System.out.println(avg);
        return avg;
    }
}
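
If it matters, my understanding is that Spark can't start a job (like the reduce inside getNumber) from within a transformation of another job, since the SparkContext lives only on the driver, and that would explain the hang. Below is a sketch of a rework I'm considering, assuming the keys in file.txt are unique and doc is small enough to collect: broadcast doc as a plain Map so getNumber runs as ordinary Java with no nested job. The names docLocal and docBc and the two-argument getNumber are just placeholders of mine:

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.broadcast.Broadcast;

//In main, after loading doc: collect it and broadcast it to the executors.
//The HashMap copy keeps the broadcast value serializable.
Map<String, Float> docLocal = new HashMap<>(doc.collectAsMap());
Broadcast<Map<String, Float>> docBc = sc.broadcast(docLocal);

sc
        .parallelize(l)
        .flatMap(s -> Arrays.asList(s.split("\\s+")))
        .mapToPair(s -> new Tuple2<>(s, getNumber(line, docBc.value())))
        .foreach(s -> System.out.println(s._1()));

//Reworked getNumber: same average as the join+reduce, but computed locally,
//so only the driver ever touches sc.
public static float getNumber(String text, Map<String, Float> doc) {
    float sum = 0f;
    int count = 0;
    for (String word : text.split("\\s+")) {
        Float value = doc.get(word);  //words not present in doc are skipped, like the join
        if (value != null) {
            sum += value;
            count += 1;
        }
    }
    float avg = count == 0 ? 0f : sum / count;
    System.out.println(avg);
    return avg;
}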

0 Answers:

No answers