Getting a Hadoop OutputFormat RuntimeException when running an Apache Spark Kafka stream

Asked: 2016-07-21 11:50:07

Tags: java scala hadoop apache-spark apache-kafka

I am running a program that uses Apache Spark to fetch data from an Apache Kafka cluster and write it into Hadoop files. My program is as follows:

import java.util.HashMap;
import java.util.Map;

import com.google.common.collect.Lists;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

public final class SparkKafkaConsumer {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("JavaKafkaWordCount");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));
        Map<String, Integer> topicMap = new HashMap<String, Integer>();
        // Split on comma and trim whitespace so topic names match exactly; each topic gets 3 consumer threads
        String[] topics = "Topic1, Topic2, Topic3".split(",\\s*");
        for (String topic : topics) {
            topicMap.put(topic, 3);
        }
        // Receiver-based stream: connect to ZooKeeper at kafka.test.com:2181 with consumer group "NameConsumer"
        JavaPairReceiverInputDStream<String, String> messages =
                KafkaUtils.createStream(jssc, "kafka.test.com:2181", "NameConsumer", topicMap);
        JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
            public String call(Tuple2<String, String> tuple2) {
                return tuple2._2();
            }
        });
        JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String x) {
                // Split each line on commas into individual words
                return Lists.newArrayList(x.split(","));
            }
        });
        JavaPairDStream<String, Integer> wordCounts = words.mapToPair(
                new PairFunction<String, String, Integer>() {
                    public Tuple2<String, Integer> call(String s) {
                        return new Tuple2<String, Integer>(s, 1);
                    }
                }).reduceByKey(new Function2<Integer, Integer, Integer>() {
                    public Integer call(Integer i1, Integer i2) {
                        return i1 + i2;
                    }
                });
        wordCounts.print();
        // This is the call that triggers the RuntimeException below
        wordCounts.saveAsHadoopFiles("hdfs://localhost:8020/user/spark/stream/", "txt");
        jssc.start();
        jssc.awaitTermination();
    }
}

I am submitting the application with this command: C:\spark-1.6.2-bin-hadoop2.6\bin\spark-submit --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.2 --class "SparkKafkaConsumer" --master local[4] target\simple-project-1.0.jar

I am getting this error: java.lang.RuntimeException: class scala.runtime.Nothing$ not org.apache.hadoop.mapred.OutputFormat at org.apache.hadoop.conf.Configuration.setClass(Configuration.java:2148)

What is causing this error, and how can I fix it?

2 Answers:

Answer 0 (score: 3)

I agree that the error message is not very telling. When you call the two-argument saveAsHadoopFiles, there is nothing from which to infer the output format type parameter, so it falls back to Nothing, and scala.runtime.Nothing$ is then rejected because it is not an org.apache.hadoop.mapred.OutputFormat. It is generally best to specify the format of the data you want to output in any of the saveAsHadoopFiles methods, to protect yourself from this kind of exception.

Here is the prototype of that particular method from the documentation:

saveAsHadoopFiles(java.lang.String prefix, java.lang.String suffix, java.lang.Class<?> keyClass, java.lang.Class<?> valueClass, java.lang.Class<F> outputFormatClass)

In your example, that would correspond to:

wordCounts.saveAsHadoopFiles("hdfs://localhost:8020/user/spark/stream/", "txt", Text.class, IntWritable.class, TextOutputFormat.class);

Based on the format of your wordCounts PairDStream, which is (String, Integer), I chose Text, since the key is of type String, and IntWritable, since the value associated with the key is of type Integer.
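As a rough guide (my own sketch, not an exhaustive mapping), the usual correspondence between the Java types in a pair DStream and the Hadoop Writable classes you would pass here is:

// Java pair type -> Hadoop Writable (all from org.apache.hadoop.io)
// String  -> Text
// Integer -> IntWritable
// Long    -> LongWritable
// Double  -> DoubleWritable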

If you only want basic plain text files, TextOutputFormat will do, but you can look at the subclasses of FileOutputFormat for more output options.
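For example, if you later wanted a binary, splittable format instead of plain text, one such subclass you could swap in is SequenceFileOutputFormat (a minimal sketch under the same key/value types as above; the "seq" suffix is just an illustration):

wordCounts.saveAsHadoopFiles("hdfs://localhost:8020/user/spark/stream/", "seq", Text.class, IntWritable.class, org.apache.hadoop.mapred.SequenceFileOutputFormat.class);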

Since it was also asked: the Text class comes from the org.apache.hadoop.io package, and TextOutputFormat comes from the org.apache.hadoop.mapred package.

Answer 1 (score: 1)

For completeness (@Jonathan gave the correct answer):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.TextOutputFormat;

...
wordCounts.saveAsHadoopFiles("hdfs://localhost:8020/user/spark/stream/", "txt", Text.class, IntWritable.class, TextOutputFormat.class);
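One more usage note: saveAsHadoopFiles generates one output per batch interval, with a name derived from the prefix and suffix as prefix-TIME_IN_MS.suffix, so with the two-second batch duration from the question you should see a new directory such as hdfs://localhost:8020/user/spark/stream/-<TIME_IN_MS>.txt appear in HDFS every two seconds.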