org.apache.spark.SparkException: Task not serializable

Date: 2015-03-27 08:00:56

Tags: scala apache-spark apache-kafka

Here is the code sample:

JavaPairDStream<String, String> messages = KafkaUtils.createStream(javaStreamingContext, zkQuorum, group, topicMap);
messages.print();
JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
    }
});

I get the following error:

ERROR:
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1435)
    at org.apache.spark.streaming.dstream.DStream.map(DStream.scala:438)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$class.map(JavaDStreamLike.scala:140)
    at org.apache.spark.streaming.api.java.JavaPairDStream.map(JavaPairDStream.scala:46)

2 Answers:

Answer 0 (score: 15)

Since you define your map function using an anonymous inner class, the containing class must also be Serializable. Define your map function as a separate class, or make it a static inner class. From the Java documentation (http://docs.oracle.com/javase/8/docs/platform/serialization/spec/serial-arch.html):

Note - Serialization of inner classes (i.e., nested classes that are not static member classes), including local and anonymous classes, is strongly discouraged for several reasons. Because inner classes declared in non-static contexts contain implicit non-transient references to enclosing class instances, serializing such an inner class instance will result in the serialization of its associated outer class instance as well.
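A minimal, self-contained sketch of the mechanism described above (this is hypothetical demo code, not the asker's Spark program, and the class names are invented for illustration): an anonymous inner class created in an instance method keeps a hidden reference to its enclosing instance, so serializing it fails if the outer class is not Serializable, while a static nested class serializes fine.

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // A Serializable functional interface, analogous to Spark's Function.
    interface SerFunc extends Serializable {
        String call(String s);
    }

    // The enclosing class is NOT Serializable, mirroring a typical driver class.
    static class Outer {
        SerFunc anonymous() {
            // Anonymous inner class: carries an implicit reference to this Outer instance.
            return new SerFunc() {
                @Override
                public String call(String s) { return s.toUpperCase(); }
            };
        }
    }

    // Static nested class: no reference to any enclosing instance.
    static class StaticFunc implements SerFunc {
        @Override
        public String call(String s) { return s.toUpperCase(); }
    }

    // Returns true if the object can be written by Java serialization.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (Exception e) {
            // NotSerializableException: the hidden Outer reference cannot be serialized.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(serializes(new Outer().anonymous())); // false
        System.out.println(serializes(new StaticFunc()));        // true
    }
}
```

This is the same failure mode the ClosureCleaner reports as "Task not serializable": the anonymous map function pulls in the non-serializable enclosing class.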

Answer 1 (score: 4)

Just to give a code example:

JavaDStream<String> lines = messages.map(mapFunc);

Declare the inner class as a static variable:

static Function<Tuple2<String, String>, String> mapFunc = new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
    }
};