Strange behavior of Spark serialization

Date: 2015-05-27 00:58:19

Tags: java serialization lambda apache-spark

I am running into a problem with Spark's JavaPairRDD.repartitionAndSortWithinPartitions method. I have tried everything any reasonable person would think of. I finally wrote a small snippet simple enough to visualize the problem:

import java.io.Serializable;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

import org.apache.spark.HashPartitioner;
import org.apache.spark.Partitioner;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class Main {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("test").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        final List<String> list = Arrays.asList("I", "am", "totally", "baffled");
        final HashPartitioner partitioner = new HashPartitioner(2);

        doSomething(sc, list, partitioner, String.CASE_INSENSITIVE_ORDER);
        doSomething(sc, list, partitioner, Main::compareString);
        doSomething(sc, list, partitioner, new StringComparator());
        doSomething(sc, list, partitioner, new SerializableStringComparator());
        doSomething(sc, list, partitioner, (s1,s2) -> Integer.compare(s1.charAt(0),s2.charAt(0)));
    }

    // Runs repartitionAndSortWithinPartitions with the given comparator and
    // reports whether the job could even be submitted (i.e. whether the task
    // closure could be serialized).
    public static <T> void doSomething(JavaSparkContext sc, List<T> list, Partitioner partitioner, Comparator<T> comparator) {
        try {
            sc.parallelize(list)
                .mapToPair(elt -> new Tuple2<>(elt,elt))
                .repartitionAndSortWithinPartitions(partitioner,comparator)
                .count();
            System.out.println("success");
        } catch (Exception e) {
            System.out.println("failure");
        }
    }

    public static int compareString(String s1, String s2) {
        return Integer.compare(s1.charAt(0),s2.charAt(0));
    }

    public static class StringComparator implements Comparator<String> {
        @Override
        public int compare(String s1, String s2) {
            return Integer.compare(s1.charAt(0),s2.charAt(0));
        }
    }

    public static class SerializableStringComparator implements Comparator<String>, Serializable {
        @Override
        public int compare(String s1, String s2) {
            return Integer.compare(s1.charAt(0),s2.charAt(0));
        }
    }
}

Apart from the Spark log output, it prints:

success
failure
failure 
success
failure

The exception thrown on failure is always the same:

org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:483)
org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:240)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:150)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:158)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:58)
org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:39)
org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:835)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:781)
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:780)
scala.collection.immutable.List.foreach(List.scala:318)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:780)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:781)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:780)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:780)
    at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Now I have my fix: declaring my custom comparator as Serializable (I checked in the standard library source, and the case-insensitive String comparator is indeed declared serializable, which makes sense).

But why? Why am I not supposed to use lambdas here? I would have expected the second and the last calls to work properly, since I only use static methods and classes.

What I find especially strange is that I had registered with Kryo the classes I was trying to serialize, and the classes I had not registered fall back on their default serializers (Kryo uses FieldSerializer as the default for most classes, which simply serializes their fields). Yet the Kryo registrator was never executed before the task serialization failed.
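For reference, the registration followed the standard Spark/Kryo pattern, roughly like this (a minimal sketch, not my exact code; the registrator class name is a placeholder):

import com.esotericsoftware.kryo.Kryo;
import org.apache.spark.serializer.KryoRegistrator;

// Hypothetical registrator: registers the comparator classes from the snippet above.
public class MyKryoRegistrator implements KryoRegistrator {
    @Override
    public void registerClasses(Kryo kryo) {
        kryo.register(Main.StringComparator.class);
        kryo.register(Main.SerializableStringComparator.class);
    }
}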

1 Answer:

Answer 0 (score: 1)

My question did not clearly state why I was so baffled (about the Kryo registration code not being executed), so I have edited it to reflect that.

I found out that Spark uses two different serializers:

  • one used to serialize the tasks sent from the master to the workers, called closureSerializer in the code (see SparkEnv.scala). As of the date of this post, it can only be set to JavaSerializer.

  • one used to serialize the actual data being processed, called serializer in SparkEnv. This one can be set to either JavaSerializer or KryoSerializer (a minimal configuration sketch follows this list).
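As an illustration of the second point, the data serializer and the registrator are selected through the usual Spark configuration keys; a minimal sketch, replacing the SparkConf construction in the question's main() (the registrator class is the hypothetical one sketched above, referenced by its fully qualified name in a real setup):

// Only affects the data serializer held by SparkEnv, not the closureSerializer.
SparkConf conf = new SparkConf()
        .setAppName("test")
        .setMaster("local")
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.kryo.registrator", "MyKryoRegistrator");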

将类注册到Kryo并不能确保它始终与Kryo一起序列化,这取决于您如何使用它。例如,DAGScheduler仅使用closureSerializer,因此无论您如何配置序列化,如果DAGScheduler在某些时候操作对象,您将始终需要使对象具有Java可序列化(除非Spark在以后的版本中为闭包启用了Kryo序列化。)