Apache Spark

Date: 2017-08-09 10:26:57

Tags: java apache-spark

I am trying to run a Spark job using spark-submit. When I run it from Eclipse, the job runs without any problem. But when I copy the same jar file to a remote machine and run the job there, I get the following error:

    17/08/09 10:19:15 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, ip-10-50-70-180.ec2.internal): java.io.InvalidClassException: org.apache.spark.executor.TaskMetrics; local class incompatible: stream classdesc serialVersionUID = -2231953621568687904, local class serialVersionUID = -6966587383730940799
        at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1829)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1986)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:253)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
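
For context, an `InvalidClassException` of this kind means the serialized stream carries a class descriptor whose `serialVersionUID` differs from the UID computed for the same class on the local classpath, i.e. the driver and the executor loaded different builds of the class. The following minimal sketch (plain Java, not Spark code; class names are made up for illustration) reproduces the same failure mode by corrupting the UID recorded in a serialized stream:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for any serializable class shipped between JVMs.
class Payload implements Serializable {
    private static final long serialVersionUID = 1L;
    int value = 7;
}

public class UidMismatchDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Payload());
        }
        byte[] bytes = bos.toByteArray();

        // Serialization stream layout: magic(2) version(2) TC_OBJECT(1)
        // TC_CLASSDESC(1) nameLength(2) className(n) serialVersionUID(8) ...
        int nameLen = ((bytes[6] & 0xFF) << 8) | (bytes[7] & 0xFF);
        // Flip a bit in the recorded UID, simulating a different class build.
        bytes[8 + nameLen] ^= 0x01;

        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            ois.readObject();
        } catch (InvalidClassException e) {
            // Same "local class incompatible" failure as in the Spark log.
            System.out.println("caught: " + e.getClass().getSimpleName());
        }
    }
}
```

In the Spark case the "different build" is not simulated: the jar used at compile time in Eclipse and the jars installed on the remote cluster are genuinely different versions.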

I have looked at several related links on SO and tried the following:

  1. I changed the version of the Spark jars from Scala 2.10, which I was using earlier, to 2.11. The dependencies in the pom now look like this:

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.0.2</version>
        <scope>provided</scope>
    </dependency>
    
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.0.2</version>
        <scope>provided</scope>
    </dependency>
    
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-yarn_2.10 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-yarn_2.11</artifactId>
        <version>2.0.2</version>
        <scope>provided</scope>
    </dependency>
    
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_2.11</artifactId>
        <version>2.0.2</version>
        <scope>provided</scope>
    </dependency>
    
  2. I also checked that version 2.11-2.0.2 of the jars is present in Spark's jars folder, as suggested in some links.

  3. I also added the dependencies suggested in a few links.

  4. None of the above helped. Since I am stuck on this issue, any help would be greatly appreciated. Thanks in advance. Cheers.

Edit 1: Here is the spark-submit command:

    spark-submit --deploy-mode cluster --class "com.abc.ingestion.GenericDeviceIngestionSpark" /home/hadoop/sathiya/spark_driven_ingestion-0.0.1-SNAPSHOT-jar-with-dependencies.jar "s3n://input-bucket/input-file.csv" "SIT" "accessToken" "UNKNOWN" "bundleId" "[{"idType":"D_ID","idOrder":1,"isPrimary":true},{"idType":"HASH_DEVICE_ID","idOrder":2,"isPrimary":false}]"
    

Edit 2:

I also tried adding the field serialVersionUID = -2231953621568687904L; to the relevant class, but it did not solve the problem.
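
Declaring a serialVersionUID in your own classes likely cannot fix this particular failure, because the incompatible class in the log is Spark's own org.apache.spark.executor.TaskMetrics: what matters is the UID computed from the Spark jar on each node's classpath. A quick way to compare what UID each node actually assigns is ObjectStreamClass.lookup. A minimal sketch (the Example class here is a hypothetical stand-in; on a real node you would look up the class that failed, with that node's Spark jars on the classpath):

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

// Hypothetical stand-in for the class under suspicion, e.g.
// org.apache.spark.executor.TaskMetrics on a real cluster.
class Example implements Serializable {
    private static final long serialVersionUID = -2231953621568687904L;
}

public class UidCheck {
    public static void main(String[] args) {
        // Prints the UID the local classpath assigns to the class; run this
        // on the driver and on an executor node and compare the two values.
        ObjectStreamClass desc = ObjectStreamClass.lookup(Example.class);
        System.out.println(desc.getSerialVersionUID());
    }
}
```

If the two printed values differ, the nodes are loading different builds of the class, which is exactly the situation the exception reports.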

2 Answers:

Answer 0 (score: 1)

I finally fixed the issue. I commented out all the dependencies and uncommented them one at a time. First I uncommented the spark_core dependency, and the problem was gone. Then I uncommented another dependency in my project, which brought the problem back. On investigation I found that this second dependency in turn depended on a different version (2.10) of spark_core, which was causing the issue. I added an exclusion to that dependency as below:

    <dependency>
        <groupId>com.data.utils</groupId>
        <artifactId>data-utils</artifactId>
        <version>1.0-SNAPSHOT</version>
        <exclusions>
            <exclusion>
                <groupId>javax.ws.rs</groupId>
                <artifactId>javax.ws.rs-api</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-core_2.10</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

This resolved the issue. Posting it here in case anyone else gets stuck on the same problem. Thanks to @JosePraveen for the valuable comment, which gave me the hint.

Answer 1 (score: 0)

This issue appears when the Spark master and one or more Spark slaves are using slightly different versions of the jars.

I ran into it because I had copied the jars only to the master node. After copying the jars to all the slave nodes, my application started working correctly.