I must include log4j, but it causes errors in the Apache Spark shell. How do I avoid the errors?

Date: 2015-03-31 17:41:39

Tags: scala log4j apache-spark type-mismatch

Because of the complexity of the jars I have to include in my Spark code, I would like to ask for help finding a way around this problem without removing the log4j import.

The simple code is as follows:

    // Load the extra jar into the spark-shell classpath
    :cp symjar/log4j-1.2.17.jar

    import org.apache.spark.rdd._

    // Configure Hadoop to read from S3 with the given credentials
    val hadoopConf = sc.hadoopConfiguration
    hadoopConf.set("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
    hadoopConf.set("fs.s3n.awsAccessKeyId", "AKEY")
    hadoopConf.set("fs.s3n.awsSecretAccessKey", "SKEY")

    val numOfProcessors = 2
    val filePath = "s3n://SOMEFILE.csv"
    var rdd = sc.textFile(filePath, numOfProcessors)

    // Identity function, just to demonstrate the type mismatch
    def doStuff(rdd: RDD[String]): RDD[String] = { rdd }
    doStuff(rdd)

First, I get this error:

    error: error while loading StorageLevel, class file '/root/spark/lib/spark-assembly-1.3.0-hadoop1.0.4.jar(org/apache/spark/storage/StorageLevel.class)' has location not matching its contents: contains class StorageLevel
    error: error while loading Partitioner, class file '/root/spark/lib/spark-assembly-1.3.0-hadoop1.0.4.jar(org/apache/spark/Partitioner.class)' has location not matching its contents: contains class Partitioner
    error: error while loading BoundedDouble, class file '/root/spark/lib/spark-assembly-1.3.0-hadoop1.0.4.jar(org/apache/spark/partial/BoundedDouble.class)' has location not matching its contents: contains class BoundedDouble
    error: error while loading CompressionCodec, class file '/root/spark/lib/spark-assembly-1.3.0-hadoop1.0.4.jar(org/apache/hadoop/io/compress/CompressionCodec.class)' has location not matching its contents: contains class CompressionCodec

Then I run this line again, and the errors disappear:

    var rdd = sc.textFile(filePath, numOfProcessors)

However, the final result of the code is:

    error: type mismatch;
     found   : org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD[String]
     required: org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD[String]
                  doStuff(rdd)
                          ^

How can I avoid these errors without removing the log4j import? (This matters, because the jars I use depend heavily on log4j and conflict with the log4j bundled with spark-shell.)

2 Answers:

Answer 0 (score: 1)

The answer was not just to use the :cp command, but also to add the jar to export SPARK_SUBMIT_CLASSPATH="./the/path/to/a.jar" in .../spark/conf/spark-env.sh.
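As a minimal sketch of what that line might look like (the jar path is a placeholder, and prepending the existing value is my assumption, not part of the original answer):

    # .../spark/conf/spark-env.sh
    # Append the conflicting jar to the classpath used by spark-shell/spark-submit;
    # "./the/path/to/a.jar" is a placeholder for the real jar location.
    export SPARK_SUBMIT_CLASSPATH="$SPARK_SUBMIT_CLASSPATH:./the/path/to/a.jar"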

Answer 1 (score: 0)

Another answer, if you use an IDE such as Scala IDE for Eclipse with Maven, is to exclude the jar through Maven. For example, I wanted to exclude commons-codec (and then include a different version as a JAR in the project), so I added this change to pom.xml:

    ...............
    <dependencies>
      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>1.3.0</version>
        <exclusions>
          <!-- An exclusion is matched by groupId/artifactId only; no version element is allowed -->
          <exclusion>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
    </dependencies>
    ...............
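As an optional check (not part of the original answer), you can confirm the exclusion took effect by inspecting the resolved dependency tree:

    # Print the resolved dependency tree, filtered to commons-codec; after the
    # exclusion it should no longer appear as a transitive dependency of spark-core
    mvn dependency:tree -Dincludes=commons-codec:commons-codec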