UnsatisfiedLinkError when launching a Spark program from Java code

Asked: 2016-07-15 01:28:59

Tags: apache-spark hadoop2

I am using SparkLauncher to launch my Spark application from Java. The code looks like:

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.launcher.SparkLauncher;

        // Environment variables handed to the spark-submit child process.
        Map<String, String> envMap = new HashMap<>();
        envMap.put("HADOOP_CONF_DIR", "/etc/hadoop/conf");
        envMap.put("JAVA_LIBRARY_PATH", "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop/lib/native");
        envMap.put("LD_LIBRARY_PATH", "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop/lib/native");
        envMap.put("SPARK_HOME", "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark");
        envMap.put("DEFAULT_HADOOP_HOME", "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop");
        envMap.put("SPARK_DIST_CLASSPATH", "all jars under /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/jars");
        envMap.put("HADOOP_HOME", "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop");

        // launch() starts spark-submit and returns the child Process,
        // not a SparkLauncher, so the result must be typed accordingly.
        Process spark = new SparkLauncher(envMap)
                .setAppResource("myapp.jar")
                .setSparkHome("/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/")
                .setMainClass("spark.HelloSpark")
                .setMaster("yarn-cluster")
                .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
                .setConf("spark.driver.userClassPathFirst", "true")
                .setConf("spark.executor.userClassPathFirst", "true")
                .launch();

Every time, I get:


User class threw exception: java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I

1 Answer:

Answer 0 (score: 0)

It looks like your jar bundles Spark/Hadoop libraries that conflict with the ones on the cluster. With spark.driver.userClassPathFirst and spark.executor.userClassPathFirst set to true, the snappy-java classes from your jar load ahead of the cluster's, and they then fail to bind against the native Snappy library shipped with CDH, which is the kind of mismatch that typically surfaces as an UnsatisfiedLinkError. Check that your Spark and Hadoop dependencies are marked as provided.
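In practice that means giving the Spark and Hadoop artifacts "provided" scope, so they are compiled against but not bundled into myapp.jar, letting the cluster's own versions (and their native Snappy binding) be used at runtime. A sketch of the idea for a Maven build; the artifactId and version values below are illustrative guesses at the CDH 5.5 coordinates and should be taken from your own build:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.5.0-cdh5.5.1</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.0-cdh5.5.1</version>
  <scope>provided</scope>
</dependency>

With the conflicting classes out of the application jar, the userClassPathFirst settings can usually be dropped as well.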