Spark: YARN throws NoSuchMethodError on NettyMemoryMetrics

Asked: 2018-05-15 22:42:19

Tags: apache-spark hadoop yarn

To get Spark (spark-2.3.0-bin-without-hadoop) working with YARN on top of HDFS, I downgraded Hadoop to hadoop-2.7.6 to resolve a dependency problem.

So far, HDFS and YARN have been working without issues.

When I submit a Spark jar, it crashes and I get the following stack trace:

Exception in thread "main" java.lang.NoSuchMethodError: io.netty.buffer.PooledByteBufAllocator.metric()Lio/netty/buffer/PooledByteBufAllocatorMetric;
    at org.apache.spark.network.util.NettyMemoryMetrics.registerMetrics(NettyMemoryMetrics.java:80)
    at org.apache.spark.network.util.NettyMemoryMetrics.<init>(NettyMemoryMetrics.java:76)
    at org.apache.spark.network.client.TransportClientFactory.<init>(TransportClientFactory.java:109)
    at org.apache.spark.network.TransportContext.createClientFactory(TransportContext.java:99)
    at org.apache.spark.rpc.netty.NettyRpcEnv.<init>(NettyRpcEnv.scala:71)
    at org.apache.spark.rpc.netty.NettyRpcEnvFactory.create(NettyRpcEnv.scala:461)
    at org.apache.spark.rpc.RpcEnv$.create(RpcEnv.scala:57)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:515)
    at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:347)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply$mcV$sp(ApplicationMaster.scala:260)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply(ApplicationMaster.scala:260)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply(ApplicationMaster.scala:260)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$5.run(ApplicationMaster.scala:800)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
    at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:799)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:259)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:824)
    at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:854)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)

This happens both when I run my own programs and when I run the examples shipped with Spark. I am fairly sure I set the classpath correctly in spark-env.sh:

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath):/usr/local/spark/jars/*

The resulting classpath looks like this:

SPARK_DIST_CLASSPATH='/usr/local/hadoop/etc/hadoop:/usr/local/hadoop//share/hadoop/common/lib/*:/usr/local/hadoop//share/hadoop/common/*:/usr/local/hadoop//share/hadoop/hdfs:/usr/local/hadoop//share/hadoop/hdfs/lib/*:/usr/local/hadoop//share/hadoop/hdfs/*:/usr/local/hadoop-2.7.6/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.7.6/share/hadoop/yarn/*:/usr/local/hadoop//share/hadoop/mapreduce/lib/*:/usr/local/hadoop//share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/usr/local/spark/jars/*'

I don't know how to fix this. Aside from a configuration problem, my best guess is that some other library is incompatible. In that case, can anyone point me to a Spark/Hadoop combination that actually has no conflicts?

find . -name netty*
./spark-2.3.0-bin-without-hadoop/jars/netty-3.9.9.Final.jar
./spark-2.3.0-bin-without-hadoop/jars/netty-all-4.1.17.Final.jar
./hadoop-2.7.6/share/hadoop/yarn/lib/netty-3.6.2.Final.jar
./hadoop-2.7.6/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib/netty-3.6.2.Final.jar
./hadoop-2.7.6/share/hadoop/tools/lib/netty-3.6.2.Final.jar
./hadoop-2.7.6/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar
./hadoop-2.7.6/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
./hadoop-2.7.6/share/hadoop/common/lib/netty-3.6.2.Final.jar
./hadoop-2.7.6/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/netty-3.6.2.Final.jar
./hadoop-2.7.6/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/netty-all-4.0.23.Final.jar
./hadoop-2.7.6/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar
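For context: `PooledByteBufAllocator.metric()` only exists in Netty 4.1.x, so the `NoSuchMethodError` suggests that Hadoop's older `netty-all-4.0.23.Final.jar` is winning on the classpath over Spark's `netty-all-4.1.17.Final.jar`. A small sketch (hypothetical helper; the paths are taken from the `find` output above) that groups the jars by artifact name to make the version clash visible:

```python
import os
import re
from collections import defaultdict

def netty_versions(jar_paths):
    """Group netty jar paths by artifact name ('netty' vs 'netty-all') and
    collect the distinct versions found for each artifact."""
    found = defaultdict(set)
    for path in jar_paths:
        name = os.path.basename(path)
        m = re.match(r"(netty(?:-all)?)-(\d[\w.]*)\.jar$", name)
        if m:
            found[m.group(1)].add(m.group(2))
    return {artifact: sorted(versions) for artifact, versions in found.items()}

# A representative subset of the jars listed by `find . -name netty*` above.
jars = [
    "./spark-2.3.0-bin-without-hadoop/jars/netty-3.9.9.Final.jar",
    "./spark-2.3.0-bin-without-hadoop/jars/netty-all-4.1.17.Final.jar",
    "./hadoop-2.7.6/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar",
    "./hadoop-2.7.6/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar",
]

# Any artifact mapped to more than one version is a potential conflict.
print(netty_versions(jars))
```

Here both `netty` (3.x) and `netty-all` (4.x) appear in two versions each; only the `netty-all` 4.0 vs. 4.1 clash matters for `metric()`, since that method was added in 4.1.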

1 Answer:

Answer 0 (score: 0)

I have solved the problem, and the answer is simple: the spark.yarn.jars property must not be set to /usr/local/spark/jar but to /usr/local/spark/jar/*, and then everything works.
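As a minimal sketch of that fix (the path is taken from this answer; where you set it depends on your setup, e.g. conf/spark-defaults.conf or a --conf flag on spark-submit), note the trailing /* glob, which makes the property match the individual jar files rather than the directory itself:

```
# conf/spark-defaults.conf
# Wrong: points at the directory, so no jars are picked up
# spark.yarn.jars  /usr/local/spark/jar
# Right: glob over the jar files
spark.yarn.jars  /usr/local/spark/jar/*
```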