For reference: I solved this by adding Netty 4.1.17 to hadoop/share/hadoop/common.
I keep getting errors about container failures when running Spark on YARN, no matter which jar I try to run (including the examples from https://spark.apache.org/docs/latest/running-on-yarn.html). This is the error I get at the command prompt:
Diagnostics: Exception from container-launch.
Container id: container_1530118456145_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
at org.apache.hadoop.util.Shell.run(Shell.java:482)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
When I look at the logs, I find this error:
Exception in thread "main" java.lang.NoSuchMethodError: io.netty.buffer.PooledByteBufAllocator.metric()Lio/netty/buffer/PooledByteBufAllocatorMetric;
at org.apache.spark.network.util.NettyMemoryMetrics.registerMetrics(NettyMemoryMetrics.java:80)
at org.apache.spark.network.util.NettyMemoryMetrics.<init>(NettyMemoryMetrics.java:76)
at org.apache.spark.network.client.TransportClientFactory.<init>(TransportClientFactory.java:109)
at org.apache.spark.network.TransportContext.createClientFactory(TransportContext.java:99)
at org.apache.spark.rpc.netty.NettyRpcEnv.<init>(NettyRpcEnv.scala:71)
at org.apache.spark.rpc.netty.NettyRpcEnvFactory.create(NettyRpcEnv.scala:461)
at org.apache.spark.rpc.RpcEnv$.create(RpcEnv.scala:57)
at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:530)
at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:347)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply$mcV$sp(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$5.run(ApplicationMaster.scala:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:814)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:259)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:839)
at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:869)
at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
Any idea why this happens? It's running on a pseudo-distributed cluster set up following this tutorial: https://wiki.apache.org/hadoop/Hadoop2OnWindows. Spark runs fine locally, and since the jar ships with Spark, I doubt the problem is in the jar itself. (Either way, I added the Netty dependency inside another jar and still got the same error.)
The only thing set in spark-defaults.conf is spark.yarn.jars, which points to an hdfs directory where I uploaded all of Spark's jars. io.netty.buffer.PooledByteBufAllocator is included in those jars.
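For reference, a minimal sketch of that setup, assuming a /spark/jars directory in HDFS (the directory name is my own choice, not from the question):

    # upload Spark's bundled jars to HDFS once
    hdfs dfs -mkdir -p /spark/jars
    hdfs dfs -put $SPARK_HOME/jars/*.jar /spark/jars/

    # spark-defaults.conf
    spark.yarn.jars hdfs:///spark/jars/*.jar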
Spark 2.3.1, Hadoop 2.7.6
Answer 0 (score: 4)
I had exactly the same problem. Previously I was using Hadoop 2.6.5 with a compatible Spark version and everything worked fine. When I switched to Hadoop 2.7.6, the problem appeared. I don't know what causes it, but I copied the netty 4.1.17.Final jar into the Hadoop library folder and the problem went away.
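Concretely, the workaround looks something like this; the jar name (netty-all-4.1.17.Final.jar, as bundled with Spark 2.3.x) and the target folder are assumptions based on a standard layout:

    # copy Spark's newer Netty into Hadoop's common libs
    cp $SPARK_HOME/jars/netty-all-4.1.17.Final.jar $HADOOP_HOME/share/hadoop/common/
    # restart YARN (ResourceManager/NodeManager) so containers pick it up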
Answer 1 (score: 0)
It looks like you have multiple Netty versions on your classpath.
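A quick way to check for that, assuming standard HADOOP_HOME and SPARK_HOME layouts:

    # list every Netty jar that Hadoop and Spark bring along
    find "$HADOOP_HOME" "$SPARK_HOME" -name "*netty*.jar"

Hadoop 2.7.x typically bundles Netty from the 3.x/4.0.x lines, while Spark 2.3.x is compiled against 4.1.x, where PooledByteBufAllocator.metric() was introduced. If YARN's older copy wins on the container's classpath, you get exactly this NoSuchMethodError.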
Answer 2 (score: 0)
This may be a version mismatch between YARN and Spark. Check that the installed versions are compatible.
I strongly recommend reading more about NoSuchMethodError and other similar exceptions like NoClassDefFoundError and ClassNotFoundException. The reason for this suggestion is that once you start using Spark in different setups, these errors become far more confusing for people without much experience of them. NoSuchMethodError, in particular, means the class loaded at runtime is a different version from the one the code was compiled against, which almost always points to conflicting jars on the classpath.
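For example, you can use javap to check whether a given jar on the node's classpath actually provides the method Spark calls (the jar path below is an illustrative assumption, not from the answer):

    # does this Netty jar expose the metric() method Spark needs?
    javap -cp $HADOOP_HOME/share/hadoop/common/lib/netty-all-4.0.23.Final.jar \
        io.netty.buffer.PooledByteBufAllocator | grep metric
    # no output means loading Netty from this jar fails with NoSuchMethodError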
Of course, caring about these details is a best-practice strategy for any programmer, and definitely for those working on distributed systems like Spark. Well done. ;)