Spark 2.1.1 in Cloudera VM: exception "YARN application has already ended"

Time: 2018-10-14 21:13:27

Tags: java scala apache-spark apache-spark-sql

I have upgraded Spark from version 1.6 to 2.1.1, and I have also updated the Java and Scala versions. But now spark-shell fails to start, showing the error below. Please help me resolve this issue and post the steps to correct the error.

 18/10/16 11:59:37 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/cloudera/.sparkStaging/application_1539716150434_0001/__spark_conf__.zip could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1720)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3440)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:686)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)

at org.apache.hadoop.ipc.Client.call(Client.java:1469)
at org.apache.hadoop.ipc.Client.call(Client.java:1400)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)

18/10/16 11:59:37 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/cloudera/.sparkStaging/application_1539716150434_0001/__spark_conf__.zip could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1720)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3440)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:686)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)

at org.apache.hadoop.ipc.Client.call(Client.java:1469)
at org.apache.hadoop.ipc.Client.call(Client.java:1400)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)

18/10/16 11:59:37 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
18/10/16 11:59:37 ERROR util.Utils: Uncaught exception in thread main
java.lang.NullPointerException
    at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:152)
    at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1407)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:89)
    at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1849)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1848)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:587)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
    at org.apache.spark.repl.Main$.createSparkSession(Main.scala:96)
    at $line3.$read$$iw$$iw.<init>(<console>:15)
    at $line3.$read$$iw.<init>(<console>:42)
    at $line3.$read.<init>(<console>:44)
    at $line3.$read$.<init>(<console>:48)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.$print$lzycompute(<console>:7)
    at $line3.$eval$.$print(<console>:6)
    at $line3.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
    at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
    at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
    at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37)
    at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:105)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
    at org.apache.spark.repl.Main$.doMain(Main.scala:69)
    at org.apache.spark.repl.Main$.main(Main.scala:52)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
org.apache.hadoop.ipc.RemoteException: File /user/cloudera/.sparkStaging/application_1539716150434_0001/__spark_conf__.zip could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1720)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3440)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:686)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)

    at org.apache.hadoop.ipc.Client.call(Client.java:1469)
    at org.apache.hadoop.ipc.Client.call(Client.java:1400)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
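The root cause visible in the traces above is the HDFS message "There are 0 datanode(s) running": the YARN staging upload fails before the SparkContext can initialize, which is why `spark` is later undefined in the REPL. A minimal diagnostic sketch, assuming the standard Cloudera QuickStart VM service names (adjust for your install):

```shell
# Report cluster health; "Live datanodes (0)" would confirm the error above.
sudo -u hdfs hdfs dfsadmin -report

# Restart the DataNode (and the NameNode, if it is also down).
sudo service hadoop-hdfs-datanode restart
sudo service hadoop-hdfs-namenode restart

# Verify at least one DataNode has registered before retrying spark-shell.
sudo -u hdfs hdfs dfsadmin -report | grep "Live datanodes"
```

If the DataNode repeatedly fails to start after an upgrade, its logs (typically under /var/log/hadoop-hdfs/) may show a namespace/cluster ID mismatch with the NameNode, which is a separate issue to resolve before spark-shell can work.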

0 Answers:

No answers