spark2 + yarn - NullPointerException while preparing the AM container

Date: 2016-08-15 10:27:25

Tags: apache-spark pyspark yarn hadoop2

I am trying to run:

pyspark --master yarn
  • Spark version: 2.0.0
  • Hadoop version: 2.7.2
  • The Hadoop YARN web UI starts up successfully

This is what I get:

16/08/15 10:00:12 DEBUG Client: Using the default MR application classpath: $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
16/08/15 10:00:12 INFO Client: Preparing resources for our AM container
16/08/15 10:00:12 DEBUG Client: 
16/08/15 10:00:12 DEBUG DFSClient: /user/mispp/.sparkStaging/application_1471254869164_0006: masked=rwxr-xr-x
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp sending #8
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp got value #8
16/08/15 10:00:12 DEBUG ProtobufRpcEngine: Call: mkdirs took 14ms
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp sending #9
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp got value #9
16/08/15 10:00:12 DEBUG ProtobufRpcEngine: Call: setPermission took 10ms
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp sending #10
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp got value #10
16/08/15 10:00:12 DEBUG ProtobufRpcEngine: Call: getFileInfo took 2ms
16/08/15 10:00:12 INFO Client: Deleting staging directory hdfs://sm/user/mispp/.sparkStaging/application_1471254869164_0006
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp sending #11
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp got value #11
16/08/15 10:00:12 DEBUG ProtobufRpcEngine: Call: delete took 14ms
16/08/15 10:00:12 ERROR SparkContext: Error initializing SparkContext.
java.lang.NullPointerException
        at scala.collection.mutable.ArrayOps$ofRef$.newBuilder$extension(ArrayOps.scala:190)
        at scala.collection.mutable.ArrayOps$ofRef.newBuilder(ArrayOps.scala:186)
        at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:246)
        at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
        at scala.collection.mutable.ArrayOps$ofRef.filter(ArrayOps.scala:186)
        at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6.apply(Client.scala:484)
        at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6.apply(Client.scala:480)
        at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:74)
        at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:480)
        at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:834)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:167)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:240)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:236)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.GatewayConnection.run(GatewayConnection.java:211)
        at java.lang.Thread.run(Thread.java:745)
16/08/15 10:00:12 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.server.Server@69e507eb
16/08/15 10:00:12 DEBUG Server: Graceful shutdown org.spark_project.jetty.server.Server@69e507eb by 

yarn-site.xml: (I found the last property online, so I just tried adding it)

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>sm:8025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>sm:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>sm:8050</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>/home/mispp/hadoop-2.7.2/share/hadoop/yarn</value>
    </property>
</configuration>

In .bashrc:

export HADOOP_PREFIX=/home/mispp/hadoop-2.7.2
export PATH=$PATH:$HADOOP_PREFIX/bin
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export YARN_CONF_DIR=$HADOOP_PREFIX/etc/hadoop

Any idea why this happens? The setup is 3 LXD containers (master + two workers) on a server with 16 GB of RAM.

2 answers:

Answer 0 (score: 3)

Given the location of the error in the Spark 2.0.0 code:

https://github.com/apache/spark/blob/v2.0.0/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L480

I suspect the error is caused by a misconfigured spark.yarn.jars. Based on the documentation for that setting, I would double-check that its value is correct in your setup.
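As a minimal sketch of what a working value could look like (the HDFS path below is hypothetical — substitute wherever the Spark jars actually live on your cluster; only the sm:8020 namenode address is taken from the logs above), the property can be set in conf/spark-defaults.conf:

```
# spark-defaults.conf — spark.yarn.jars must point at the Spark jars on HDFS.
# The /user/spark/share/lib path is an example, not the OP's actual layout.
spark.yarn.jars  hdfs://sm:8020/user/spark/share/lib/*.jar
```

If the property is unset or points at a nonexistent location, prepareLocalResources can end up filtering a null jar list, which matches the stack trace above.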

Answer 1 (score: 0)

I just upvoted @tinfoiled's answer, but I want to add a note here about the syntax of the spark.yarn.jars property (note the trailing 's'), because it took me a long time to figure out.

The correct syntax (which the OP already knew) is:

spark.yarn.jars=hdfs://xxx:9000/user/spark/share/lib/*.jar

I initially left out the *.jar part, which resulted in an "Unable to load ApplicationMaster" error. I tried various combinations, but nothing worked. In fact, I posted a question on SO about this same issue: Property spark.yarn.jars - how to deal with it?

I wasn't even sure that what I was doing was correct, but the OP's question and @tinfoiled's answer gave me confidence, and I was finally able to make use of this property.
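For completeness, a sketch of how the jars might get onto HDFS in the first place (the target path is an example, not taken from the OP's setup; in Spark 2.0 the distribution's jars sit under $SPARK_HOME/jars):

```
# Upload the jars shipped with the Spark distribution to HDFS,
# then point spark.yarn.jars at them with a trailing *.jar glob.
hdfs dfs -mkdir -p /user/spark/share/lib
hdfs dfs -put "$SPARK_HOME"/jars/*.jar /user/spark/share/lib/
```

After that, the spark.yarn.jars value shown above should resolve to actual files, and YARN can localize them for the AM container.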