I am starting pyspark with the following command:
./bin/pyspark --master yarn --deploy-mode client --executor-memory 5g
and I get the following error:
15/10/14 17:19:15 INFO spark.SparkContext: SparkContext already stopped.
Traceback (most recent call last):
  File "/opt/spark-1.5.1/python/pyspark/shell.py", line 43, in <module>
    sc = SparkContext(pyFiles=add_files)
  File "/opt/spark-1.5.1/python/pyspark/context.py", line 113, in __init__
    conf, jsc, profiler_cls)
  File "/opt/spark-1.5.1/python/pyspark/context.py", line 178, in _do_init
    self._jvm.PythonAccumulatorParam(host, port))
  File "/opt/spark-1.5.1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 701, in __call__
  File "/opt/spark-1.5.1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.python.PythonAccumulatorParam.
: java.lang.NullPointerException
    at org.apache.spark.api.python.PythonAccumulatorParam.<init>(PythonRDD.scala:825)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:214)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)
For some reason I am also getting this message:
ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
and
WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@192.168.1.112:48644] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
which is probably the reason why my SparkContext stopped.
I am using Spark 1.5.1 with Hadoop 2.7.1 and YARN 2.7.
Does anyone know why the YARN application exits before anything has even happened?
For additional information, here is my yarn-site.xml:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>26624</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>26624</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
And here is my mapred-site.xml:
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1640M</value>
  <description>Heap size for map jobs.</description>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>16384</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx13107M</value>
  <description>Heap size for reduce jobs.</description>
</property>
Answer 0 (score: 2)
I was able to solve this by adding

spark.yarn.am.memory 5g

to the spark-defaults.conf file.
I believe it was a memory-related issue: the default value of this parameter is only 512m.
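For reference, a minimal sketch of what the relevant line in conf/spark-defaults.conf could look like; the 5g figure simply mirrors the executor memory used in the question and should be sized for your own cluster:

spark.yarn.am.memory    5g

If you prefer not to edit the file, the same setting can also be passed at launch time through spark-submit's --conf flag:

./bin/pyspark --master yarn --deploy-mode client --executor-memory 5g --conf spark.yarn.am.memory=5g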
Answer 1 (score: 1)
I had a similar problem. When I looked at the Hadoop GUI on port 8088 and clicked the application link in the ID column for my PySpark job, I saw the following error:
Uncaught exception: org.apache ... InvalidResourceRequestException: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=8, maxVirtualCores=1
If I changed my script to use --executor-cores 1 instead of the --executor-cores 8 it had been using, then it worked. Now I just need to get the admins to change some YARN settings to allow more cores, e.g. yarn.scheduler.maximum-allocation-vcores; see https://stackoverflow.com/a/29789568/215945. A sketch of both changes follows.
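To make the fix concrete, here is a hedged sketch of the adjusted launch command (the question's flags plus the reduced core count) and of the yarn-site.xml property an admin would raise; the value 8 below is only an illustrative limit, not a recommendation:

./bin/pyspark --master yarn --deploy-mode client --executor-memory 5g --executor-cores 1

<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value>
</property>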