I am running a Spark application with two input files and a jar file, all pulled from an Amazon S3 bucket. I create the cluster with the AWS CLI, using instance type m5.12xlarge, an instance count of 11, and the following Spark properties:
--deploy-mode cluster
--num-executors 10
--executor-cores 45
--executor-memory 155g
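As a quick sanity check on these numbers: YARN sizes each container as the requested memory plus Spark's default memory overhead, max(384 MB, 10% of the request). The sketch below checks whether one such executor fits on an m5.12xlarge node; the per-node YARN allocation figure is an assumed EMR default, not something stated in the question.

```python
# Rough feasibility check for the requested executors (a sketch; the
# per-node YARN memory figure below is an assumed EMR default).

def container_size_mb(request_mb: int) -> int:
    """Requested memory plus Spark's default memoryOverhead:
    max(384 MB, 10% of the request)."""
    return request_mb + max(384, request_mb // 10)

executor_request_mb = 155 * 1024                   # --executor-memory 155g
executor_container = container_size_mb(executor_request_mb)

# Assumed yarn.nodemanager.resource.memory-mb for m5.12xlarge on EMR.
node_yarn_mb = 188_416

print(executor_container)                  # 174592 MB per executor container
print(executor_container <= node_yarn_mb)  # True: one executor fits per node
```

So the executor sizing itself is tight but plausible for 10 executors on 10 worker nodes.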
My Spark job runs for a while, then fails and restarts automatically; it runs for a while again and then produces this diagnostic (pulled from the logs):
diagnostics: Application application_1557259242251_0001 failed 2 times due to AM Container for appattempt_1557259242251_0001_000002 exited with exitCode: -104
Failing this attempt.Diagnostics: Container [pid=11779,containerID=container_1557259242251_0001_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 3.5 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1557259242251_0001_02_000001 :
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Exception in thread "main" org.apache.spark.SparkException: Application application_1557259242251_0001 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1165)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1520)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/05/07 20:03:35 INFO ShutdownHookManager: Shutdown hook called
19/05/07 20:03:35 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-3deea823-45e5-4a11-a5ff-833b01e6ae79
19/05/07 20:03:35 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-d6c3f8b2-34c6-422b-b946-ad03b1ee77d6
Command exiting with ret '1'
I can't figure out what the problem is.
I have tried changing the instance type and lowering the executor memory and executor cores, but the same problem keeps occurring. Sometimes the very same configuration finishes successfully, terminates the cluster, and produces results, but much of the time it fails with these errors.
Can anyone help?
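One detail worth noticing in the diagnostic: the killed container is the AM container, and in cluster deploy mode the driver runs inside the AM. The 1.4 GB limit in the log is exactly what Spark's default driver memory of 1g yields once the minimum 384 MB overhead is added, which would explain why tuning only the executor settings never helps. A sketch of that arithmetic (the overhead formula is Spark's documented default; the conclusion about this particular job is an inference, not confirmed by the question):

```python
# Why the AM container limit would be 1.4 GB: Spark's default driver
# memory is 1g, and the default driver memoryOverhead is
# max(384 MB, 10% of the request).

def container_size_mb(request_mb: int) -> int:
    return request_mb + max(384, request_mb // 10)

default_driver_mb = 1024                      # spark.driver.memory default
print(container_size_mb(default_driver_mb))   # 1408 MB, i.e. ~1.4 GB,
                                              # matching the killed container
```

If that is the cause, raising the driver allocation in cluster mode (e.g. `--driver-memory 10g`, an illustrative value, not a verified fix) would enlarge the AM container accordingly.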
Answer 0 (score: 0)
If you are passing multiple input files to a Spark job, package them into a single zip file and submit that.
Step 1: create the zip file
zip abc.zip file1.py file2.py
Step 2: submit the job with the zip file
spark2-submit --master yarn --deploy-mode cluster --py-files /home/abc.zip /home/main_program_file.py
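The zip step above can also be scripted. A minimal sketch using Python's standard zipfile module (the file and archive names follow the answer's example and are otherwise arbitrary):

```python
import os
import zipfile

def bundle_py_files(zip_path, py_files):
    """Write the given .py files into one zip suitable for --py-files."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in py_files:
            # Store each file at the archive root, as --py-files expects.
            zf.write(path, arcname=os.path.basename(path))

# bundle_py_files("abc.zip", ["file1.py", "file2.py"])
```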