Starting Spark in YARN mode

Time: 2020-03-11 17:28:05

Tags: apache-spark memory yarn

./spark-shell --master yarn

I ran the command on both the master and slave nodes, and they all failed with the same error:

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:94)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:183)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:501)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)

My thoughts: I suspected it might be out of memory, but I don't think that's it (my computer has 8 GB of RAM). I started 5 virtual machines in total: 4 of them were given about 2 GB each, and one was given about 1.6 GB, but that memory should not all be used up. The processes on each VM:

Master001:

1264 NameNode
1537 DFSZKFailoverController
1730 ResourceManager
2189 Jps

Master002:

1139 NameNode
2009 Jps
1211 DFSZKFailoverController

Slave001:

1669 NodeManager
1335 QuorumPeerMain
2648 Jps
1513 JournalNode
1437 DataNode

Slave002:

1139 QuorumPeerMain
2394 Jps
1438 JournalNode
1535 NodeManager
1247 DataNode

Slave003:

1316 JournalNode
1237 DataNode
1465 NodeManager
1663 Jps
1135 QuorumPeerMain
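For what it's worth, exit code 143 means the container received SIGTERM, which on YARN is often the ResourceManager or NodeManager killing a container that exceeds its memory allocation. On VMs this small, one thing I could try (untested assumption; the property values below are illustrative, not taken from my cluster) is capping YARN's allocations in yarn-site.xml and disabling the virtual-memory check, which is known to kill containers on low-memory hosts:

```xml
<!-- yarn-site.xml: illustrative values for NodeManager hosts with ~2 GB RAM -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1536</value> <!-- total memory YARN may hand out on this node -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1536</value> <!-- largest single container allowed -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>  <!-- smallest container increment -->
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- stop YARN from killing containers on virtual-memory usage -->
</property>
```

I could also try launching the shell with smaller requests, e.g. `./spark-shell --master yarn --driver-memory 512m --executor-memory 512m --num-executors 1`, to see whether the container still gets killed.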

This post isn't written very well; I hope it's accurate. Thanks.

0 Answers:

No answers yet