Spark Standalone Cluster: TaskSchedulerImpl: Initial job has not accepted any resources

Date: 2017-02-23 13:06:16

Tags: apache-spark

I have a cluster on Amazon EC2, composed as follows:

- Master: t2.large
- 2x Slaves: t2.micro

The only change I made in spark-env.sh is the web UI port:

export SPARK_MASTER_WEBUI_PORT=8888
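
For context, a fuller spark-env.sh for a standalone cluster on small instances might look like the sketch below. The host name and the worker memory/core limits are assumptions, not values from the question; they are kept small because a t2.micro has only 1 GiB of RAM and a single vCPU:

# spark-env.sh sketch — all values except the web UI port are assumptions
export SPARK_MASTER_HOST=172.31.0.10      # hypothetical private IP of the master
export SPARK_MASTER_WEBUI_PORT=8888       # the only setting actually changed in the question
export SPARK_WORKER_MEMORY=512m           # leave headroom on a 1 GiB t2.micro
export SPARK_WORKER_CORES=1               # t2.micro exposes a single vCPU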

In the slaves file, I listed the IPs of the two slaves.
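
The conf/slaves file simply lists one worker host per line. A sketch with placeholder private IPs, since the real addresses are not shown in the question:

# conf/slaves — one worker hostname or IP per line (placeholders)
172.31.0.11
172.31.0.12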

That is all the configuration I set. Then I started the cluster with ./start-all, and I can see the master on port 8888.
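
For reference, the start script lives under sbin/, and once it has run the master web UI should be reachable on the port configured above, with both workers listed. A sketch assuming the setup described so far:

# run on the master node
./sbin/start-all.sh
# then open http://<master-public-dns>:8888 and check the "Workers" table:
# each worker should show state ALIVE with its cores and memory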

But when I try to run an application, I get the following warnings:

17/02/23 13:57:02 INFO TaskSchedulerImpl: Adding task set 0.0 with 6 tasks
17/02/23 13:57:17 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/02/23 13:57:32 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/02/23 13:57:47 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

When I check the cluster, I can see Spark killing an executor and then creating a new one. I tried using more powerful machines, but it still does not work. What is wrong, and how can I fix it?
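
The submit command is not shown in the question. For context, this warning generally means that no registered worker can satisfy the memory/cores requested for the executors; a hedged spark-submit sketch that keeps the request within what a t2.micro (1 GiB RAM, 1 vCPU) can offer — the master URL, class name and jar path are hypothetical:

./bin/spark-submit \
  --master spark://172.31.0.10:7077 \
  --class com.example.MyApp \
  --executor-memory 512m \
  --total-executor-cores 2 \
  my-app.jar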

0 Answers:

No answers yet.