Spark application stuck in RUNNING state: initial job has not accepted any resources

Asked: 2019-08-23 20:35:30

Tags: apache-spark yarn dl4j

I am working on a distributed deep learning project using Apache Hadoop, Spark, and DL4J.

My main problem is that when I launch my application, it enters the RUNNING state but never gets past 10% progress, and I receive this warning:

2019-08-23 20:55:49,198 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1161
2019-08-23 20:55:49,224 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[5] at saveAsTextFile at BaseTrainingMaster.java:211) (first 15 tasks are for partitions Vector(0, 1))
2019-08-23 20:55:49,226 INFO cluster.YarnClusterScheduler: Adding task set 0.0 with 2 tasks
2019-08-23 20:56:04,286 WARN cluster.YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-08-23 20:56:17,526 WARN cluster.YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-08-23 20:56:23,135 WARN cluster.YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

The last 3 lines keep repeating indefinitely.

I have just 1 master node and 1 slave node, with Hadoop and Spark installed:

  • Master: Intel i5-6500, 8 GB RAM
  • Slave: Intel i3 4400, 4 GB RAM

After checking the HDFS WebUI and log files, I can see that HDFS is running properly. The YARN WebUI and logs also show that YARN is running fine with 1 DataNode.

Here is my code, so you can see where it gets stuck:

VoidConfiguration config = VoidConfiguration.builder()
        .unicastPort(40123)
        .networkMask("192.168.0.0/42")      // note: /42 is not a valid IPv4 prefix (max is /32)
        .controllerAddress("192.168.1.35")
        .build();

log.log(Level.INFO, "==========After voidconf");

// Create the TrainingMaster instance
TrainingMaster trainingMaster = new SharedTrainingMaster.Builder(config, 1)
        .batchSizePerWorker(10)
        .workersPerNode(1)
        .build();

log.log(Level.INFO, "==========after training master");
SparkDl4jMultiLayer sparkNet = new SparkDl4jMultiLayer(sc, conf, trainingMaster);

log.log(Level.INFO, "==========after sparkMultilayer");

// Execute training:
log.log(Level.INFO, "==========Starting training");
for (int i = 0; i < 100; i++) {
    log.log(Level.INFO, "Epoch : " + i); // this is the last line from my code that appears in the log
    sparkNet.fit(rddDataSetClassification); // it gets stuck here
    log.log(Level.INFO, "Epoch : " + i + " / " + i);
}
log.log(Level.INFO, "after training");

// Dataset evaluation
Evaluation eval = sparkNet.evaluate(rddDataSetClassification);
log.log(Level.INFO, eval.stats());

yarn-site.xml

<property>
    <name>yarn.acl.enable</name>
    <value>0</value>
</property>

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.1.35</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>3072</value>
</property>

<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>3072</value>
</property>

<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
</property>

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

spark-defaults.conf:

spark.master        yarn
spark.driver.memory     2500m
spark.yarn.am.memory    2500m
spark.executor.memory   2000m
spark.eventLog.enabled      true
spark.eventLog.dir      hdfs://hadoop-MS-7A75:9000/spark-logs
spark.history.provider            org.apache.spark.deploy.history.FsHistoryProvider
spark.history.fs.logDirectory     hdfs://hadoop-MS-7A75:9000/spark-logs
spark.history.fs.update.interval  10s
spark.history.ui.port             18080
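For reference, Spark on YARN requests containers larger than the configured heap: by default it adds max(384 MB, 10% of the heap) as off-heap overhead (the `spark.yarn.executor.memoryOverhead` default in Spark 2.x), and each resulting container must fit under yarn.scheduler.maximum-allocation-mb (3072 MB here). A minimal sketch of that arithmetic, using the values above; the 384 MB / 10% figures are Spark's documented defaults, not something set in this config:

```java
// Sketch of Spark-on-YARN container sizing with the default overhead
// formula: max(384 MB, 10% of the requested heap).
public class ContainerSizing {
    static int overheadMb(int heapMb) {
        return Math.max(384, heapMb / 10);
    }

    static int containerMb(int heapMb) {
        return heapMb + overheadMb(heapMb);
    }

    public static void main(String[] args) {
        int maxAllocationMb = 3072; // yarn.scheduler.maximum-allocation-mb
        // spark.driver.memory = 2500m, spark.executor.memory = 2000m
        System.out.println("Driver/AM container: " + containerMb(2500) + " MB");
        System.out.println("Executor container:  " + containerMb(2000) + " MB");
        System.out.println("Each fits under the YARN max: "
                + (containerMb(2500) <= maxAllocationMb
                   && containerMb(2000) <= maxAllocationMb));
    }
}
```

Each container individually fits under the per-container maximum, so the requests themselves are schedulable in principle; whether a single node has room for all of them at once is a separate question.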

Suspecting a resource problem, I tried setting properties such as spark.executor.cores and spark.executor.instances to 1. I also tried raising and lowering the memory allocations for both YARN and Spark (I'm not sure exactly how that works).

Logs from spark.deploy.master....out:

2019-08-23 20:18:33,669 INFO master.Master: I have been elected leader! New state: ALIVE
2019-08-23 20:18:40,771 INFO master.Master: Registering worker 192.168.1.37:42869 with 4 cores, 2.8 GB RAM

Logs from spark.deploy.worker....out:

19/08/23 20:18:40 INFO Worker: Connecting to master hadoop-MS-7A75:7077...
19/08/23 20:18:40 INFO TransportClientFactory: Successfully created connection to hadoop-MS-7A75/192.168.1.35:7077 after 115 ms (0 ms spent in bootstraps)
19/08/23 20:18:40 INFO Worker: Successfully registered with master spark://hadoop-MS-7A75:7077

1 Answer:

Answer 0 (score: 0):

The problem was solved by adding another slave. I don't know why or how it works, but once I added another slave, it worked.
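One plausible accounting for why a second slave helped: with a single 3072 MB NodeManager, the AM/driver container (2500 MB heap plus Spark's default 384 MB overhead, in cluster mode) is placed first and leaves too little memory for any executor container, so the job sits with no accepted resources. A second slave contributes a fresh NodeManager that can host the executor. A back-of-the-envelope sketch, assuming the config values from the question (exact numbers on a real cluster also depend on YARN rounding allocations up to yarn.scheduler.minimum-allocation-mb):

```java
// Back-of-the-envelope check of why one 3072 MB NodeManager cannot host
// both the Spark AM/driver container and an executor container.
public class ClusterCapacity {
    static final int NODE_MB = 3072;       // yarn.nodemanager.resource.memory-mb
    static final int AM_MB = 2500 + 384;   // driver heap + default overhead (cluster mode)
    static final int EXEC_MB = 2000 + 384; // executor heap + default overhead

    // Memory left on the single node once the AM container is placed.
    static int leftAfterAm() {
        return NODE_MB - AM_MB;
    }

    // Can an executor container still be scheduled with `freeMb` remaining?
    static boolean executorFits(int freeMb) {
        return freeMb >= EXEC_MB;
    }

    public static void main(String[] args) {
        System.out.println("Free after AM on the only node: " + leftAfterAm() + " MB");
        System.out.println("Executor fits there: " + executorFits(leftAfterAm()));
        // A second slave contributes a fresh 3072 MB NodeManager:
        System.out.println("Executor fits on a second node: " + executorFits(NODE_MB));
    }
}
```

This also suggests an alternative fix without new hardware: shrink spark.driver.memory and spark.executor.memory until both containers fit on one node.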