TensorFlow Cloud ML Object Detection - Error with Distributed Training

Date: 2018-05-24 14:34:38

Tags: python tensorflow google-cloud-platform

I am trying to follow TensorFlow's object detection tutorial for distributed training with my own model, though I am using the exact same code as the repository.

I made a few changes from the tutorial, most notably using runtime version 1.5 instead of 1.2, as noted in the tutorial. When I try to run the job on Google Cloud ML there are no obvious errors (that I can see), but the job exits quickly without doing any training.

Here is the command I use to start the training job:

gcloud ml-engine jobs submit training object_detection_`date +%s` \
    --job-dir=gs://test-bucket/training/ \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
    --module-name object_detection.train \
    --region us-central1 \
    --config ./config.yaml \
    -- \
    --train_dir=gs://test-bucket/data/ \
    --pipeline_config_path=gs://test-bucket/configs/ssd_inception_v2_coco.config

And here is my config.yaml:

trainingInput:
  runtimeVersion: "1.5"
  scaleTier: CUSTOM
  masterType: complex_model_l
  workerCount: 9
  workerType: standard_gpu
  parameterServerCount: 3
  parameterServerType: large_model
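
For reference, each replica in a CUSTOM-tier job like this discovers its role (master, worker, or parameter server) from the TF_CONFIG environment variable that Cloud ML Engine sets. A minimal sketch of reading it, for illustration only (this is not part of my training code):

import json
import os

# TF_CONFIG is set by Cloud ML Engine on every replica; it lists the cluster
# addresses and identifies this replica's own role within the cluster.
tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
cluster = tf_config.get('cluster', {})  # {'master': [...], 'worker': [...], 'ps': [...]}
task = tf_config.get('task', {})        # e.g. {'type': 'worker', 'index': 3}
print('This replica is %s %s' % (task.get('type'), task.get('index')))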

And finally, here is the end of my job's logs:

I  worker-replica-6 Clean up finished.  worker-replica-6
I  worker-replica-7 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.  worker-replica-7
I  worker-replica-7 Module completed; cleaning up.  worker-replica-7
I  worker-replica-7 Clean up finished.  worker-replica-7
I  worker-replica-8 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.  worker-replica-8
I  worker-replica-8 Module completed; cleaning up.  worker-replica-8
I  worker-replica-8 Clean up finished.  worker-replica-8
I  worker-replica-1 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-1
I  worker-replica-1 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.  worker-replica-1
I  worker-replica-1 Module completed; cleaning up.  worker-replica-1
I  worker-replica-1 Clean up finished.  worker-replica-1
I  worker-replica-7 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-7
I  worker-replica-8 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-8
I  worker-replica-6 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-6
I  worker-replica-3 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-3
I  worker-replica-0 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-0
I  worker-replica-2 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-2
I  worker-replica-5 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-5
I  worker-replica-1 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-1
I  worker-replica-7 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-7
I  worker-replica-8 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-8
I  worker-replica-6 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-6
I  worker-replica-3 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-3
I  worker-replica-0 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-0
I  worker-replica-2 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-2
I  worker-replica-5 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-5
I  worker-replica-1 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-1
I  worker-replica-7 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-7
I  worker-replica-8 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-8
I  worker-replica-6 CreateSession still waiting for response from worker: /job:master/replica:0/task:0  worker-replica-6
I  Finished tearing down TensorFlow. 
I  Job failed.

As I mentioned, I can't get anything useful out of the logs. I do get the error Master init: Unavailable: Stream removed, but I'm not sure what to do about it. Any push in the right direction would be appreciated!

1 Answer:

Answer 0 (score: 0):

I reproduced your issue and fixed it by following the steps below:

roysheffi commented on this issue 3 months ago:

  Hi @pkulzc, I think I may have a clue: at line 357, object_detection/trainer.py calls tf.contrib.slim.learning.train(), which uses the deprecated tf.train.Supervisor and should be migrated to tf.train.MonitoredTrainingSession instead, as suggested in the tf.train.Supervisor documentation.

  This has already been requested in tensorflow/tensorflow#15793 and was reported as the solution in the last comments of tensorflow/tensorflow#17852 and yahoo/TensorFlowOnSpark#245.

So, in the end, here is what I did in trainer.py:

  • Put in tf.train.MonitoredTrainingSession( instead of slim.learning.train( (a sketch of this swap follows below)
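
Below is a minimal sketch of what that swap could look like, assuming a typical setup; train_op, master, is_chief, and checkpoint_dir are placeholder names for illustration, not the exact variables used in trainer.py:

import tensorflow as tf

# Before (deprecated path; slim.learning.train drives training through
# tf.train.Supervisor internally):
#   tf.contrib.slim.learning.train(train_op, logdir=checkpoint_dir,
#                                  master=master, is_chief=is_chief)

# After: run the same train_op through a MonitoredTrainingSession, which
# handles session creation, checkpoint saving, and recovery on the chief.
with tf.train.MonitoredTrainingSession(
        master=master,                  # gRPC target for this replica's session
        is_chief=is_chief,              # only the chief writes checkpoints/summaries
        checkpoint_dir=checkpoint_dir) as sess:
    while not sess.should_stop():
        sess.run(train_op)              # one training step per loop iteration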