Distributed TensorFlow: error when assigning a ps and a worker to the same machine

Time: 2017-02-09 14:11:41

Tags: tensorflow distribute

I am running distributed TensorFlow on 2 machines, pc1 and pc2, following the inception distributed training tutorial. I found that if I use one machine as the ps and the other as the worker, it runs fine. The scripts are as follows:

# run worker on pc2
bazel-bin/inception/imagenet_distributed_train \
--batch_size=32 \
--data_dir=$HOME/imagenet-data \
--job_name='worker' \
--task_id=0 \
--ps_hosts='pc1:2222' \
--worker_hosts='pc2:2222'

# run ps on pc1
bazel-bin/inception/imagenet_distributed_train \
--job_name='ps' \
--task_id=0 \
--ps_hosts='pc1:2222' \
--worker_hosts='pc2:2222'
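
As far as I understand, these flags are simply turned into a cluster spec and an in-process server inside the script. Here is a minimal sketch of what I believe the working configuration corresponds to, using the plain tf.train API (the names are my own, not taken from the inception code):

import tensorflow as tf

# Every machine builds the same cluster definition: ps on pc1, one worker on pc2.
cluster = tf.train.ClusterSpec({
    "ps": ["pc1:2222"],
    "worker": ["pc2:2222"],
})

# On pc1 the ps just serves variables and blocks:
#   tf.train.Server(cluster, job_name="ps", task_index=0).join()
# On pc2 (worker task 0) the training session then connects to server.target:
server = tf.train.Server(cluster, job_name="worker", task_index=0)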

However, if I run one worker on each machine, i.e. 2 workers in total, and run the ps on one of these two machines, the program crashes. The scripts are as follows:

# run worker_1 on pc1
bazel-bin/inception/imagenet_distributed_train \
--batch_size=32 \
--data_dir=$HOME/imagenet-data \
--job_name='worker' \
--task_id=0 \
--ps_hosts='pc1:3333' \
--worker_hosts='pc1:2222,pc2:2222'

# run worker_2 on pc2
bazel-bin/inception/imagenet_distributed_train \
--batch_size=32 \
--data_dir=$HOME/imagenet-data \
--job_name='worker' \
--task_id=1 \
--ps_hosts='pc1:3333' \
--worker_hosts='pc1:2222,pc2:2222'

# run ps on pc1 on port: 3333
CUDA_VISIBLE_DEVICES='' bazel-bin/inception/imagenet_distributed_train \
--job_name='ps' \
--task_id=0 \
--ps_hosts='pc1:3333' \
--worker_hosts='pc2:2222'
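
For clarity, the layout I am trying to describe with these flags is: pc1 hosts both the ps (port 3333) and worker task 0 (port 2222), while pc2 hosts worker task 1. My understanding is that each worker then pins its variables to the ps job through a device setter, roughly like this (a sketch only, not the actual inception code):

import tensorflow as tf

# Intended cluster: pc1 appears in both jobs, on different ports.
cluster = tf.train.ClusterSpec({
    "ps": ["pc1:3333"],
    "worker": ["pc1:2222", "pc2:2222"],
})

# On worker task 0 (pc1); worker task 1 on pc2 would use "/job:worker/task:1".
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0",
        cluster=cluster)):
    # Variables created here are placed on /job:ps; ops stay on the worker.
    global_step = tf.Variable(0, trainable=False, name="global_step")
    # ... model, loss and optimizer would be built here ...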

The error that occurs on worker_1 is as follows:

Traceback (most recent call last):
  File "/home/AIJ/tf_models/models/inception/bazel-bin/inception/imagenet_distributed_train.runfiles/inception/inception/imagenet_distributed_train.py", line 65, in <module>
    tf.app.run()
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/home/AIJ/tf_models/models/inception/bazel-bin/inception/imagenet_distributed_train.runfiles/inception/inception/imagenet_distributed_train.py", line 61, in main
    inception_distributed_train.train(server.target, dataset, cluster_spec)
  File "/home/AIJ/tf_models/models/inception/bazel-bin/inception/imagenet_distributed_train.runfiles/inception/inception/inception_distributed_train.py", line 260, in train
    sess = sv.prepare_or_wait_for_session(target, config=sess_config)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/supervisor.py", line 719, in prepare_or_wait_for_session
    init_feed_dict=self._init_feed_dict, init_fn=self._init_fn)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/session_manager.py", line 256, in prepare_session
    config=config)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/session_manager.py", line 188, in _restore_checkpoint
    saver.restore(sess, ckpt.model_checkpoint_path)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/saver.py", line 1439, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key total_loss/avg not found in checkpoint
     [[Node: save/RestoreV2_1653 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:worker/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_1653/tensor_names, save/RestoreV2_1653/shape_and_slices)]]
     [[Node: _recv_save/Const_0_S1 = _Recv[client_terminated=false, recv_device="/job:ps/replica:0/task:0/cpu:0", send_device="/job:worker/replica:0/task:0/cpu:0", send_device_incarnation=39394642820946720, tensor_name="edge_5065__recv_save/Const_0", tensor_type=DT_STRING, _device="/job:ps/replica:0/task:0/cpu:0"]()]]

Caused by op u'save/RestoreV2_1653', defined at:
  File "/home/AIJ/tf_models/models/inception/bazel-bin/inception/imagenet_distributed_train.runfiles/inception/inception/imagenet_distributed_train.py", line 65, in <module>
    tf.app.run()
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/home/AIJ/tf_models/models/inception/bazel-bin/inception/imagenet_distributed_train.runfiles/inception/inception/imagenet_distributed_train.py", line 61, in main
    inception_distributed_train.train(server.target, dataset, cluster_spec)
  File "/home/AIJ/tf_models/models/inception/bazel-bin/inception/imagenet_distributed_train.runfiles/inception/inception/inception_distributed_train.py", line 233, in train
    saver = tf.train.Saver()
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/saver.py", line 1051, in __init__
    self.build()
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/saver.py", line 1081, in build
    restore_sequentially=self._restore_sequentially)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/saver.py", line 675, in build
    restore_sequentially, reshape)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/saver.py", line 402, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/training/saver.py", line 242, in restore_op
    [spec.tensor.dtype])[0])
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/ops/gen_io_ops.py", line 668, in restore_v2
    dtypes=dtypes, name=name)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/framework/ops.py", line 2395, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/AIJ/tensorflow/_python_build/tensorflow/python/framework/ops.py", line 1264, in __init__
    self._traceback = _extract_stack()

NotFoundError (see above for traceback): Key total_loss/avg not found in checkpoint
     [[Node: save/RestoreV2_1653 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:worker/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_1653/tensor_names, save/RestoreV2_1653/shape_and_slices)]]
     [[Node: _recv_save/Const_0_S1 = _Recv[client_terminated=false, recv_device="/job:ps/replica:0/task:0/cpu:0", send_device="/job:worker/replica:0/task:0/cpu:0", send_device_incarnation=39394642820946720, tensor_name="edge_5065__recv_save/Const_0", tensor_type=DT_STRING, _device="/job:ps/replica:0/task:0/cpu:0"]()]]
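
For context on where this fails: as far as I can tell, worker task 0 acts as the chief, and the restore above is attempted inside prepare_or_wait_for_session via a Supervisor, roughly along these lines (a simplified sketch with my own names and paths, not the exact inception code):

import tensorflow as tf

task_id = 0                              # worker_1 above is task 0, i.e. the chief
train_dir = "/tmp/imagenet_train"        # hypothetical checkpoint directory
server_target = "grpc://pc1:2222"        # in the real script this is server.target

global_step = tf.Variable(0, trainable=False, name="global_step")
saver = tf.train.Saver()
sv = tf.train.Supervisor(is_chief=(task_id == 0),
                         logdir=train_dir,   # the chief looks for checkpoints here
                         saver=saver,
                         global_step=global_step)
# The chief restores the latest checkpoint found in logdir (or runs the init op
# if there is none); non-chief workers wait for the chief to finish.
sess = sv.prepare_or_wait_for_session(server_target)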

I also tried 1 worker and 1 ps on a single machine, using different ports, but the error persists.

So my question is: does this mean that the ps and the workers must be assigned to different machines, even if they are assigned to different ports? The reason for using one machine as both a worker and the ps is to maximize the utilization of compute resources. Can anyone tell me how to use one machine as both a worker and the ps?

0 Answers:

No answers yet.