I have the Python code test.py shown below, which uses "Between-graph Replication" for Distributed TensorFlow:
import argparse
import logging

import tensorflow as tf

log = logging.getLogger(__name__)

# Job Names
PARAMETER_SERVER = "ps"
WORKER_SERVER = "worker"

# Cluster Details
CLUSTER_SPEC = {
    PARAMETER_SERVER: ["localhost:2222"],
    WORKER_SERVER: ["localhost:1111", "localhost:1112"]}


def parse_command_arguments():
    """ Set up and parse the command line arguments passed for experiment. """
    parser = argparse.ArgumentParser(
        description="Parameters and Arguments for the Test.")
    parser.add_argument(
        "--job_name",
        type=str,
        default="",
        help="One of 'ps', 'worker'"
    )
    # Flags for defining the tf.train.Server
    parser.add_argument(
        "--task_index",
        type=int,
        default=0,
        help="Index of task within the job"
    )

    return parser.parse_args()


def start_server(job_name, task_index):
    """ Create a server based on a cluster spec. """
    cluster = tf.train.ClusterSpec(CLUSTER_SPEC)
    server = tf.train.Server(
        cluster, job_name=job_name, task_index=task_index)

    return server, cluster


def model():
    """ Build up a simple estimator model. """
    # Build a linear model and predict values
    W = tf.Variable([.3], tf.float32)
    b = tf.Variable([-.3], tf.float32)
    x = tf.placeholder(tf.float32)
    linear_model = W * x + b
    y = tf.placeholder(tf.float32)
    global_step = tf.get_variable('global_step', [],
                                  initializer=tf.constant_initializer(0),
                                  trainable=False)

    # Loss sub-graph
    loss = tf.reduce_sum(tf.square(linear_model - y))

    # optimizer
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss, global_step=global_step)

    init_op = tf.global_variables_initializer()
    log.info("Variables initialized ...")

    return W, b, loss, x, y, train, global_step, init_op


if __name__ == "__main__":
    # Initializing logging with level "INFO".
    logging.basicConfig(level=logging.INFO)

    # Parse arguments from command line.
    arguments = parse_command_arguments()
    job_name = arguments.job_name
    task_index = arguments.task_index

    # Start a server.
    server, cluster = start_server(job_name, task_index)

    if job_name == "ps":
        server.join()
    else:
        with tf.device(tf.train.replica_device_setter(
                worker_device="/job:worker/task:%d" % task_index,
                cluster=cluster)):
            W, b, loss, x, y, train, global_step, init_op = model()

        with tf.train.MonitoredTrainingSession(
                master=server.target,
                is_chief=(arguments.task_index == 0 and (
                    arguments.job_name == 'worker'))) as sess:
            step = 0
            # training data
            x_train = [1, 2, 3, 4]
            y_train = [0, -1, -2, -3]
            while not sess.should_stop() and step < 1000:
                _, step = sess.run(
                    [train, global_step], {x: x_train, y: y_train})

            # evaluate training accuracy
            curr_W, curr_b, curr_loss = sess.run(
                [W, b, loss], {x: x_train, y: y_train})
            print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))
I ran the code as 3 separate processes on a single machine (a CPU-only Mac Pro), in the following order:
$ python test.py --task_index 0 --job_name ps
$ python test.py --task_index 0 --job_name worker
$ python test.py --task_index 1 --job_name worker
I found that "Worker 2" ran into a problem:
$ python test.py --task_index 1 --job_name worker
I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:197] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222}
I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:197] Initialize GrpcChannelCache for job worker -> {0 -> localhost:1111, 1 -> localhost:1112}
I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:211] Started server with target: grpc://localhost:1112
INFO:__main__:Variables initialized ...
I tensorflow/core/distributed_runtime/master_session.cc:993] Start master session 9912c75f2921fe13 with config:
INFO:tensorflow:Waiting for model to be ready. Ready_for_local_init_op: None, ready: Variables not initialized: Variable, Variable_1, global_step
INFO:tensorflow:Waiting for model to be ready. Ready_for_local_init_op: None, ready: Variables not initialized: Variable, Variable_1, global_step
"Worker 2" just froze there. The log shows that "Worker 2"'s TensorFlow variables were never successfully initialized, so I am wondering whether there is a bug in how MonitoredTrainingSession coordinates variable initialization across TensorFlow sessions (or elsewhere), or whether I am missing something in my code.
NOTE: The code was run with TensorFlow 0.12.
Answer (score: 7):
I think this is "expected behavior" of the tf.train.MonitoredTrainingSession coordination protocol. In a recent answer, I explained how this protocol is geared towards long-running training jobs, so a non-chief worker sleeps for 30 seconds between checks of whether the variables have been initialized.
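For intuition, here is a rough sketch of what that readiness check amounts to. This is an illustration only, not the actual library code: tf.report_uninitialized_variables() is the real API, but the helper function and its constants are made up for this example.

import time

import tensorflow as tf


def wait_until_ready(sess, recovery_wait_secs=30):
    """Illustrative helper: block until the chief has initialized all variables."""
    # Names of variables that the chief has not yet initialized.
    uninitialized = tf.report_uninitialized_variables()
    while True:
        not_ready = sess.run(uninitialized)
        if not_ready.size == 0:
            return  # everything is initialized; training can start
        print("Waiting for model to be ready. Variables not initialized: %s"
              % not_ready)
        time.sleep(recovery_wait_secs)  # non-chief workers wait 30s by default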
There is a race condition between Worker 1 running the initialization op and Worker 2 checking the variables, and if Worker 2 "wins" the race, it will observe that some variables are still uninitialized and will go to sleep for 30 seconds before checking again.
However, the total amount of computation in this program is so small that Worker 1 finishes its work and terminates within those 30 seconds. When Worker 2 then checks whether the variables are initialized, it creates a new tf.Session that tries to connect to the other tasks, but Worker 1 is no longer running, so you will see a log message like this (repeated every 10 seconds or so):
I tensorflow/core/distributed_runtime/master.cc:193] CreateSession still waiting for response from worker: /job:worker/replica:0/task:0
This will not be a problem when the training job runs for much longer than 30 seconds.
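As a quick stopgap for this toy example, you could also simply keep the chief worker's process alive after training finishes, so that the slower worker can still connect and initialize. A minimal sketch, to be appended after the MonitoredTrainingSession block in test.py:

import time

# Sketch only: place this at the end of the worker branch in test.py.
# The chief (worker task 0) keeps its in-process gRPC server reachable so the
# non-chief worker can create its session, initialize, and finish training.
if arguments.job_name == 'worker' and arguments.task_index == 0:
    time.sleep(120)  # or server.join() to block until the process is killed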
A cleaner workaround is to remove the interdependence between the workers by setting "device filters". Since the individual workers do not communicate with each other in a typical between-graph configuration, you can use tf.ConfigProto to tell TensorFlow to ignore the absence of the other worker at session-creation time:
# Each worker only needs to contact the PS task(s) and the local worker task.
config = tf.ConfigProto(device_filters=[
    '/job:ps', '/job:worker/task:%d' % arguments.task_index])

with tf.train.MonitoredTrainingSession(
        master=server.target,
        config=config,
        is_chief=(arguments.task_index == 0 and (
            arguments.job_name == 'worker'))) as sess:
    # ...