In my problem I need to run gradient descent on one example from the data at each training step. It is a known problem that session.run() has overhead, so training the model this way takes too long. To avoid that overhead, I tried to use while_loop and train the model on all of the data with a single run() call. But it doesn't work: train_op is never even executed. Here is a simple example of what I am doing:
data = [k*1. for k in range(10)]
tf.reset_default_graph()
i = tf.Variable(0, name='loop_i')
q_x = tf.FIFOQueue(100000, tf.float32)
q_y = tf.FIFOQueue(100000, tf.float32)
x = q_x.dequeue()
y = q_y.dequeue()
w = tf.Variable(0.)
b = tf.Variable(0.)
loss = (tf.add(tf.mul(x, w), b) - y)**2
gs = tf.Variable(0)
train_op = tf.train.GradientDescentOptimizer(0.05).minimize(loss, global_step=gs)
s = tf.Session()
s.run(tf.initialize_all_variables())
def cond(i):
    return i < 10

def body(i):
    return tf.tuple([tf.add(i, 1)], control_inputs=[train_op])
loop = tf.while_loop(cond, body, [i])
for _ in range(1):
    s.run(q_x.enqueue_many((data, )))
    s.run(q_y.enqueue_many((data, )))
    s.run(loop)
s.close()
What am I doing wrong? Or is there another solution to this problem, given that the overhead is too expensive?
Thanks!
Answer 0 (score: 19)
The reason the model does not appear to train is that the input reading, the gradient computation, and the minimize() call are all defined outside (and therefore, in dataflow terms, before) the body of tf.while_loop(). This means that all of those parts of the model run only once, before the loop executes, and the loop itself has no effect. A minimal illustration of this behavior follows.
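To make that concrete, here is a small sketch (assuming TensorFlow 1.x graph-mode APIs) showing that an op built outside the loop body is not re-applied on each iteration, even when it is wired in as a control dependency:

import tensorflow as tf

v = tf.Variable(0)
assign_op = tf.assign_add(v, 1)  # Built outside the loop body.

def body(i):
    # The external op lives outside the loop's control-flow frame, so this
    # control dependency does not cause it to run on every iteration.
    with tf.control_dependencies([assign_op]):
        return i + 1

loop = tf.while_loop(lambda i: i < 10, body, [tf.constant(0)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(loop)
    print(sess.run(v))  # Falls far short of 10: assign_op is not applied per iteration.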
A slight refactoring, which moves the dequeue() operations, the gradient computation, and the minimize() call inside the loop body, fixes the problem and allows the program to train:
optimizer = tf.train.GradientDescentOptimizer(0.05)

def cond(i):
    return i < 10

def body(i):
    # Dequeue a new example each iteration.
    x = q_x.dequeue()
    y = q_y.dequeue()

    # Compute the loss and gradient update based on the current example.
    loss = (tf.add(tf.mul(x, w), b) - y)**2
    train_op = optimizer.minimize(loss, global_step=gs)

    # Ensure that the update is applied before continuing.
    return tf.tuple([tf.add(i, 1)], control_inputs=[train_op])

loop = tf.while_loop(cond, body, [i])
UPDATE: Here is a complete program that executes the while loop, based on the code in your question:
import tensorflow as tf
# Define a single queue with two components to store the input data.
q_data = tf.FIFOQueue(100000, [tf.float32, tf.float32])
# We will use these placeholders to enqueue input data.
placeholder_x = tf.placeholder(tf.float32, shape=[None])
placeholder_y = tf.placeholder(tf.float32, shape=[None])
enqueue_data_op = q_data.enqueue_many([placeholder_x, placeholder_y])
gs = tf.Variable(0)
w = tf.Variable(0.)
b = tf.Variable(0.)
optimizer = tf.train.GradientDescentOptimizer(0.05)
# Construct the while loop.
def cond(i):
    return i < 10

def body(i):
    # Dequeue a single new example each iteration.
    x, y = q_data.dequeue()
    # Compute the loss and gradient update based on the current example.
    loss = (tf.add(tf.multiply(x, w), b) - y) ** 2
    train_op = optimizer.minimize(loss, global_step=gs)
    # Ensure that the update is applied before continuing.
    with tf.control_dependencies([train_op]):
        return i + 1
loop = tf.while_loop(cond, body, [tf.constant(0)])
data = [k * 1. for k in range(10)]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for _ in range(1):
        # NOTE: Constructing the enqueue op ahead of time avoids adding
        # (potentially many) copies of `data` to the graph.
        sess.run(enqueue_data_op,
                 feed_dict={placeholder_x: data, placeholder_y: data})

    print(sess.run([gs, w, b]))  # Prints before-loop values.
    sess.run(loop)
    print(sess.run([gs, w, b]))  # Prints after-loop values.
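If you want to sanity-check that the loop actually updated the parameters, a small sketch like the following (assuming numpy is available) could be placed inside the with block right after sess.run(loop):

import numpy as np

# Evaluate the learned parameters and measure how well they fit y = x.
w_val, b_val = sess.run([w, b])
preds = np.array(data) * w_val + b_val
print(np.mean((preds - np.array(data)) ** 2))  # Mean squared error after training.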