In my code I am trying to practice with the tf.train.batch function. At the sess.run([optimizer]) line it never returns anything; it just freezes. Can you find my mistake?
import tensorflow as tf

tensors = tf.convert_to_tensor(x_train, dtype=tf.float32)
tensors = tf.reshape(tensors, shape=x_train.shape)
batch = tf.train.batch([tensors], batch_size=BATCH_SIZE, enqueue_many=True)
# Weights and biases to hidden layer
Wh = tf.Variable(tf.random_normal([COLUMN-2, UNITS_OF_HIDDEN_LAYER], mean=0.0, stddev=0.05))
bh = tf.Variable(tf.zeros([UNITS_OF_HIDDEN_LAYER]))
h = tf.nn.tanh(tf.matmul(batch, Wh) + bh)
# Weights and biases to output layer
Wo = tf.transpose(Wh) # tied weights
bo = tf.Variable(tf.zeros([COLUMN-2]))
y = tf.nn.tanh(tf.matmul(h, Wo) + bo)
# Objective functions
mean_sqr = tf.reduce_mean(tf.pow(batch - y, 2))
optimizer = tf.train.AdamOptimizer(LEARNING_RATE).minimize(mean_sqr)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for j in range(TRAINING_EPOCHS):
    sess.run([optimizer])
    print("optimizer: ")
Answer 0 (score: 0)
tf.train.batch is a queue, so you need to use tf.train.start_queue_runners to launch the queue inside the session; until those threads start, nothing fills the queue and sess.run blocks forever. You can read about this in TensorFlow's Threading and Queues guide. Make the following changes:

with tf.Session() as sess:
    sess.run(init)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    try:
        # Training loop
        for j in range(TRAINING_EPOCHS):
            if coord.should_stop():
                break
            sess.run([optimizer])
            print("optimizer: ")
    except Exception as e:
        # When done, ask the threads to stop.
        coord.request_stop(e)
    finally:
        coord.request_stop()
        # Wait for threads to finish.
        coord.join(threads)
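For completeness, here is a minimal end-to-end sketch of the corrected program under TensorFlow 1.x. The hyperparameter values and the random x_train are stand-ins chosen for illustration, since the original post does not define them:

import numpy as np
import tensorflow as tf

# Stand-in hyperparameters (assumed; the original post does not define them).
BATCH_SIZE = 32
COLUMN = 12                      # input width is COLUMN - 2 = 10
UNITS_OF_HIDDEN_LAYER = 5
LEARNING_RATE = 0.01
TRAINING_EPOCHS = 100

# Dummy data standing in for the real x_train.
x_train = np.random.rand(1000, COLUMN - 2).astype(np.float32)

tensors = tf.convert_to_tensor(x_train, dtype=tf.float32)
# enqueue_many=True treats each row of `tensors` as one example.
batch = tf.train.batch([tensors], batch_size=BATCH_SIZE, enqueue_many=True)

# Tied-weight autoencoder: the decoder reuses the encoder weights transposed.
Wh = tf.Variable(tf.random_normal([COLUMN - 2, UNITS_OF_HIDDEN_LAYER], mean=0.0, stddev=0.05))
bh = tf.Variable(tf.zeros([UNITS_OF_HIDDEN_LAYER]))
h = tf.nn.tanh(tf.matmul(batch, Wh) + bh)

Wo = tf.transpose(Wh)  # tied weights
bo = tf.Variable(tf.zeros([COLUMN - 2]))
y = tf.nn.tanh(tf.matmul(h, Wo) + bo)

mean_sqr = tf.reduce_mean(tf.pow(batch - y, 2))
optimizer = tf.train.AdamOptimizer(LEARNING_RATE).minimize(mean_sqr)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    # Without these two lines, nothing feeds the queue and sess.run hangs.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        for j in range(TRAINING_EPOCHS):
            if coord.should_stop():
                break
            _, loss = sess.run([optimizer, mean_sqr])
            print("step %d, loss %.6f" % (j, loss))
    except Exception as e:
        coord.request_stop(e)
    finally:
        coord.request_stop()
        coord.join(threads)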
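Aside: TensorFlow 1.4+ also ships the tf.data API, which replaces queue-based input pipelines, so no Coordinator or queue runners are needed at all. A minimal sketch, assuming x_train is a float32 NumPy array and BATCH_SIZE is defined as above:

dataset = tf.data.Dataset.from_tensor_slices(x_train)
dataset = dataset.shuffle(buffer_size=1000).batch(BATCH_SIZE).repeat()
batch = dataset.make_one_shot_iterator().get_next()  # feed this into the model in place of tf.train.batch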