TensorFlow LSTM iterator for the test queue increments during training

Time: 2019-03-06 10:48:53

Tags: python tensorflow queue lstm recurrent-neural-network

I have an LSTM in TensorFlow that uses queues to distinguish between training data and test data.

The structure is as follows:

# Queue for training data
iter_train = tf.data.Dataset.range(epochNum_train).repeat().make_one_shot_iterator().get_next()

input_train_queue = input_train[:, iter_train * num_steps : (iter_train + 1) * num_steps, :]
input_train_queue.set_shape([batch_size, num_steps, input_size])

output_train_queue = output_train[:, iter_train * num_steps: (iter_train + 1) * num_steps, :]
output_train_queue.set_shape([batch_size, num_steps, input_size])

# Queue for test data
iter_test = tf.data.Dataset.range(epochNum_test).repeat().make_one_shot_iterator().get_next()

input_test_queue = input_test[:, iter_test * num_steps : (iter_test + 1) * num_steps, :]
input_test_queue.set_shape([batch_size, num_steps, input_size])

output_test_queue = output_test[:, iter_test * num_steps: (iter_test + 1) * num_steps, :]
output_test_queue.set_shape([batch_size, num_steps, input_size])

# tf.cond for the selection of data
rnn_outputs, _ = tf.nn.dynamic_rnn(cell, tf.cond(useTestData, lambda: input_test_queue, lambda: input_train_queue),
                                      dtype=tf.float32, initial_state=init_state)
error = tf.reduce_mean(tf.squared_difference(rnn_outputs, tf.cond(useTestData, lambda: output_test_queue, lambda: output_train_queue)))
train_fn = tf.train.AdamOptimizer(learning_rate=0.01).minimize(error)

My problem is that iter_test also increments when the training data is fed to the LSTM:

t1 = sess.run(iter_test) # t1 has the value 0 
sess.run(train_fn, {useTestData: False})
t2 = sess.run(iter_test) # t2 has the value 2 
t3 = sess.run(iter_test) # t3 has the value 3 

Why does iter_test increment during training? And is there a way to fix this so that iter_test does not change while training?
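(For context: in TF1 graph mode, `tf.cond` only defers ops that are *created inside* `true_fn`/`false_fn`. Here the `get_next()` and slicing ops are built before the `cond`, so the lambdas merely return pre-built tensors, and both iterators are dependencies that run on every `sess.run`. The following is a plain-Python analogy of that behavior — `Counter` is a made-up stand-in for a one-shot iterator, not a TensorFlow API:)

```python
# Hypothetical analogy (no TensorFlow): a counter that advances every time
# it is read, mimicking the side effect of an iterator's get_next() op.
class Counter:
    def __init__(self):
        self.n = -1

    def get_next(self):
        self.n += 1
        return self.n

train_it, test_it = Counter(), Counter()

def step(use_test):
    # Both reads happen *before* the conditional, just as both get_next()
    # ops are built outside the tf.cond lambdas in the graph above, so
    # selecting one branch still advances both counters.
    train_val = train_it.get_next()
    test_val = test_it.get_next()
    return test_val if use_test else train_val

step(use_test=False)
# test_it advanced even though only the training branch was selected.
assert test_it.n == 0
```

Moving the work into the branch functions (so the ops are created inside `true_fn`/`false_fn`) is the usual way to keep the unselected side from running.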

0 Answers:

There are no answers.