How can I use TensorFlow's tf.train.string_input_producer to produce multiple epochs of data?

Asked: 2017-06-14 15:43:56

Tags: python tensorflow neural-network

When I wanted to load 2 epochs of data with tf.train.string_input_producer, I used

filename_queue = tf.train.string_input_producer(filenames=['data.csv'], num_epochs=2, shuffle=True)

col1_batch, col2_batch, col3_batch = tf.train.shuffle_batch([col1, col2, col3], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue, allow_smaller_final_batch=True)

But then I found that this op did not produce what I wanted.

It just produces each sample in data.csv twice, and the order in which samples are produced is unclear. For example, given 3 rows of data in data.csv:
[[1]
[2]
[3]]

it produces something like this (each sample appears exactly twice, but in arbitrary order):

[1]
[1]
[3]
[2]
[2]
[3]

But what I want is this (the epochs are kept separate, and the data is shuffled within each epoch):

(epoch 1:)
[1]
[2]
[3]
(epoch 2:)
[1]
[3]
[2]

Also, how can I tell when one epoch has finished? Is there some flag variable? Thanks!

My code is here:

import tensorflow as tf

def read_my_file_format(filename_queue):
    reader = tf.TextLineReader()
    key, value = reader.read(filename_queue)
    record_defaults = [['1'], ['1'], ['1']]  
    col1, col2, col3 = tf.decode_csv(value, record_defaults=record_defaults, field_delim='-')
    # col1 = list(map(int, col1.split(',')))
    # col2 = list(map(int, col2.split(',')))
    return col1, col2, col3

def input_pipeline(filenames, batch_size, num_epochs=1):
  filename_queue = tf.train.string_input_producer(
    filenames, num_epochs=num_epochs, shuffle=True)
  col1,col2,col3 = read_my_file_format(filename_queue)

  min_after_dequeue = 10
  capacity = min_after_dequeue + 3 * batch_size
  col1_batch, col2_batch, col3_batch = tf.train.shuffle_batch(
    [col1, col2, col3], batch_size=batch_size, capacity=capacity,
    min_after_dequeue=min_after_dequeue, allow_smaller_final_batch=True)
  return col1_batch, col2_batch, col3_batch

filenames=['1.txt']
batch_size = 3
num_epochs = 1
a1,a2,a3=input_pipeline(filenames, batch_size, num_epochs)

with tf.Session() as sess:
  sess.run(tf.local_variables_initializer())
  # start populating filename queue
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(coord=coord)
  try:
    while not coord.should_stop():
      a, b, c = sess.run([a1, a2, a3])
      print(a, b, c)
  except tf.errors.OutOfRangeError:
    print('Done training, epoch reached')
  finally:
    coord.request_stop()

  coord.join(threads) 

My data looks like:

1,2-3,4-A
7,8-9,10-B
12,13-14,15-C
17,18-19,20-D
22,23-24,25-E
27,28-29,30-F
32,33-34,35-G
37,38-39,40-H

2 Answers:

Answer 0 (score: 10):

As Nicolas observes, the tf.train.string_input_producer() API gives you no way to detect when the end of an epoch is reached; instead, it concatenates all epochs together into one long batch. For this reason, we recently added (in TensorFlow 1.2) the tf.contrib.data API, which makes it possible to express more sophisticated input pipelines, including your use case.

The following code snippet shows how you would write your program using tf.contrib.data:

import tensorflow as tf

def input_pipeline(filenames, batch_size):
    # Define a `tf.contrib.data.Dataset` for iterating over one epoch of the data.
    dataset = (tf.contrib.data.TextLineDataset(filenames)
               .map(lambda line: tf.decode_csv(
                    line, record_defaults=[['1'], ['1'], ['1']], field_delim='-'))
               .shuffle(buffer_size=10)  # Equivalent to min_after_dequeue=10.
               .batch(batch_size))

    # Return an *initializable* iterator over the dataset, which will allow us to
    # re-initialize it at the beginning of each epoch.
    return dataset.make_initializable_iterator() 

filenames=['1.txt']
batch_size = 3
num_epochs = 10
iterator = input_pipeline(filenames, batch_size)

# `a1`, `a2`, and `a3` represent the next element to be retrieved from the iterator.    
a1, a2, a3 = iterator.get_next()

with tf.Session() as sess:
    for _ in range(num_epochs):
        # Resets the iterator at the beginning of an epoch.
        sess.run(iterator.initializer)

        try:
            while True:
                a, b, c = sess.run([a1, a2, a3])
                print(a, b, c)
        except tf.errors.OutOfRangeError:
            # This will be raised when you reach the end of an epoch (i.e. the
            # iterator has no more elements).
            pass                 

        # Perform any end-of-epoch computation here.
        print('Done training, epoch reached')
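
As a side note, from TensorFlow 1.4 onward the same API is available under the tf.data namespace (the tf.contrib.data names were later deprecated). Assuming TF 1.4+, the pipeline above would read as follows; only the namespace changes:

import tensorflow as tf

def input_pipeline(filenames, batch_size):
    # Same pipeline as above, written against the graduated `tf.data` namespace.
    dataset = (tf.data.TextLineDataset(filenames)
               .map(lambda line: tf.decode_csv(
                    line, record_defaults=[['1'], ['1'], ['1']], field_delim='-'))
               .shuffle(buffer_size=10)  # Reshuffled each time the iterator is initialized.
               .batch(batch_size))
    return dataset.make_initializable_iterator()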

Answer 1 (score: 2):

You may want to have a look at this answer to a related question.

The short story is:

  • if num_epochs > 1, all the data is enqueued at the same time, independently of the epoch,

  • so you have no way of monitoring which epoch is being dequeued.

What you can do is the first suggestion from the quoted answer: use num_epochs == 1, and reinitialize the local queue variables (but obviously not the model variables) at each run.

init_queue = tf.variables_initializer(tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope='input_producer'))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())
    for e in range(num_epochs):
        sess.run(init_queue)  # reinitialize the local variables in the input_producer scope
        # start populating the filename queue
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        try:
            while not coord.should_stop():
                a, b, c = sess.run([a1, a2, a3])
                print(a, b, c)
        except tf.errors.OutOfRangeError:
            print('Done training, epoch reached')
        finally:
            coord.request_stop()

        coord.join(threads)
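
On the "flag variable" part of the question: there is no built-in flag, and the tf.errors.OutOfRangeError above is the usual end-of-epoch signal. If you also want a rough progress counter, tf.TextLineReader exposes num_records_produced(), an op returning how many records the reader has emitted so far. A minimal sketch (not from either answer; note that downstream prefetching queues mean this count runs ahead of the batches your training step has actually consumed):

import tensorflow as tf

# Minimal sketch: read '1.txt' line by line and watch the reader's counter.
filename_queue = tf.train.string_input_producer(['1.txt'], num_epochs=1, shuffle=True)
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)
records_produced = reader.num_records_produced()

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    try:
        while not coord.should_stop():
            line = sess.run(value)
            # The reader's counter; with prefetching queues downstream this
            # would run ahead of what the training loop has consumed.
            print(sess.run(records_produced), line)
    except tf.errors.OutOfRangeError:
        print('End of epoch')
    finally:
        coord.request_stop()
    coord.join(threads)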