tf.TFRecordReader only returns 1 epoch

Time: 2017-02-26 12:29:49

Tags: python tensorflow

I am trying to evaluate a model as fast as possible. I read my examples from a single TFRecords file, and since it seemed very slow I searched for an explanation and found Yaroslav Bulatov's example code (https://github.com/yaroslavvb/stuff/blob/master/ericyue-slowreader/benchmark.py).

I replaced the tf.train.shuffle_batch call with tf.train.batch, because I only need to read one epoch and I don't mind whether the samples are shuffled. With enqueue_many=False the results are correct, but when I try enqueue_many=True with two enqueued items, I get the same sample duplicated.

The key code is here:

    # A single reader is shared by every read() call; the serialized examples
    # are collected into a list and enqueued together (enqueue_many=True).
    reader = tf.TFRecordReader()
    queue_batch = []
    for i in range(enqueue_many_size):
        _, serialized_example = reader.read(filename_queue)
        queue_batch.append(serialized_example)
    batch_serialized_example = tf.train.batch(
        [queue_batch],
        batch_size=batch_size,
        num_threads=thread_number,
        capacity=capacity,
        enqueue_many=True)
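
For comparison, a minimal variant of the same snippet that I have not benchmarked (a sketch only): it creates one TFRecordReader per read(), so each read op depends on a different reader handle; the surrounding variables are the same as above.

    # Sketch of an alternative (untested here): one TFRecordReader per read(),
    # so the two read ops are not issued against the same reader handle.
    queue_batch = []
    for i in range(enqueue_many_size):
        reader = tf.TFRecordReader()
        _, serialized_example = reader.read(filename_queue)
        queue_batch.append(serialized_example)
    batch_serialized_example = tf.train.batch(
        [queue_batch],
        batch_size=batch_size,
        num_threads=thread_number,
        capacity=capacity,
        enqueue_many=True)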

The full proof of concept is here:

    import glob
    import time
    import numpy as np
    import os
    import tensorflow as tf

    epoch_number = 1
    thread_number = 1
    batch_size = 4
    capacity = thread_number * batch_size + 10
    enqueue_many = True
    enqueue_many_size = 2

    # Just in case you want to generate my set of samples
    def generateNumbersTFRecords(directory, num_elements):
        record_filename = os.path.join(directory, 'vectors.tfrecords')
        writer = tf.python_io.TFRecordWriter(record_filename)
        for i in range(num_elements):
            # Each record is a length-16 float vector: [16*i, 16*i + 1, ..., 16*i + 15]
            vector = np.arange(i * 16, (i + 1) * 16, dtype=np.float32)
            feature = {'vector': tf.train.Feature(float_list=tf.train.FloatList(value=vector.tolist()))}
            example = tf.train.Example(features=tf.train.Features(feature=feature))
            writer.write(example.SerializeToString())
        writer.close()

    filename_queue = tf.train.string_input_producer(
        ["vectors.tfrecords"],
        shuffle=False,
        seed=int(time.time()),
        num_epochs=epoch_number)

    def read_and_decode(filename_queue):
        reader = tf.TFRecordReader()
        _, serialized_example = reader.read(filename_queue)
        return serialized_example

    if enqueue_many:
        # One shared reader, read enqueue_many_size times per enqueue.
        reader = tf.TFRecordReader()
        queue_batch = []
        for i in range(enqueue_many_size):
            _, serialized_example = reader.read(filename_queue)
            queue_batch.append(serialized_example)
        batch_serialized_example = tf.train.batch(
            [queue_batch],
            batch_size=batch_size,
            num_threads=thread_number,
            capacity=capacity,
            enqueue_many=True)
    else:
        serialized_example = read_and_decode(filename_queue)
        batch_serialized_example = tf.train.batch(
            [serialized_example],
            batch_size=batch_size,
            num_threads=thread_number,
            capacity=capacity)

    features = tf.parse_example(
        batch_serialized_example,
        features={
            "vector": tf.FixedLenFeature([16], tf.float32),
        })

    batch_values = features["vector"]

    init_op = tf.global_variables_initializer()

    sess = tf.Session()

    sess.run(init_op)
    sess.run(tf.local_variables_initializer())

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord, sess=sess)

    try:
        while not coord.should_stop():
            f1 = sess.run([batch_values])
            print(f1)
    except tf.errors.OutOfRangeError:
        print("Done training after reading all data")
    finally:
        coord.request_stop()
        print("coord stopped")

    coord.join(threads)
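
The TFRecords file only needs to be generated once before running the script; the element count below is arbitrary and only for illustration.

    # One-off data generation (the element count of 64 is an arbitrary choice).
    generateNumbersTFRecords(".", 64)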

When working in the enqueue_many setting, I want to prevent the two calls to the reader from returning the same TFRecord. The expected behavior is sequential vectors [[0, 1, 2, 3 ... 15], [16, 17 ...] ...], but I am getting [[0, 1, 2, 3 ... 15], [0, 1, 2, 3 ... 15], [16, 17 ...], [16, 17 ...] ...].

My output is:

    [array([[  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,
              11.,  12.,  13.,  14.,  15.],
            [  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,
              11.,  12.,  13.,  14.,  15.],
            [ 16.,  17.,  18.,  19.,  20.,  21.,  22.,  23.,  24.,  25.,  26.,
              27.,  28.,  29.,  30.,  31.],
            [ 16.,  17.,  18.,  19.,  20.,  21.,  22.,  23.,  24.,  25.,  26.,
              27.,  28.,  29.,  30.,  31.]], dtype=float32)]
    [array([[ 32.,  33.,  34.,  35.,  36.,  37.,  38.,  39.,  40.,  41.,  42.,
              43.,  44.,  45.,  46.,  47.],
            [ 32.,  33.,  34.,  35.,  36.,  37.,  38.,  39.,  40.,  41.,  42.,
              43.,  44.,  45.,  46.,  47.],
            [ 48.,  49.,  50.,  51.,  52.,  53.,  54.,  55.,  56.,  57.,  58.,
              59.,  60.,  61.,  62.,  63.],
            [ 48.,  49.,  50.,  51.,  52.,  53.,  54.,  55.,  56.,  57.,  58.,
              59.,  60.,  61.,  62.,  63.]], dtype=float32)]
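
To rule out duplicates in the file itself, the written records can also be counted directly (a quick check, assuming vectors.tfrecords sits in the working directory):

    # Sanity check on the data file (not part of the pipeline above): the
    # serialized records written to disk should all be distinct.
    records = list(tf.python_io.tf_record_iterator("vectors.tfrecords"))
    print(len(records), len(set(records)))  # equal counts => no duplicates on disk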

1 Answer:

Answer 0 (score: 0)

I filed this as an issue on TensorFlow's GitHub and, luckily, @yaroslavvb answered quickly and gave me the solution. In case you get stuck where I was: the problem is related to the optimization options. It is a known bug in TF 1.0 and it has already been fixed in the master branch.

You can find more information here: https://github.com/tensorflow/tensorflow/issues/7916
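
If upgrading past TF 1.0 is not an option, a possible workaround (a sketch based on the nature of the bug, not quoted from the issue) is to create the session with graph optimizations turned off, so the duplicated read ops are not folded together:

    # Sketch (assumption): disable graph-level optimizations on TF 1.0 so the
    # two reader.read() ops in the enqueue_many branch are not de-duplicated.
    config = tf.ConfigProto(
        graph_options=tf.GraphOptions(
            optimizer_options=tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0)))
    sess = tf.Session(config=config)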