My data is stored in tfrecords files. The simple code below iterates over and batches the images with the tf.data.Dataset API. However, the time taken per 200 batches keeps increasing. Why does this happen, and how can I fix it?
import tensorflow as tf
import time
sess = tf.Session()
dataset = tf.data.TFRecordDataset('/tmp/data/train.tfrecords')
dataset = dataset.repeat()
dataset = dataset.batch(3)
iterator = dataset.make_one_shot_iterator()
prev_step = time.time()
for step in range(10000):
    tensors = iterator.get_next()
    fetches = sess.run(tensors)
    if step % 200 == 0:
        print("Step %6i time since last %7.5f" % (step, time.time() - prev_step))
        prev_step = time.time()
This prints the following timings:
Step 0 time since last 0.01432
Step 200 time since last 1.85303
Step 400 time since last 2.15448
Step 600 time since last 2.65473
Step 800 time since last 3.15646
Step 1000 time since last 3.72434
Step 1200 time since last 4.34447
Step 1400 time since last 5.11210
Step 1600 time since last 5.87102
Step 1800 time since last 6.61459
Step 2000 time since last 7.57238
Step 2200 time since last 8.33060
Step 2400 time since last 9.37795
The tfrecords file contains MNIST images written following this HowTo from the Tensorflow doc's. To narrow the problem down, I reproduced the code so that it reads the raw images from disk instead; in that case, the time per 200 batches stays constant.
Now my question is:

Solved! Answering my own question: move get_next() outside of the loop.
Answer 0 (score: 3)
Solved: move get_next() out of the loop.
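Why this helps: in TensorFlow 1.x graph mode, every call to iterator.get_next() adds a new op to the graph, so calling it inside the loop makes the graph grow on each step and each sess.run() becomes progressively more expensive. Below is a minimal sketch of the corrected loop, based on the question's own code (the same file path, batch size, and print interval are assumed):

import tensorflow as tf
import time

sess = tf.Session()

dataset = tf.data.TFRecordDataset('/tmp/data/train.tfrecords')
dataset = dataset.repeat()
dataset = dataset.batch(3)
iterator = dataset.make_one_shot_iterator()

# Build the get_next op once, outside the loop, so the graph stops growing.
next_batch = iterator.get_next()

prev_step = time.time()
for step in range(10000):
    fetches = sess.run(next_batch)  # only executes the already-built op
    if step % 200 == 0:
        print("Step %6i time since last %7.5f" % (step, time.time() - prev_step))
        prev_step = time.time()

With this change, each iteration only executes the existing op instead of adding new nodes, and the per-200-batch time should stay flat.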