TensorFlow Dataset/Iterator for evaluating train and test data in a CNN

Date: 2018-04-17 11:05:23

Tags: tensorflow machine-learning neural-network deep-learning conv-neural-network

Setup: I want to predict values by training a CNN on input batches in a regression setting. I also want to evaluate and compute the loss after every epoch, so I need to switch between datasets at runtime.

Input: [num_examples, height, width, channels] -> [num_examples, y]

I want to use the new Dataset API, because I want to avoid feeding the batches myself during training.

I do not want to embed my dataset in the computation graph, since it is larger than 2 GB, but it is small enough to keep in memory.
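Because the arrays are only handed over when the iterator initializers run, the setup below assumes placeholders for the data and the batch size. A minimal sketch of what those placeholders could look like (the names x, y and batch_size match the feed_dict calls further down; the dtypes and shapes are my assumption):

import tensorflow as tf

# Hypothetical image dimensions, replace with the real shape
HEIGHT, WIDTH, CHANNELS = 64, 64, 3

# The arrays are fed only when the iterator initializers run,
# so they never end up as constants inside the GraphDef
x = tf.placeholder(tf.float32, shape=[None, HEIGHT, WIDTH, CHANNELS])
y = tf.placeholder(tf.float32, shape=[None, 1])
batch_size = tf.placeholder(tf.int64, shape=[])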

Here is my current setup:

def initialize_datasets(x, y,...):
    # Training set: shuffle, repeat for `epochs` passes, then batch
    dataset_train = tf.data.Dataset.from_tensor_slices((x, y))
    dataset_train = dataset_train.apply(tf.contrib.data.shuffle_and_repeat(buffer_size=examples_train, count=epochs))
    dataset_train = dataset_train.batch(batch_size)

    # Test set: shuffle, repeat indefinitely (count=-1), then batch
    dataset_test = tf.data.Dataset.from_tensor_slices((x, y))
    dataset_test = dataset_test.apply(tf.contrib.data.shuffle_and_repeat(buffer_size=examples_test, count=-1))
    dataset_test = dataset_test.batch(batch_size)

    # Initializable iterators, one per dataset
    iterator_train = dataset_train.make_initializable_iterator()
    iterator_test = dataset_test.make_initializable_iterator()

    return iterator_train, iterator_test


def get_input_batch_data(testing, iterator_train, iterator_test):
    # Switch between the test and train batch depending on the `testing` flag
    features, labels = tf.cond(testing, lambda: iterator_test.get_next(), lambda: iterator_train.get_next())
    return features, labels

Then in my model() function:

# 1) create the iterators
iterator_train, iterator_test = initialize_datasets(x, y, ...)

# 2) select the input batch depending on the `testing` flag
features, labels = get_input_batch_data(testing, iterator_train, iterator_test)

# forward pass, loss, etc
...

with tf.Session() as sess:
    # initialize with train data, trainX[num_examples, height, width, channels]
    sess.run(iterator_train.initializer,
             feed_dict={x: trainX, y: trainY, batch_size: batchsize})

    # initialize with test data
    sess.run(iterator_test.initializer,
             feed_dict={x: testX, y: testY, batch_size: NUM_EXAMPLES_TEST})

    for epoch in range(EPOCHS):
        for batch in range(NUM_BATCHES):
            _, batch_loss = sess.run([train_step, loss],
                                     feed_dict={testing: False, i: iters_total, pkeep: p_keep})

        # after 1 epoch, calculate the loss on the whole test data set
        epoch_test_loss = sess.run(loss,
                                   feed_dict={testing: True, i: iters_total, pkeep: 1})

This is the output:

Iter: 44, Epoch: 0 (8.46s), Train-Loss: 103011.18, Test-Loss: 100162.34
Iter: 89, Epoch: 1 (4.17s), Train-Loss: 93699.51, Test-Loss: 92130.21
Iter: 134, Epoch: 2 (4.13s), Train-Loss: 90217.82, Test-Loss: 88978.74
Iter: 179, Epoch: 3 (4.14s), Train-Loss: 88503.13, Test-Loss: 87515.81
Iter: 224, Epoch: 4 (4.18s), Train-Loss: 87336.62, Test-Loss: 86486.40
Iter: 269, Epoch: 5 (4.10s), Train-Loss: 86388.38, Test-Loss: 85637.64
Iter: 314, Epoch: 6 (4.14s), Train-Loss: 85534.52, Test-Loss: 84858.43
Iter: 359, Epoch: 7 (4.29s), Train-Loss: 84693.19, Test-Loss: 84074.78
Iter: 404, Epoch: 8 (4.20s), Train-Loss: 83973.64, Test-Loss: 83314.47
Iter: 449, Epoch: 9 (4.40s), Train-Loss: 83149.73, Test-Loss: 82541.73

Questions:

  • This output suggests to me that my dataset pipeline is not working, because the test loss seems to be computed on the train data (or vice versa); the two losses are too close to each other
  • Which kind of iterator and dataset should I use for this task?

I have also uploaded the whole model here: https://github.com/toemm/TF-CNN-regression/blob/master/BA-CNN_so.ipynb

1 answer:

Answer 0: (score: 2)

The obvious answer is: you do not want to do this within the same graph, because the evaluation graph is different from the training graph:

  • Dropout uses a fixed multiplier (no sampling)
  • BatchNorm uses the accumulated statistics and does not update the EMA (see the sketch below)
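To make those two differences concrete, here is a minimal sketch (my own addition, not part of the original answer) of a layer stack whose behaviour is controlled by an is_train flag, using the standard TF 1.x tf.layers API:

import tensorflow as tf

def layer_stack(x, is_train):
  # Dropout: samples a random mask only when is_train is True;
  # at evaluation time the op is simply the identity (no sampling).
  h = tf.layers.dropout(x, rate=0.5, training=is_train)
  # BatchNorm: uses per-batch statistics and creates EMA update ops
  # (collected in tf.GraphKeys.UPDATE_OPS) when is_train is True;
  # uses the accumulated moving averages when is_train is False.
  h = tf.layers.batch_normalization(h, training=is_train)
  return h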

So the solution really is to build two different things, e.g.:

import numpy as np
import tensorflow as tf


X_train = tf.constant(np.ones((100, 2)), 'float32')
X_val = tf.constant(np.zeros((10, 2)), 'float32')

iter_train = tf.data.Dataset.from_tensor_slices(
    X_train).make_initializable_iterator()
iter_val = tf.data.Dataset.from_tensor_slices(
    X_val).make_initializable_iterator()


def graph(x, is_train=True):
  # Dummy model; the real CNN would be built here, with `is_train`
  # switching dropout/batchnorm between train and eval behaviour.
  return x


# Two separate output ops, one per dataset; no tf.cond switching needed.
output_train = graph(iter_train.get_next(), is_train=True)
output_val = graph(iter_val.get_next(), is_train=False)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(iter_train.initializer)
  sess.run(iter_val.initializer)

  for train_iter in range(100):
    print(sess.run(output_train))

  for val_iter in range(10):
    print(sess.run(output_val))
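Applied to the question's setting, one possible way (my sketch, not part of the original answer) to extend this skeleton is to build the model twice with shared weights and to drain the test iterator once per epoch. The variable sharing via tf.AUTO_REUSE, the OutOfRangeError loop, and all names below are assumptions:

import numpy as np
import tensorflow as tf

EPOCHS, BATCH = 3, 16

# Hypothetical (features, labels) data in the question's structure.
x_train = np.random.rand(100, 2).astype('float32')
y_train = np.random.rand(100, 1).astype('float32')
x_val = np.random.rand(10, 2).astype('float32')
y_val = np.random.rand(10, 1).astype('float32')

ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(100).batch(BATCH)
ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(BATCH)
iter_train = ds_train.make_initializable_iterator()
iter_val = ds_val.make_initializable_iterator()

def model(x, y, is_train):
  # Weights are shared between the train and eval graphs via AUTO_REUSE.
  with tf.variable_scope('cnn', reuse=tf.AUTO_REUSE):
    h = tf.layers.dropout(x, rate=0.5, training=is_train)
    pred = tf.layers.dense(h, 1)          # stand-in for the real CNN
    return tf.reduce_mean(tf.squared_difference(pred, y))

x_t, y_t = iter_train.get_next()
x_v, y_v = iter_val.get_next()
loss_train = model(x_t, y_t, is_train=True)
loss_val = model(x_v, y_v, is_train=False)
train_step = tf.train.AdamOptimizer(1e-3).minimize(loss_train)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  for epoch in range(EPOCHS):
    sess.run(iter_train.initializer)
    while True:                           # one pass over the training data
      try:
        sess.run(train_step)
      except tf.errors.OutOfRangeError:
        break

    sess.run(iter_val.initializer)
    val_losses = []
    while True:                           # drain the whole test set
      try:
        val_losses.append(sess.run(loss_val))
      except tf.errors.OutOfRangeError:
        break
    print('epoch %d, test loss %.4f' % (epoch, np.mean(val_losses)))

Because the train and eval losses are separate ops over separate iterators, no tf.cond or `testing` placeholder is needed, and the per-epoch test loss is genuinely computed over the whole test set.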