TensorFlow dataset repeat and model accuracy

Asked: 2018-04-01 16:23:53

Tags: tensorflow tensorflow-datasets

I have a dataset I want to train on. For each batch I want to select random rows from the entire dataset, rather than repeating exactly the same batches over and over.

The problem I'm running into is that every time I run the code, the accuracy starts out low and then increases. I think this is because each run is training on a fresh sequence of batches. If it were continuing training on the full dataset across runs, the accuracy shouldn't reset every time I start the program. Am I wrong? It could be that my model isn't actually being saved, but I do save and restore it.

Run 1

Iter= 2000, Average Loss= 0.105903, Average Accuracy= 79.21%
Iter= 4000, Average Loss= 0.090152, Average Accuracy= 73.22%
Iter= 6000, Average Loss= 0.100107, Average Accuracy= 85.10%
Iter= 8000, Average Loss= 0.106910, Average Accuracy= 95.63%

Run 2

Iter= 2000, Average Loss= 0.105059, Average Accuracy= 81.15%
Iter= 4000, Average Loss= 0.105170, Average Accuracy= 92.25%
Iter= 6000, Average Loss= 0.106881, Average Accuracy= 95.68%

Run 3

Iter= 2000, Average Loss= 0.102585, Average Accuracy= 79.52%
Iter= 4000, Average Loss= 0.079520, Average Accuracy= 75.09%
Iter= 6000, Average Loss= 0.077820, Average Accuracy= 73.63%

Code

dataset = tf.data.TFRecordDataset(input_tfrecords)
dataset = dataset.map(parse)
dataset = dataset.shuffle(buffer_size=100)
dataset = dataset.batch(batch_size)
dataset = dataset.repeat()
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

saver.restore(session, model_location)
session.run(iterator.initializer)  # required: an initializable iterator must be initialized before get_next() is run

while step < training_iters:
    features, one_hot_labels = session.run(next_element)
    _, acc, loss, logits = session.run([optimizer, accuracy, cost, pred], feed_dict={x: features, y: one_hot_labels})

    loss_total += loss
    acc_total += acc
    if (step+1) % display_step == 0:
        saver.save(session, model_location)
        print("Iter= " + str(step+1) + ", Average Loss= " + \
              "{:.6f}".format(loss_total/display_step) + ", Average Accuracy= " + \
              "{:.2f}%".format(100*acc_total/display_step))
        acc_total = 0
        loss_total = 0
    step += 1
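One subtlety in the pipeline above: `shuffle(buffer_size=100)` only shuffles within a sliding window of 100 records, so for a dataset much larger than 100 the batches are not uniform samples of the whole dataset; to get that, `buffer_size` should be at least the dataset size. A minimal pure-Python simulation of the windowed-shuffle behaviour (mirroring the semantics of `tf.data.Dataset.shuffle`, not using TensorFlow itself):

```python
import random

def windowed_shuffle(records, buffer_size, seed=None):
    """Simulate tf.data.Dataset.shuffle: keep a buffer of `buffer_size`
    elements and emit a randomly chosen one as each new record arrives."""
    rng = random.Random(seed)
    buffer, out = [], []
    for r in records:
        buffer.append(r)
        if len(buffer) > buffer_size:
            out.append(buffer.pop(rng.randrange(len(buffer))))
    while buffer:  # drain the remaining buffered records
        out.append(buffer.pop(rng.randrange(len(buffer))))
    return out

shuffled = windowed_shuffle(range(1000), buffer_size=100, seed=0)
# The last record can never be emitted earlier than output position 899,
# because it only enters the buffer after 899 outputs have been produced:
print(shuffled.index(999) >= 899)  # True
```

This illustrates why a small buffer gives only "local" randomness: late records can never appear early in the stream, so consecutive batches are biased toward nearby regions of the input file.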

1 Answer:

Answer 0 (score: 1)

I don't think there's anything wrong with what you describe. Naturally, accuracy improves as the network learns, and your code does appear to save and restore the network correctly between runs. The third run may be overfitting, or your learning rate may be too high, which can make the model diverge or oscillate. Are you using learning rate annealing? Also, for clarity, I would usually restore the training iteration count as well, so the printed `Iter=` counter continues across runs instead of restarting at zero.
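As a sketch of that last point: persist the iteration count together with the model so a restarted run resumes its counter. In TF 1.x this is typically a `tf.Variable` (conventionally named `global_step`) included in the checkpoint and passed to `saver.save(..., global_step=...)`. The framework-agnostic sketch below just uses a dict and `pickle`; all names are illustrative, not part of the question's code:

```python
import os
import pickle
import tempfile

def save_checkpoint(path, weights, step):
    # Persist model parameters and the training step together,
    # so a restarted run resumes its iteration counter.
    with open(path, "wb") as f:
        pickle.dump({"weights": weights, "step": step}, f)

def load_checkpoint(path):
    # Return (weights, step); start from scratch if no checkpoint exists.
    if not os.path.exists(path):
        return None, 0
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["weights"], ckpt["step"]

path = os.path.join(tempfile.mkdtemp(), "model.ckpt")
save_checkpoint(path, weights={"w": [0.1, 0.2]}, step=6000)
weights, step = load_checkpoint(path)
print(step)  # 6000 -- training resumes at iteration 6000, not 0
```

With the step restored, the running averages in the log line up across runs, making it much easier to tell genuine regressions (like the drop in Run 3) apart from the normal warm-up of a fresh counter.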