Should validation_batch_size equal train_batch_size when training a CNN?

Time: 2017-08-29 03:25:11

Tags: tensorflow deep-learning conv-neural-network

I want to save the model checkpoint with the highest accuracy, so after each training step I run one batch of validation data. The training dataset is reused across epochs, but if train_batch_size equals validation_batch_size, the validation dataset will also be cycled through repeatedly, because the validation set is much smaller than the training set. What should I do? Is there actually any problem with reusing the validation set, or should I set a different batch size for it?

MAX_EPOCH = 10
for epoch in range(MAX_EPOCH):
    # training
    train_step = int(80000 / TRAIN_BATCH_SIZE)
    train_loss, train_acc = 0, 0
    for step in range(epoch * train_step, (epoch + 1) * train_step):
        x_train, y_train = sess.run([x_train_batch, y_train_batch])
        train_summary, _, err, ac = sess.run([merged, train_op, loss, acc],
                                             feed_dict={x: x_train, y_: y_train,
                                                        mode: learn.ModeKeys.TRAIN,
                                                        global_step: step})
        train_loss += err
        train_acc += ac
        if (step + 1) % 100 == 0:
            train_writer.add_summary(train_summary, step)
    print("Epoch %d,train loss= %.2f,train accuracy=%.2f%%" % (
        epoch, (train_loss / train_step), (train_acc / train_step * 100.0)))

    # validation
    val_step = int(20000 / VAL_BATCH_SIZE)
    val_loss, val_acc = 0, 0
    for step in range(epoch * val_step, (epoch + 1) * val_step):
        x_val, y_val = sess.run([x_val_batch, y_val_batch])
        val_summary, err, ac = sess.run([merged, loss, acc],
                                        feed_dict={x: x_val, y_: y_val, mode: learn.ModeKeys.EVAL,
                                                   global_step: step})
        val_loss += err
        val_acc += ac
        if (step + 1) % 100 == 0:
            valid_writer.add_summary(val_summary, step)
    print("Epoch %d, validation loss = %.2f, validation accuracy = %.2f%%" % (
        epoch, val_loss / val_step, val_acc / val_step * 100.0))

1 Answer:

Answer 0 (score: 0)

You can use a different batch size during evaluation.
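For example, here is a minimal sketch (the input shape and placeholder names are assumptions for illustration, not taken from the asker's graph): if the batch dimension of the placeholders is left as None, the same graph accepts any batch size, so TRAIN_BATCH_SIZE and VAL_BATCH_SIZE are free to differ.

import tensorflow as tf

# Leaving the first (batch) dimension as None lets the same graph run
# with any batch size at training or evaluation time.
# The image shape [32, 32, 3] is assumed here for illustration.
x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3], name="x")
y_ = tf.placeholder(tf.int64, shape=[None], name="y_")
# ... build the network on x; loss and acc then work for whatever
# batch size is fed through feed_dict.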

That said, you should use the same validation set every time you evaluate the model. Otherwise the results will go up or down simply because the examples you evaluated on happen to be intrinsically easier or harder than those in the previous evaluation.
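One way to guarantee that is to shuffle only the training input pipeline and leave the validation pipeline deterministic. A sketch, under the assumption that x_val_batch / y_val_batch come from a TF1 queue pipeline; train_image, train_label, val_image and val_label are hypothetical per-example tensors (e.g. from a TFRecord reader):

import tensorflow as tf

# Shuffle only the training queue; tf.train.batch keeps the validation
# queue deterministic, so every evaluation cycles through the identical
# 20,000 validation examples.
x_train_batch, y_train_batch = tf.train.shuffle_batch(
    [train_image, train_label], batch_size=TRAIN_BATCH_SIZE,
    capacity=5000, min_after_dequeue=1000)
x_val_batch, y_val_batch = tf.train.batch(
    [val_image, val_label], batch_size=VAL_BATCH_SIZE, capacity=5000)

With a fixed validation set, saving the model with the highest accuracy (the asker's original goal) reduces to a comparison against the best accuracy seen so far. A sketch reusing the asker's epoch-level val_acc and val_step; saver, best_val_acc, and the checkpoint path are new names introduced here for illustration:

saver = tf.train.Saver()   # create once, before the epoch loop
best_val_acc = 0.0

# ... at the end of each epoch, after the validation pass:
if val_acc / val_step > best_val_acc:
    best_val_acc = val_acc / val_step
    saver.save(sess, "checkpoints/best_model.ckpt")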