TensorFlow validation and training accuracy are identical even though the validation and training sets differ

Asked: 2019-03-28 20:21:56

Tags: python tensorflow

I am trying to output the validation accuracy of a convolutional neural network, using the test dataset as the validation data (although I do realize I would normally use a separate validation set). Now, even though the two datasets are different, the training accuracy acc and the validation accuracy val_acc come out identical.

Here is the code (adapted from https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l04c01_image_classification_with_cnns.ipynb):

!pip install -U tensorflow_datasets
from __future__ import absolute_import, division, print_function

# Import TensorFlow and TensorFlow Datasets
import tensorflow as tf
import tensorflow_datasets as tfds
tf.logging.set_verbosity(tf.logging.ERROR)

import math
import numpy as np
import matplotlib.pyplot as plt

import tqdm
import tqdm.auto
tqdm.tqdm = tqdm.auto.tqdm
tf.enable_eager_execution()  

dataset, metadata = tfds.load('fashion_mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
num_train_examples = metadata.splits['train'].num_examples
num_test_examples = metadata.splits['test'].num_examples
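
# Scale pixel values from the [0, 255] byte range into the [0, 1] float range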
def normalize(images, labels):
    images = tf.cast(images, tf.float32)
    images /= 255
    return images, labels
train_dataset =  train_dataset.map(normalize)
test_dataset  =  test_dataset.map(normalize)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3,3), padding='same', activation=tf.nn.relu,
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2), strides=2),
    tf.keras.layers.Conv2D(64, (3,3), padding='same', activation=tf.nn.relu),
    tf.keras.layers.MaxPooling2D((2, 2), strides=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10,  activation=tf.nn.softmax)
    ])

model.compile(optimizer='adam', 
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])

BATCH_SIZE = 32
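# repeat() makes the training stream infinite; fit() below reads steps_per_epoch
# batches per epoch, and shuffle() uses a buffer covering the full training set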
train_dataset = train_dataset.repeat().shuffle(num_train_examples).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
model.fit(train_dataset, epochs=3, 
          validation_data=test_dataset, validation_steps = math.ceil(num_test_examples/BATCH_SIZE),
          verbose = 2,
          steps_per_epoch=math.ceil(num_train_examples/BATCH_SIZE))
test_loss, test_accuracy = model.evaluate(test_dataset, steps=math.ceil(num_test_examples/BATCH_SIZE))
print('Accuracy on test dataset:', test_accuracy)

The output is:

Epoch 1/3
313/313 [==============================] - 4s 12ms/step - loss: 0.3322 - acc: 0.8766
 - 49s - loss: 0.4049 - acc: 0.8516 - val_loss: 0.3322 - val_acc: 0.8766
Epoch 2/3
313/313 [==============================] - 4s 12ms/step - loss: 0.3150 - acc: 0.8890
 - 33s - loss: 0.2631 - acc: 0.9044 - val_loss: 0.3150 - val_acc: 0.8890
Epoch 3/3
313/313 [==============================] - 4s 12ms/step - loss: 0.2484 - acc: 0.9087
 - 32s - loss: 0.2159 - acc: 0.9207 - val_loss: 0.2484 - val_acc: 0.9087
313/313 [==============================] - 4s 12ms/step - loss: 0.2484 - acc: 0.9087

Accuracy on test dataset: 0.9087

As you can see, acc and val_acc are identical, and I would love to know why.
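
One sanity check I can think of (a minimal sketch reusing the variables defined above; this is not part of the run whose output is shown) would be to evaluate the trained model on the training and test pipelines separately, instead of reading the numbers off the fit() progress bars:

# Explicit post-training evaluation on each split (sketch, not from the run above).
# train_dataset is infinite because of repeat(), so steps= limits it to one pass.
train_loss, train_acc = model.evaluate(
    train_dataset, steps=math.ceil(num_train_examples / BATCH_SIZE))
test_loss, test_acc = model.evaluate(
    test_dataset, steps=math.ceil(num_test_examples / BATCH_SIZE))
print('train accuracy:', train_acc)
print('test accuracy:', test_acc)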

In case it matters, tf.__version__ is 1.13.1.
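
And since I mentioned that I would normally use a separate validation set rather than the test set, here is a minimal sketch of how I could carve one out of the training split with take()/skip(); the 10,000-example hold-out size is an arbitrary choice of mine, not something used in the code above:

# Hypothetical alternative input pipeline with a real validation split.
NUM_VAL = 10000  # arbitrary hold-out size, not from the run above
raw_train = dataset['train'].map(normalize)
val_dataset = raw_train.take(NUM_VAL).batch(BATCH_SIZE)
train_subset = (raw_train.skip(NUM_VAL)
                .repeat()
                .shuffle(num_train_examples - NUM_VAL)
                .batch(BATCH_SIZE))

fit() would then take train_subset with validation_data=val_dataset, steps_per_epoch=math.ceil((num_train_examples - NUM_VAL)/BATCH_SIZE) and validation_steps=math.ceil(NUM_VAL/BATCH_SIZE).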

0 Answers:

There are no answers yet.