Why does Keras give me different results between model.evaluate, model.predict and model.fit?

Asked: 2020-05-17 12:23:35

Tags: python tensorflow2.0 tf.keras

I am working on a project with a dual-output model based on ResNet50. One output is for a regression task and the second output is for a classification task.

My main issue concerns model evaluation. During training, I get good results on both outputs on the validation set:
- Combined loss = 0.32507268732786176
- Val accuracy = 0.97375
- Val MSE: 4.1454763

On the same set, model.evaluate gives me the following results:
- Combined loss = 0.33064378452301024
- Val accuracy = 0.976
- Val MSE = 1.2375486

model.predict gives me completely different results (I compute the metrics with scikit-learn):
- Val accuracy = 0.45875
- Val MSE: 43.555958365743805
These last values change on every run of predict.

I am working with TF 2.0. Here is my code:

valid_generator=datagen.flow_from_dataframe(dataframe=df, 
                                            directory=PATH, 
                                            x_col="X", 
                                            y_col=["yReg","yCls"],  
                                            class_mode="multi_output", 
                                            target_size=(IMG_SIZE, IMG_SIZE), 
                                            batch_size=batch_size,
                                            subset="validation",
                                            shuffle=False,
                                            workers = 0)
def generate_data_generator(generator, train=True):
    while True:
        Xi, yi = generator.next()  # pull batches from the generator passed in
        y2 = []
        for e in yi[1]:
            y2.append(to_categorical(e, 7))
        y2 = np.array(y2)
        if train: # Augmentation for training only
            Xi = Xi.astype('uint8')
            Xi_aug = seq(images=Xi) # imgaug lib needs uint8
            Xi_aug = Xi_aug.astype('float32')
            Xi_aug = preprocess_input(Xi_aug) # resnet50 preprocessing
            yield Xi_aug, [yi[0], y2]
        else: # Validation
            yield preprocess_input(Xi), [yi[0], y2]
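
The per-batch one-hot step above can be sanity-checked in isolation. A minimal numpy equivalent of to_categorical(e, 7) (hypothetical helper, numpy only):

```python
import numpy as np

def one_hot(labels, num_classes):
    # numpy equivalent of keras.utils.to_categorical for integer labels
    return np.eye(num_classes, dtype="float32")[np.asarray(labels)]

y2 = one_hot([0, 3, 6], 7)
print(y2.shape)  # (3, 7)
```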


model.fit_generator(generator=generate_data_generator(train_generator, True),
    steps_per_epoch=STEP_SIZE_TRAIN,
    validation_data=generate_data_generator(valid_generator, False),
    validation_steps=STEP_SIZE_VALID,
    verbose=1, 
    epochs=50, 
    callbacks=[checkpoint, tfBoard],
    )
evalu = model.evaluate_generator(generate_data_generator(valid_generator, False), steps=STEP_SIZE_VALID)
print(model.metrics_names)
print(evalu)
preds = model.predict_generator(generate_data_generator(valid_generator, False), steps=STEP_SIZE_VALID, workers = 0)
labels = valid_generator.labels

print("MSE error:", me.mean_squared_error(labels[0], preds[0]))
print("Accuracy:", me.accuracy_score(labels[1], preds[1].argmax(axis=1)))

What am I doing wrong?

Thanks for your help!

1 answer:

Answer 0 (score: 0)

You are computing accuracy using only labels[1], preds[1], i.e. a single data point, rather than all of the data points. To compare the result against model.evaluate_generator, you need to compute accuracy over all of the data points. Also, you computed the MSE on the labels[0], preds[0] data points but the accuracy on the labels[1], preds[1] data points; consider all of the data points in both cases.
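
As a toy illustration (hypothetical random arrays, numpy only) of computing both metrics over the full validation set rather than a slice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# preds[0]: regression head; preds[1]: class probabilities (7 classes)
preds = [rng.normal(size=n), rng.random(size=(n, 7))]
labels = [rng.normal(size=n), rng.integers(0, 7, size=n)]

# MSE over ALL regression targets
mse = np.mean((labels[0] - preds[0]) ** 2)

# Accuracy over ALL classification targets: argmax over the class
# axis, then element-wise comparison (what sklearn's accuracy_score does)
acc = np.mean(preds[1].argmax(axis=1) == labels[1])
print(mse, acc)
```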

Below is an example for binary classification, where I do not do any data augmentation on the validation data. You can build a validation generator without augmentation and set shuffle=False so that it produces the same batches every time; that way you get matching results from model.evaluate_generator and model.predict_generator.
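
A toy numpy sketch of why shuffle=False matters: if the batches are shuffled but you score against the generator's unshuffled label list, even perfect predictions look like chance:

```python
import numpy as np

rng = np.random.default_rng(1)
labels = np.arange(10)       # order reported by generator.labels
perm = rng.permutation(10)   # order the batches are actually yielded in

perfect_preds = labels[perm]  # predictions that are exactly right per image
acc_misaligned = np.mean(perfect_preds == labels)     # scored vs wrong order
acc_aligned = np.mean(perfect_preds == labels[perm])  # scored vs true order
print(acc_misaligned, acc_aligned)
```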

Validation generator -

validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data

val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                              directory=validation_dir,
                                                              shuffle=False,
                                                              seed=10,
                                                              target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                              class_mode='binary')

Below are the accuracy results, which match exactly -

model.fit_generator

history = model.fit_generator(
          train_data_gen,
          steps_per_epoch=total_train // batch_size,
          epochs=5,
          validation_data=val_data_gen,
          validation_steps=total_val // batch_size)

Output -

Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/5
20/20 [==============================] - 27s 1s/step - loss: 0.8691 - accuracy: 0.4995 - val_loss: 0.6850 - val_accuracy: 0.5000
Epoch 2/5
20/20 [==============================] - 26s 1s/step - loss: 0.6909 - accuracy: 0.5145 - val_loss: 0.6880 - val_accuracy: 0.5000
Epoch 3/5
20/20 [==============================] - 26s 1s/step - loss: 0.6682 - accuracy: 0.5345 - val_loss: 0.6446 - val_accuracy: 0.6320
Epoch 4/5
20/20 [==============================] - 26s 1s/step - loss: 0.6245 - accuracy: 0.6180 - val_loss: 0.6214 - val_accuracy: 0.5920
Epoch 5/5
20/20 [==============================] - 27s 1s/step - loss: 0.5696 - accuracy: 0.6795 - val_loss: 0.6468 - val_accuracy: 0.6270

model.evaluate_generator

evalu = model.evaluate_generator(val_data_gen)
print(model.metrics_names)
print(evalu)

Output -

['loss', 'accuracy']
[0.646793782711029, 0.6269999742507935]

model.predict_generator

from sklearn.metrics import mean_squared_error, accuracy_score
preds = model.predict_generator(val_data_gen)
y_pred = tf.where(preds<=0.5,0,1)

labels = val_data_gen.labels
y_true = labels

# confusion_matrix(y_true, y_pred)
print("Accuracy:", accuracy_score(y_true, y_pred))

Output -

Accuracy: 0.627

Full code for your reference -

%tensorflow_version 2.x
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam

import os
import numpy as np
import matplotlib.pyplot as plt

_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'

path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)

PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')

train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')

train_cats_dir = os.path.join(train_dir, 'cats')  # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')  # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')  # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')  # directory with our validation dog pictures

num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))

num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))

total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val

batch_size = 100
epochs = 5
IMG_HEIGHT = 150
IMG_WIDTH = 150

train_image_generator = ImageDataGenerator(rescale=1./255,brightness_range=[0.5,1.5]) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='binary')

val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                              directory=validation_dir,
                                                              shuffle=False,
                                                              seed=10,
                                                              target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                              class_mode='binary')

model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])

model.compile(optimizer="adam", 
          loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
          metrics=['accuracy'])

history = model.fit_generator(
          train_data_gen,
          steps_per_epoch=total_train // batch_size,
          epochs=epochs,
          validation_data=val_data_gen,
          validation_steps=total_val // batch_size)

evalu = model.evaluate_generator(val_data_gen, steps=total_val // batch_size)
print(model.metrics_names)
print(evalu)

from sklearn.metrics import mean_squared_error, accuracy_score
#val_data_gen.reset()
preds = model.predict_generator(val_data_gen, steps=total_val // batch_size)
y_pred = tf.where(preds<=0.5,0,1)

labels = val_data_gen.labels
y_true = labels

test_labels = []

for i in range(0,10):
    test_labels.extend(np.array(val_data_gen[i][1]))

# confusion_matrix(y_true, y_pred)
print("Accuracy:", accuracy_score(test_labels, y_pred))

Also bear in mind that fit_generator, evaluate_generator and predict_generator are deprecated and will be removed in a future version. Per the update instructions: please use Model.fit, Model.evaluate and Model.predict respectively, all of which support generators.
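
A minimal self-contained sketch of the replacement calls (a tiny stand-in model on random data, not the model from the question):

```python
import numpy as np
import tensorflow as tf

# Stand-in data and model, just to demonstrate the non-deprecated API
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32,)).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Model.fit / Model.evaluate / Model.predict accept generators and
# tf.data.Dataset objects directly, replacing the *_generator variants
model.fit(x, y, epochs=1, verbose=0)
evalu = model.evaluate(x, y, verbose=0)  # [loss, accuracy]
preds = model.predict(x, verbose=0)
```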

Hope this answers your question. Happy learning.