Keras: low training loss but high evaluation loss

Time: 2021-01-18 04:50:12

Tags: python tensorflow keras conv-neural-network

I am new to Keras. This code classifies brain MRI images as having a tumor or not. When I run model.evaluate() to check the accuracy, I get a very high loss value, even though the loss was low (well under 1) while training the model, and the following warning appears:

WARNING:tensorflow:6 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x00000221AC143AF0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.

Most of the code was copied from this link.

The full code is below:

import numpy as np
import matplotlib.pyplot as plt
import os
import cv2

import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D

def load_data(DATADIR, IMG_SIZE, CATEGORIES):
    data = []
    for category in CATEGORIES:  # loop over the two classes: "no" and "yes"
        
        path = os.path.join(DATADIR,category)  # create path to the "no"/"yes" folder
        class_num = CATEGORIES.index(category)  # get the classification (0 or 1): 0 = no tumor, 1 = tumor

        for img in os.listdir(path):  # iterate over each image in this class
            try:
                img_array = cv2.imread(os.path.join(path,img), cv2.IMREAD_GRAYSCALE)  # read the image as a grayscale array
                
                img_array = cv2.medianBlur(img_array,5)  # denoise with a 5x5 median filter
                
                img_array = cv2.adaptiveThreshold(img_array,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,11,2)  # binarize with adaptive Gaussian thresholding
                
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))  # resize so all samples have the same size
                
                data.append([new_array, class_num])  # add this sample together with its label
            except Exception as e:  # in the interest of keeping the output clean, skip unreadable images
                pass
            #except OSError as e:
            #    print("OSErrroBad img most likely", e, os.path.join(path,img))
            #except Exception as e:
            #    print("general exception", e, os.path.join(path,img))
    return data

TRAIN_DATADIR = r"F:\Train"  # raw strings so backslashes in Windows paths are not treated as escapes
TEST_DATADIR = r"F:\Test"

CATEGORIES = ["no", "yes"]
IMG_SIZE = 128
training_data = load_data(TRAIN_DATADIR, IMG_SIZE, CATEGORIES)
testing_data = load_data(TEST_DATADIR, IMG_SIZE, CATEGORIES)

print(len(training_data))

import random
random.shuffle(training_data)
random.shuffle(testing_data)

X_train = []
y_train = []

for features,label in training_data:
    X_train.append(features)
    y_train.append(label)

#print(X[0].reshape(-1, IMG_SIZE, IMG_SIZE, 1))

X_train = np.asarray(X_train)
y_train = np.asarray(y_train)

X_train = X_train.reshape(-1, IMG_SIZE, IMG_SIZE, 1)  # add a single grayscale channel dimension


X_test = []
y_test = []

for features,label in testing_data:
    X_test.append(features)
    y_test.append(label)

    
X_test = np.asarray(X_test)
y_test = np.asarray(y_test)
#print(X[0].reshape(-1, IMG_SIZE, IMG_SIZE, 1))

X_test = X_test.reshape(-1, IMG_SIZE, IMG_SIZE, 1)  # add a single grayscale channel dimension

X_train = X_train/255.0  # scale pixel values from [0, 255] to [0, 1]


model = Sequential()

model.add(Conv2D(32, (3, 3), input_shape = X_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=10, epochs=15)

score = model.evaluate(X_test, y_test, verbose=1)

1 answer:

Answer 0: (score: 1)

Ignore the warning.

A low training loss combined with a high evaluation loss means your model is overfitting. Stop training once your validation accuracy stops improving.
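
For example, here is a minimal sketch of early stopping with tf.keras.callbacks.EarlyStopping; the validation_split, patience, and epochs values are illustrative assumptions, not from the original post:

from tensorflow.keras.callbacks import EarlyStopping

# Stop when the validation loss stops improving, and keep the best weights seen.
early_stop = EarlyStopping(monitor='val_loss', patience=3,
                           restore_best_weights=True)

model.fit(X_train, y_train,
          batch_size=10, epochs=50,
          validation_split=0.2,   # hold out 20% of the training data for validation
          callbacks=[early_stop])

With restore_best_weights=True, the model is rolled back to the weights from the epoch with the lowest validation loss before you call model.evaluate().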