Keras CNN always predicts the same class

Asked: 2020-07-08 03:51:26

Tags: python tensorflow keras conv-neural-network

Edit: It looks like I wasn't even running the model for long enough, so I'll try that and report back with the results.

I am trying to create a CNN that classifies 3D brain images. However, whenever I run it, the CNN always predicts the same class, and I'm not sure what else I can do to prevent this. I have searched this problem and tried many of the suggested solutions, but none of them worked.

Here is what I have tried so far:

For context, I am classifying between two groups. The total number of images I'm using is 200 3D brain scans (roughly 100 per class). To increase the training set size, I used a custom data-augmentation generator I found on GitHub.

Judging by the learning curves, the accuracy and loss look completely random. In some runs they decrease, in some they increase, and in others they just fluctuate within a range.

Any help would be appreciated!

import os
import csv
import tensorflow as tf  # 2.0
import nibabel as nib
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from keras.models import Model
from keras.layers import Conv3D, MaxPooling3D, Dense, Dropout, Activation, Flatten 
from keras.layers import Input, concatenate
from keras import optimizers
from keras.utils import to_categorical
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
from augmentedvolumetricimagegenerator.generator import customImageDataGenerator
from keras.callbacks import EarlyStopping


# Administrative items
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Where the file is located
path = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline'
folder = os.listdir(path)

target_size = (96, 96, 96)


# creating x - converting images to array
def read_image(path, folder):
    mri = []
    for i in range(len(folder)):
        files = os.listdir(path + '\\' + folder[i])
        for j in range(len(files)):
            image = np.array(nib.load(path + '\\' + folder[i] + '\\' + files[j]).get_fdata())
            image = np.resize(image, target_size)
            image = np.expand_dims(image, axis=3)
            image /= 255.
            mri.append(image)
    return mri

# creating y - one hot encoder
def create_y():
    excel_file = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline_label.xlsx'
    excel_read = pd.read_excel(excel_file)
    excel_array = np.array(excel_read['Label'])
    label = LabelEncoder().fit_transform(excel_array)
    label = label.reshape(len(label), 1)
    onehot = OneHotEncoder(sparse=False).fit_transform(label)
    return onehot

# Splitting image train/test
x = np.asarray(read_image(path, folder))
y = np.asarray(create_y())
x_split, x_test, y_split, y_test = train_test_split(x, y, test_size=.2, stratify=y)
x_train, x_val, y_train, y_val = train_test_split(x_split, y_split, test_size=.25, stratify=y_split)
print(x_train.shape, x_val.shape, x_test.shape, y_train.shape, y_val.shape, y_test.shape)


batch_size = 10
num_classes = len(folder)

inputs = Input((96, 96, 96, 1))
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(inputs)
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(conv1)
pool1 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv1)
drop1 = Dropout(0.5)(pool1)

conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(drop1)
conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(conv2)
pool2 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv2)
drop2 = Dropout(0.5)(pool2)

conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(drop2)
conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(conv3)
pool3 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv3)
drop3 = Dropout(0.5)(pool3)

flat1 = Flatten()(drop3)
dense1 = Dense(128, activation='relu')(flat1)
drop5 = Dropout(0.5)(dense1)
dense2 = Dense(num_classes, activation='sigmoid')(drop5)

model = Model(inputs=[inputs], outputs=[dense2])

opt = optimizers.Adagrad(lr=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])


train_datagen = customImageDataGenerator(
                                         horizontal_flip=True
                                        )

val_datagen = customImageDataGenerator()

training_set = train_datagen.flow(x_train, y_train, batch_size=batch_size)

validation_set = val_datagen.flow(x_val, y_val, batch_size=batch_size)


callbacks = EarlyStopping(monitor='val_loss', patience=3)

history = model.fit_generator(training_set,
                    steps_per_epoch = 10,
                    epochs = 20,
                    validation_steps = 5,
                    callbacks = [callbacks],
                    validation_data = validation_set)

score = model.evaluate(x_test, y_test, batch_size=batch_size)
print(score)


y_pred = model.predict(x_test, batch_size=batch_size)
y_test = np.argmax(y_test, axis=1)
y_pred = np.argmax(y_pred, axis=1)
confusion = confusion_matrix(y_test, y_pred)
map = sns.heatmap(confusion, annot=True)
print(map)


acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.figure(1)
plt.plot(acc)
plt.plot(val_acc)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.title('Accuracy')

plt.figure(2)
plt.plot(loss)
plt.plot(val_loss)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.title('Loss')

You can find the output here: https://i.stack.imgur.com/FF13P.jpg

2 Answers:

Answer 0 (score: 1)

It's hard to help without the dataset itself, but there are a couple of things I would test:

  • I have found that ReLU activations don't work well on dense layers, and that can lead to single-class predictions. Try replacing the relu in the Dense(128) layer with something else (sigmoid, tanh).
  • Dropout is generally not well suited to images; you may want to look at DropBlock instead.
  • The initial learning rate is very low; I would start with something between 1e-3 and 1e-4.
  • A silly thing that happens to me all the time: have you visualized the image/label pairs to make sure every image actually has the correct label?

Again, I'm not sure this will fix everything, but I hope it helps!
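A minimal sketch of the head changes suggested above (this is my illustration, not the answerer's code): tanh instead of relu on the penultimate Dense layer, a softmax output since the question's labels are one-hot, and a 1e-3 starting learning rate. The 128-feature input is a stand-in for the flattened conv features.

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adagrad

# Stand-in for the flattened output of the conv/pooling stack.
features = Input((128,))
x = Dense(128, activation='tanh')(features)   # tanh instead of relu
x = Dropout(0.5)(x)
out = Dense(2, activation='softmax')(x)       # softmax for one-hot labels

head = Model(features, out)
head.compile(loss='categorical_crossentropy',
             optimizer=Adagrad(learning_rate=1e-3),  # larger starting LR
             metrics=['accuracy'])

probs = head.predict(np.zeros((1, 128)))
# Each softmax row sums to 1, so the model now outputs a class distribution.
```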

Answer 1 (score: 1)

It could be many things, but this kind of erratic behaviour is often caused by the data itself.

Just from looking at the code, it seems you are not normalizing the test data before calling model.predict and model.evaluate the same way you did for the training and validation data.

I once had a similar problem, and that turned out to be the cause. As a quick test, you could simply rescale the test data and see if it helps.
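A quick sketch of that check (the shapes here are illustrative, not the question's real 96×96×96 volumes): whatever scaling the training data went through must be mirrored exactly on the test set before evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in volumes with raw intensities in [0, 255].
x_train = rng.uniform(0, 255, size=(4, 16, 16, 16, 1))
x_test = rng.uniform(0, 255, size=(2, 16, 16, 16, 1))

x_train = x_train / 255.0   # scaling applied during training...
x_test = x_test / 255.0     # ...must be applied identically at evaluation time

# After mirrored scaling, both splits live in the same [0, 1] range.
```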