I am building a CNN to classify CT scans: positive for COVID-19-induced pneumonia, negative for healthy CTs. I trained the model for 50 epochs. From epoch 1 through epoch 10 the training accuracy climbed steadily, peaking at 99% at epoch 10. A few epochs later, however, it dropped sharply to 44%, which is terrible for a binary-classification CNN. Here is my model:
# Imports (standalone Keras 2.x, matching the rest of the code)
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau

# Loading in the dataset
traindata = ImageDataGenerator(rescale=1/255)
trainingdata = traindata.flow_from_directory(
    directory="Covid-19CT/TrainingData",
    target_size=(128,128),
    batch_size=16,
    class_mode="binary")
testdata = ImageDataGenerator(rescale=1/255)
testingdata = testdata.flow_from_directory(
    directory="Covid-19CT/TestingData",
    target_size=(128,128),
    batch_size=16,
    class_mode="binary")
# Initialize the model with Sequential and add the layers, input, and output (following the VGG16 architecture)
model = Sequential()
model.add(Conv2D(input_shape=(128,128,3),filters=64,kernel_size=(2,2),padding="same", activation="relu"))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding="same", activation ="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Flatten())
model.add(Dense(units=1000, activation="relu"))
model.add(Dense(units=1, activation="sigmoid"))
# Compile the model
model_optimizer = Adam(lr=0.001)
model.compile(optimizer=model_optimizer, loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
# Add the callbacks
checkpoint = ModelCheckpoint(filepath="Covid-19.hdf5", monitor='accuracy', verbose=1, save_best_only=True, save_weights_only=False, mode='auto')
early = EarlyStopping(monitor='accuracy', min_delta=0, patience=50, verbose=1, mode='auto')
reduceLR = ReduceLROnPlateau(monitor='val_loss', factor=.5, patience=2, min_delta=0.01, mode="auto")
# Note: reduceLR is defined above but is not passed to fit_generator below
fit = model.fit_generator(steps_per_epoch=25, generator=trainingdata, validation_data=testingdata, validation_steps=10, epochs=50, callbacks=[checkpoint, early])
Here is the training accuracy:
Epoch 1/50
24/25 [===========================>..] - ETA: 12s - loss: 0.7703 - accuracy: 0.5495
Epoch 00001: accuracy improved from -inf to 0.54500, saving model to Covid-19.hdf5
25/25 [==============================] - 360s 14s/step - loss: 0.7646 - accuracy: 0.5450 - val_loss: 0.6984 - val_accuracy: 0.4313
Epoch 2/50
24/25 [===========================>..] - ETA: 13s - loss: 0.8016 - accuracy: 0.7240
Epoch 00002: accuracy improved from 0.54500 to 0.73250, saving model to Covid-19.hdf5
25/25 [==============================] - 374s 15s/step - loss: 0.7868 - accuracy: 0.7325 - val_loss: 4.4926 - val_accuracy: 0.6375
Epoch 3/50
24/25 [===========================>..] - ETA: 11s - loss: 0.3555 - accuracy: 0.8960
Epoch 00003: accuracy improved from 0.73250 to 0.89003, saving model to Covid-19.hdf5
25/25 [==============================] - 344s 14s/step - loss: 0.3941 - accuracy: 0.8900 - val_loss: 3.2895 - val_accuracy: 0.7000
Epoch 4/50
24/25 [===========================>..] - ETA: 13s - loss: 0.2441 - accuracy: 0.9297
Epoch 00004: accuracy improved from 0.89003 to 0.93000, saving model to Covid-19.hdf5
25/25 [==============================] - 385s 15s/step - loss: 0.2387 - accuracy: 0.9300 - val_loss: 1.2085 - val_accuracy: 0.6687
Epoch 5/50
24/25 [===========================>..] - ETA: 13s - loss: 0.1788 - accuracy: 0.9714
Epoch 00005: accuracy improved from 0.93000 to 0.97250, saving model to Covid-19.hdf5
25/25 [==============================] - 381s 15s/step - loss: 0.1755 - accuracy: 0.9725 - val_loss: 2.5818 - val_accuracy: 0.7125
Epoch 6/50
24/25 [===========================>..] - ETA: 12s - loss: 0.0642 - accuracy: 0.9844
Epoch 00006: accuracy improved from 0.97250 to 0.98000, saving model to Covid-19.hdf5
25/25 [==============================] - 363s 15s/step - loss: 0.0670 - accuracy: 0.9800 - val_loss: 4.4083 - val_accuracy: 0.7125
Epoch 7/50
24/25 [===========================>..] - ETA: 12s - loss: 0.0947 - accuracy: 0.9479
Epoch 00007: accuracy did not improve from 0.98000
25/25 [==============================] - 362s 14s/step - loss: 0.0937 - accuracy: 0.9500 - val_loss: 4.2777 - val_accuracy: 0.7000
Epoch 8/50
24/25 [===========================>..] - ETA: 13s - loss: 0.1298 - accuracy: 0.9505
Epoch 00008: accuracy did not improve from 0.98000
25/25 [==============================] - 375s 15s/step - loss: 0.1301 - accuracy: 0.9475 - val_loss: 1.5817 - val_accuracy: 0.4688
Epoch 9/50
24/25 [===========================>..] - ETA: 13s - loss: 0.0506 - accuracy: 0.9740
Epoch 00009: accuracy did not improve from 0.98000
25/25 [==============================] - 378s 15s/step - loss: 0.0486 - accuracy: 0.9750 - val_loss: 4.3898 - val_accuracy: 0.7125
Epoch 10/50
24/25 [===========================>..] - ETA: 12s - loss: 0.0263 - accuracy: 0.9922
Epoch 00010: accuracy improved from 0.98000 to 0.99250, saving model to Covid-19.hdf5
25/25 [==============================] - 368s 15s/step - loss: 0.0252 - accuracy: 0.9925 - val_loss: 4.3956 - val_accuracy: 0.6875
Epoch 11/50
24/25 [===========================>..] - ETA: 12s - loss: 0.1428 - accuracy: 0.9714
Epoch 00011: accuracy did not improve from 0.99250
25/25 [==============================] - 346s 14s/step - loss: 0.1378 - accuracy: 0.9725 - val_loss: 2.3141 - val_accuracy: 0.5188
Epoch 12/50
24/25 [===========================>..] - ETA: 11s - loss: 0.2058 - accuracy: 0.9479
Epoch 00012: accuracy did not improve from 0.99250
25/25 [==============================] - 343s 14s/step - loss: 0.2006 - accuracy: 0.9500 - val_loss: 2.2401 - val_accuracy: 0.6750
Epoch 13/50
24/25 [===========================>..] - ETA: 12s - loss: 0.0434 - accuracy: 0.9818
Epoch 00013: accuracy did not improve from 0.99250
25/25 [==============================] - 363s 15s/step - loss: 0.0417 - accuracy: 0.9825 - val_loss: 4.3546 - val_accuracy: 0.7000
Epoch 14/50
24/25 [===========================>..] - ETA: 12s - loss: 0.0242 - accuracy: 0.9974
Epoch 00014: accuracy improved from 0.99250 to 0.99750, saving model to Covid-19.hdf5
25/25 [==============================] - 361s 14s/step - loss: 0.0256 - accuracy: 0.9975 - val_loss: 4.4083 - val_accuracy: 0.7125
Epoch 15/50
24/25 [===========================>..] - ETA: 12s - loss: 0.0298 - accuracy: 0.9922
Epoch 00015: accuracy did not improve from 0.99750
25/25 [==============================] - 367s 15s/step - loss: 0.0286 - accuracy: 0.9925 - val_loss: 3.9429 - val_accuracy: 0.7125
Epoch 16/50
24/25 [===========================>..] - ETA: 11s - loss: 0.0045 - accuracy: 0.9974
Epoch 00016: accuracy did not improve from 0.99750
25/25 [==============================] - 338s 14s/step - loss: 0.0043 - accuracy: 0.9975 - val_loss: 4.4335 - val_accuracy: 0.7063
Epoch 17/50
24/25 [===========================>..] - ETA: 11s - loss: 0.2831 - accuracy: 0.9479
Epoch 00017: accuracy did not improve from 0.99750
25/25 [==============================] - 336s 13s/step - loss: 0.2750 - accuracy: 0.9500 - val_loss: 2.4855 - val_accuracy: 0.6625
Epoch 18/50
24/25 [===========================>..] - ETA: 14s - loss: 1.4282 - accuracy: 0.9036
Epoch 00018: accuracy did not improve from 0.99750
25/25 [==============================] - 400s 16s/step - loss: 1.6394 - accuracy: 0.8900 - val_loss: 6.6125 - val_accuracy: 0.5688
Epoch 19/50
24/25 [===========================>..] - ETA: 12s - loss: 8.0488 - accuracy: 0.4693
Epoch 00019: accuracy did not improve from 0.99750
25/25 [==============================] - 349s 14s/step - loss: 7.9984 - accuracy: 0.4731 - val_loss: 6.6125 - val_accuracy: 0.5688
Epoch 20/50
24/25 [===========================>..] - ETA: 11s - loss: 7.6267 - accuracy: 0.5026
Epoch 00020: accuracy did not improve from 0.99750
25/25 [==============================] - 342s 14s/step - loss: 7.5900 - accuracy: 0.5050 - val_loss: 6.6125 - val_accuracy: 0.5688
Epoch 21/50
24/25 [===========================>..] - ETA: 12s - loss: 8.2656 - accuracy: 0.4609
I think I will eventually replace early stopping with dropout layers, but I cannot understand why the accuracy drops. Even if the model is overfitting at 99% accuracy, why would it suddenly collapse?
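For reference, a minimal sketch of what swapping in dropout could look like on the dense head of the model above; the 0.5 rate is an illustrative assumption, not something prescribed in the question:
from keras.layers import Dropout

# Dropout randomly zeroes a fraction of the Dense layer's activations during
# training, which discourages the classifier head from memorizing the data.
model.add(Flatten())
model.add(Dense(units=1000, activation="relu"))
model.add(Dropout(0.5))  # illustrative rate; would need tuning
model.add(Dense(units=1, activation="sigmoid"))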
Answer (score: 1)
You can train with early stopping or a reduced number of epochs as suggested if you like, but I notice you are monitoring 'accuracy' in your callbacks. It is usually better to monitor validation loss and save the model with the lowest validation loss, because validation loss indicates how well your model generalizes to unseen images.

FYI, I also notice you are saving the entire model. That works, but it slows training down considerably. Try saving only the weights in the checkpoint callback; it is much faster, especially for a large model. Then, once training completes, run model.load_weights before predicting, and at that point you can save the whole model with model.save. (A sketch of this setup appears after the helper function below.)

None of this addresses why your training loss starts to blow up after the tenth epoch. I don't know why. Point me to the dataset in a comment and I will see if I can reproduce the problem.

A quick check: how many training samples and how many test samples do you have? You want to go through the validation set exactly once per epoch, so set validation_batch_size such that the number of validation samples divided by validation_batch_size is an integer, and use that integer as validation_steps. Below is a piece of code that, given a directory path and the maximum batch size your memory capacity allows (b_max), determines the batch size and steps for you. It iterates through the directory (for example test_dir), adds up the file counts of all the subdirectories into length, and then works out the batch size and steps.
import os

def get_bs(dir, b_max):
    # dir is the directory containing the samples; b_max is the maximum batch
    # size to allow based on your memory capacity. You only want to go through
    # the test or validation set once per epoch, so this function determines
    # the needed batch size and steps per epoch.
    length = 0
    dir_list = os.listdir(dir)
    for d in dir_list:
        d_path = os.path.join(dir, d)
        length = length + len(os.listdir(d_path))
    # largest batch size that divides length evenly and does not exceed b_max
    batch_size = sorted([int(length/n) for n in range(1, length + 1)
                         if length % n == 0 and length/n <= b_max], reverse=True)[0]
    return batch_size, int(length/batch_size), length
If you run this on the test directory with b_max set to 80, you should get batch_size = 79, steps = 14, and length = 1106.
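To make the checkpointing advice above concrete, here is a minimal sketch that combines it with get_bs. The weights filename is a hypothetical choice, b_max = 80 follows the example above, and the directory paths and other settings are carried over from the question:
# Monitor validation loss and keep only the best weights; writing weights alone
# is much faster than serializing the full model on every improvement.
checkpoint = ModelCheckpoint(filepath="Covid-19_weights.hdf5",  # hypothetical filename
                             monitor='val_loss', verbose=1, save_best_only=True,
                             save_weights_only=True, mode='auto')

# Size the validation generator so one epoch covers the test set exactly once.
val_batch_size, val_steps, val_length = get_bs("Covid-19CT/TestingData", 80)
testingdata = testdata.flow_from_directory(
    directory="Covid-19CT/TestingData",
    target_size=(128,128),
    batch_size=val_batch_size,
    class_mode="binary")

fit = model.fit_generator(generator=trainingdata, steps_per_epoch=25,
                          validation_data=testingdata, validation_steps=val_steps,
                          epochs=50, callbacks=[checkpoint, early])

# After training, restore the best weights and save the full model once.
model.load_weights("Covid-19_weights.hdf5")
model.save("Covid-19.hdf5")

With 1106 test images this gives a validation batch size of 79 and 14 validation steps, so each epoch evaluates the whole test set exactly once.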