# Classification head on top of the pre-trained base model
# (imports added for completeness)
from keras.layers import Dense, Dropout, GlobalAveragePooling2D
from keras.models import Model

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.7)(x)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
Data split:
from google.colab import drive
drive.mount('/content/drive')

from keras.preprocessing.image import ImageDataGenerator

img_width, img_height = 160, 160
train_data_dir = "/content/drive/My Drive/Skin_cancer_all/skin_cancer"
train_datagen = ImageDataGenerator(rescale=1./255,
                                   validation_split=0.2)  # set validation split
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=8,
    subset='training')  # set as training data

validation_generator = train_datagen.flow_from_directory(
    train_data_dir,  # same directory as training data
    target_size=(img_width, img_height),
    batch_size=8,
    subset='validation')  # set as validation data
I used fit_generator:
nb_epochs = 4
nb_train_samples = 8015
nb_validation_samples = 2000

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# this is where training starts
history = model.fit_generator(train_generator,
                              steps_per_epoch=nb_train_samples,
                              epochs=nb_epochs,
                              validation_data=validation_generator,
                              validation_steps=nb_validation_samples)
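For reference, Keras's `steps_per_epoch` and `validation_steps` count *batches* drawn from the generator per epoch, not individual images, so they are conventionally derived from the sample count and the batch size. A minimal sketch using the numbers above (this only illustrates the convention, not what the code as written does; in Keras, `train_generator.samples` reports the same count):

import math

nb_train_samples = 8015       # images in the training subset
nb_validation_samples = 2000  # images in the validation subset
batch_size = 8                # matches flow_from_directory above

# one full pass over the data = ceil(samples / batch_size) batches
steps_per_epoch = math.ceil(nb_train_samples / batch_size)        # 1002
validation_steps = math.ceil(nb_validation_samples / batch_size)  # 250

print(steps_per_epoch, validation_steps)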
I started with 4 epochs:
Epoch 1/4
8015/8015 [==============================] - 4406s 550ms/step - loss: 1.2812 - acc: 0.6681 - val_loss: 2.0039 - val_acc: 0.6750
Epoch 2/4
8015/8015 [==============================] - 1872s 234ms/step - loss: 1.1651 - acc: 0.6749 - val_loss: 10.8641 - val_acc: 0.6720
Epoch 3/4
8015/8015 [==============================] - 1881s 235ms/step - loss: 1.0921 - acc: 0.6818 - val_loss: 87.3402 - val_acc: 0.6920
Epoch 4/4
8015/8015 [==============================] - 1898s 237ms/step - loss: 1.0538 - acc: 0.6947 - val_loss: 126.0995 - val_acc: 0.7140
This doesn't look great: my val_loss is far too high, and I don't think this is just ordinary overfitting. If the problem were my dataset split (2,000 validation vs. roughly 8,000 training images), that would be a separate issue. I have never seen val_loss values like this before. I previously trained another model (ResNet-50) on the same dataset, and its val_loss never got this high (the largest value I saw after a few epochs was about 5).

I have seen overfitting show up in val_acc before, but here val_acc is still improving and the training loss keeps decreasing.

I just want an explanation of this behavior! And what would it mean if everything were fine but val_loss and val_acc stayed fixed throughout the epochs?
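To make the trend concrete, the logged values can be checked numerically. This is just a sketch with the per-epoch loss numbers copied from the training log above: training loss decreases monotonically while val_loss grows by orders of magnitude, which is a divergence pattern rather than noise:

# loss values copied verbatim from the training log above
loss     = [1.2812, 1.1651, 1.0921, 1.0538]
val_loss = [2.0039, 10.8641, 87.3402, 126.0995]

# training loss strictly decreasing epoch over epoch?
train_improving = all(a > b for a, b in zip(loss, loss[1:]))
# validation loss strictly increasing epoch over epoch?
val_diverging = all(a < b for a, b in zip(val_loss, val_loss[1:]))

print(train_improving, val_diverging)  # True True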