GitHub link to the code - https://github.com/abhijit1247/Resnet50_trial1.git
I am trying to use transfer learning for satellite image classification on the DeepSAT-6 dataset.
Dataset link - https://www.kaggle.com/crawford/deepsat-sat6
My base model is ResNet50. I am trying to follow the training strategy from https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html: first train the top layers separately on the output of the fully frozen convolutional base, then attach the top layers to the pre-trained weights, and only then start unfreezing convolutional blocks.
Top model -
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Activation

# Classifier head trained on the 2048-d pooled ResNet50 features
model = Sequential([
    Dense(1024, input_dim=2048),
    BatchNormalization(),
    Activation('relu'),
    Dense(256),
    BatchNormalization(),
    Activation('relu'),
    Dense(6, activation='softmax'),
])
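The first stage described above (training the top model on features from the fully frozen base) can be sketched roughly as follows. This is my assumption of the setup, not code from the repo: random arrays stand in for the DeepSAT-6 data, `weights=None` keeps the sketch light where the real run would load `weights='imagenet'`, and `pooling='avg'` is assumed because it yields the 2048-d vectors that `input_dim=2048` expects.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Activation, BatchNormalization, Dense
from tensorflow.keras.models import Sequential

# Frozen convolutional base; pooling='avg' gives one 2048-d vector per image.
base = ResNet50(weights=None, include_top=False, pooling='avg',
                input_shape=(64, 64, 3))
base.trainable = False

# The same top model as above.
top = Sequential([
    Dense(1024, input_dim=2048),
    BatchNormalization(),
    Activation('relu'),
    Dense(256),
    BatchNormalization(),
    Activation('relu'),
    Dense(6, activation='softmax'),
])
top.compile(optimizer='adam', loss='categorical_crossentropy',
            metrics=['accuracy'])

# Extract bottleneck features once, then fit only the top model on them.
x = np.random.rand(8, 64, 64, 3).astype('float32')  # placeholder images
y = np.eye(6)[np.random.randint(0, 6, 8)]           # placeholder one-hot labels
features = base.predict(x, verbose=0)               # shape (8, 2048)
top.fit(features, y, epochs=1, verbose=0)
```

Precomputing the bottleneck features this way means each image passes through the expensive convolutional base only once per dataset, not once per epoch.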
After training for 50 epochs, training accuracy reached 95.09% and validation accuracy reached 93.71%.
I then attached this top model to the convolutional base and unfroze the bottom-most convolutional block -
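The wiring for this fine-tuning stage would look roughly like the sketch below. Again this is my assumption of the setup, not the repo's code: the `conv5` prefix follows tf.keras's ResNet50 layer naming for the last convolutional block, `weights=None` stands in for `weights='imagenet'`, and the small SGD learning rate is the usual choice when fine-tuning pre-trained weights.

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Activation, BatchNormalization, Dense
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import SGD

base = ResNet50(weights=None, include_top=False, pooling='avg',
                input_shape=(64, 64, 3))

# The same top model as above, assumed to carry the stage-one weights.
top = Sequential([
    Dense(1024, input_dim=2048),
    BatchNormalization(),
    Activation('relu'),
    Dense(256),
    BatchNormalization(),
    Activation('relu'),
    Dense(6, activation='softmax'),
])

# Attach the top to the base to get one end-to-end model.
full = Model(inputs=base.input, outputs=top(base.output))

# Freeze the whole base, then unfreeze only the last convolutional block
# ('conv5_...' layers in tf.keras's ResNet50 naming).
for layer in base.layers:
    layer.trainable = layer.name.startswith('conv5')

# Recompile after changing trainability; a small learning rate is typical
# so the unfrozen block is only gently adjusted.
full.compile(optimizer=SGD(learning_rate=1e-4, momentum=0.9),
             loss='categorical_crossentropy', metrics=['accuracy'])
```

Note that trainability changes only take effect after `compile()` is called again on the combined model.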
Train on 275400 samples, validate on 48600 samples
Epoch 1/10
275400/275400 [==============================] - 649s 2ms/step - loss: 0.0962 - accuracy: 0.9656 - val_loss: 6.1452 - val_accuracy: 0.1554
Epoch 2/10
275400/275400 [==============================] - 652s 2ms/step - loss: 0.0835 - accuracy: 0.9700 - val_loss: 5.5609 - val_accuracy: 0.1554
Epoch 3/10
275400/275400 [==============================] - 665s 2ms/step - loss: 0.0745 - accuracy: 0.9734 - val_loss: 6.6450 - val_accuracy: 0.1554
Epoch 4/10
275400/275400 [==============================] - 663s 2ms/step - loss: 0.0680 - accuracy: 0.9758 - val_loss: 6.4879 - val_accuracy: 0.1554
Epoch 5/10
275400/275400 [==============================] - 678s 2ms/step - loss: 0.0634 - accuracy: 0.9775 - val_loss: 6.2436 - val_accuracy: 0.1554
Epoch 6/10
275400/275400 [==============================] - 651s 2ms/step - loss: 0.0589 - accuracy: 0.9789 - val_loss: 7.9822 - val_accuracy: 0.1554
Epoch 7/10
275400/275400 [==============================] - 662s 2ms/step - loss: 0.0555 - accuracy: 0.9803 - val_loss: 9.0204 - val_accuracy: 0.1554
Epoch 8/10
275400/275400 [==============================] - 701s 3ms/step - loss: 0.0521 - accuracy: 0.9812 - val_loss: 8.3389 - val_accuracy: 0.1554
Epoch 9/10
275400/275400 [==============================] - 669s 2ms/step - loss: 0.0502 - accuracy: 0.9824 - val_loss: 8.9311 - val_accuracy: 0.1554
So why is the validation loss behaving so strangely? Why does it not decrease over the epochs?