Validation (and training) accuracy too low and validation loss increasing with VGG16 transfer learning

Date: 2020-06-20 12:01:49

Tags: python keras transfer-learning

I am trying to classify images of 100 celebrities via transfer learning from a VGG16 that was pretrained on cifar10 (10 classes).

x_train has 2000 images (20 images of each of 100 celebrities) and x_test has 1000 images (10 images of each of 100 celebrities).

I have tried several runs with different settings, but the validation loss keeps increasing and the validation accuracy stays very low (training accuracy is low as well).

I can't figure out what is wrong... let me know if you need more information about the code :)

[val_loss graph 1]

[val_loss graph 2]

    print(len(x_train))=2000
    print(len(y_train))=2000
    print(len(x_test))=1000
    print(len(y_test))=1000
    
    print(x_train.shape)=(2000, 32, 32, 3)
    print(y_train.shape)=(2000, 100)
    print(x_test.shape)=(1000, 32, 32, 3)
    print(y_test.shape)=(1000, 100)

    # all type is numpy.ndarray
    
    # imports (assuming tensorflow.keras)
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.models import Sequential
    from tensorflow.keras import layers, optimizers
    from tensorflow.keras.callbacks import EarlyStopping
    from sklearn.model_selection import train_test_split

    # loading pretrained VGG16-cifar10
    prevgg = VGG16(weights='/content/drive/My Drive/vgg16cifar10.h5',
                   include_top=True,
                   input_shape=(32, 32, 3),
                   classes=10)
    
    # try to drop the old 10-class softmax
    prevgg.layers.pop()
    
    # rebuild the network with every pretrained layer frozen
    newvgg = Sequential()
    
    for layer in prevgg.layers:
      layer.trainable = False
      newvgg.add(layer)
    
    # new 100-class classification head
    newvgg.add(layers.Dense(100, activation='softmax'))
    
    newvgg.compile(loss='categorical_crossentropy',
                   optimizer=optimizers.Adam(lr=0.01),
                   metrics=['acc'])
    
    es = EarlyStopping(monitor='val_loss', patience=500)
    X_train, x_val, Y_train, y_val = train_test_split(x_train, y_train, test_size=0.1, random_state=1)

    history = newvgg.fit(X_train,
                         Y_train,
                         batch_size=128,
                         epochs=100,
                         callbacks=[es],
                         validation_data=(x_val, y_val))
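One thing worth noting for readers: in recent versions of Keras, `model.layers.pop()` does not actually rewire the model graph, so the old 10-class softmax may still be in the forward pass. A common alternative is to branch off the layer just before the old output with the functional API and attach a fresh head, typically with a smaller learning rate for fine-tuning. Below is a minimal sketch of that pattern; it assumes tensorflow.keras and builds the network with `weights=None` purely for illustration, whereas the original loads the custom cifar10 weights from `vgg16cifar10.h5`:

```python
from tensorflow.keras import Model, layers, optimizers
from tensorflow.keras.applications import VGG16

# Stand-in for the pretrained network (the real code would pass the
# path to the cifar10 weights file instead of weights=None).
base = VGG16(weights=None, include_top=True,
             input_shape=(32, 32, 3), classes=10)

# Freeze all pretrained layers so only the new head is trained.
for layer in base.layers:
    layer.trainable = False

# Take the output of the layer *before* the old 10-class softmax
# and attach a new 100-class softmax head.
features = base.layers[-2].output
outputs = layers.Dense(100, activation='softmax')(features)
newvgg = Model(inputs=base.input, outputs=outputs)

newvgg.compile(loss='categorical_crossentropy',
               optimizer=optimizers.Adam(learning_rate=1e-4),
               metrics=['acc'])
```

With this construction the old softmax layer is genuinely out of the forward pass, and only the kernel and bias of the new `Dense(100)` layer are trainable.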

0 Answers:

There are no answers yet.