Stagnant validation accuracy during VGG network training on 10,000 images

Time: 2018-02-05 09:44:46

Tags: python-3.x tensorflow machine-learning deep-learning keras

I have 10,000 images, 5,000 diseased medical images and 5,000 healthy ones. I used VGG16 and modified the last layers as shown below:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 256)               6422784   
_________________________________________________________________
fc2 (Dense)                  (None, 128)               32896     
_________________________________________________________________
output (Dense)               (None, 2)                 258       
=================================================================
Total params: 21,170,626
Trainable params: 6,455,938
Non-trainable params: 14,714,688

My code is as follows:

import numpy as np
import os
import time
from vgg16 import VGG16
from keras.preprocessing import image
from imagenet_utils import preprocess_input, decode_predictions
from keras.layers import Dense, Activation, Flatten
from keras.layers import merge, Input
from keras.models import Model
from keras.utils import np_utils
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn

# Loading the training data
PATH = '/mount'
# Define data path
data_path = PATH 
data_dir_list = os.listdir(data_path)

img_data_list=[]
y = 0  # running count of loaded images
for dataset in data_dir_list:
    img_list=os.listdir(data_path+'/'+ dataset)
    print ('Loaded the images of dataset-'+'{}\n'.format(dataset))
    for img in img_list:
        img_path = data_path + '/'+ dataset + '/'+ img 
        img = image.load_img(img_path, target_size=(224, 224))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        x = x/255

        y=y+1
        print('Input image shape:', x.shape)
        print(y)
        img_data_list.append(x)
from keras.optimizers import SGD
sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)

img_data = np.array(img_data_list)
#img_data = img_data.astype('float32')
print (img_data.shape)
img_data=np.rollaxis(img_data,1,0)
print (img_data.shape)
img_data=img_data[0]
print (img_data.shape)

# Define the number of classes
num_classes = 2
num_of_samples = img_data.shape[0]
labels = np.ones((num_of_samples,),dtype='int64')

labels[0:5000]=0  # first 5000 images (first folder) are class 0
labels[5000:]=1   # remaining 5000 images are class 1

names = ['YES','NO']

# convert class labels to one-hot encoding
Y = np_utils.to_categorical(labels, num_classes)

#Shuffle the dataset
x,y = shuffle(img_data,Y, random_state=2)
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=2)

image_input = Input(shape=(224, 224, 3))

model = VGG16(input_tensor=image_input, include_top=True,weights='imagenet')

model.summary()

last_layer = model.get_layer('block5_pool').output
x= Flatten(name='flatten')(last_layer)
x = Dense(256, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
out = Dense(num_classes, activation='softmax', name='output')(x)
custom_vgg_model2 = Model(image_input, out)
custom_vgg_model2.summary()

# freeze all the layers except the dense layers
for layer in custom_vgg_model2.layers[:-3]:
    layer.trainable = False

custom_vgg_model2.summary()

custom_vgg_model2.compile(loss='categorical_crossentropy',optimizer=sgd,metrics=['accuracy'])

t=time.time()
#   t = now()
hist = custom_vgg_model2.fit(X_train, y_train, batch_size=128, epochs=50, verbose=1, validation_data=(X_test, y_test))
print('Training time: %s' % (time.time() - t))
(loss, accuracy) = custom_vgg_model2.evaluate(X_test, y_test, batch_size=10, verbose=1)

print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss,accuracy * 100))
custom_vgg_model2.save("vgg_10000.h5")

As for the results, I am posting the first five and the last few epochs:

Epoch 1/50
8000/8000 [==============================] - 154s - loss: 0.6960 - acc: 0.5354 - val_loss: 0.6777 - val_acc: 0.5745
Epoch 2/50
8000/8000 [==============================] - 134s - loss: 0.6684 - acc: 0.5899 - val_loss: 0.6866 - val_acc: 0.5490
Epoch 3/50
8000/8000 [==============================] - 134s - loss: 0.6608 - acc: 0.6040 - val_loss: 0.6625 - val_acc: 0.5925
Epoch 4/50
8000/8000 [==============================] - 134s - loss: 0.6518 - acc: 0.6115 - val_loss: 0.6668 - val_acc: 0.5810
Epoch 5/50
8000/8000 [==============================] - 134s - loss: 0.6440 - acc: 0.6280 - val_loss: 0.6990 - val_acc: 0.5580

The last few:

Epoch 25/50
8000/8000 [==============================] - 134s - loss: 0.5944 - acc: 0.6720 - val_loss: 0.6271 - val_acc: 0.6485
Epoch 26/50
8000/8000 [==============================] - 134s - loss: 0.5989 - acc: 0.6699 - val_loss: 0.6483 - val_acc: 0.6135
Epoch 27/50
8000/8000 [==============================] - 134s - loss: 0.5950 - acc: 0.6789 - val_loss: 0.7130 - val_acc: 0.5785
Epoch 28/50
8000/8000 [==============================] - 134s - loss: 0.5853 - acc: 0.6838 - val_loss: 0.6263 - val_acc: 0.6395

The results are not great. I have tried tweaks such as using the Adam optimizer and 128 and 128 nodes in the last two dense layers, but the results are still not convincing. Any help is welcome.

1 Answer:

Answer 0 (score: 1):

You could try the following:

  • Perform a stratified train_test_split:

    train_test_split(x, y, stratify=y, test_size=0.2, random_state=2)
    
  • Look at your data and check whether there are any outlier images; a quick visual pass over a few random samples per class helps (see the sketch below).
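
    A minimal sketch of such a spot check, assuming matplotlib is available and reusing data_path and data_dir_list from the loading code in the question:

    import os
    import numpy as np
    import matplotlib.pyplot as plt
    from keras.preprocessing import image

    # Show a few randomly chosen raw images per class folder so that
    # corrupt, mislabeled, or otherwise odd files stand out visually.
    for dataset in data_dir_list:
        img_list = os.listdir(data_path + '/' + dataset)
        sample = np.random.choice(img_list, size=4, replace=False)
        fig, axes = plt.subplots(1, 4, figsize=(12, 3))
        fig.suptitle(dataset)
        for ax, name in zip(axes, sample):
            img = image.load_img(data_path + '/' + dataset + '/' + name,
                                 target_size=(224, 224))
            ax.imshow(img)
            ax.axis('off')
        plt.show()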

  • Use the Adam optimizer: from keras.optimizers import Adam instead of SGD, for example:
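
    A sketch of recompiling with Adam (the 1e-4 learning rate is an assumed starting point for fine-tuning, not part of the original answer):

    from keras.optimizers import Adam

    # Adam adapts per-parameter step sizes, which often converges faster
    # than plain SGD when fine-tuning a pretrained network.
    adam = Adam(lr=1e-4)
    custom_vgg_model2.compile(loss='categorical_crossentropy',
                              optimizer=adam, metrics=['accuracy'])
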
  • Try other seeds where applicable, i.e. instead of random_state=2, use something else:

    X_train, X_test, y_train, y_test = train_test_split(
                                   x, y, test_size=0.2, random_state=382938)
    
  • Try include_top=False:

    model = VGG16(input_tensor=image_input, include_top=False,weights='imagenet')
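
    With include_top=False the network ends at block5_pool, so the custom head from the question attaches directly to model.output and the convolutional base can be frozen in one loop; a sketch, reusing the question's layer sizes:

    for layer in model.layers:
        layer.trainable = False                  # freeze the convolutional base

    x = Flatten(name='flatten')(model.output)    # (None, 7, 7, 512) -> (None, 25088)
    x = Dense(256, activation='relu', name='fc1')(x)
    x = Dense(128, activation='relu', name='fc2')(x)
    out = Dense(num_classes, activation='softmax', name='output')(x)
    custom_vgg_model2 = Model(image_input, out)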
    
  • Use (train, validation, test) sets, or (cross-validation, holdout) sets, to get more reliable performance metrics; a sketch follows below.
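
    A sketch of the (train, validation, test) variant, assuming the x and y arrays built in the question; since y is one-hot encoded, the split stratifies on the integer labels recovered with argmax (the 0.25 inner split makes the validation set 20% of all data):

    from sklearn.model_selection import train_test_split

    y_int = y.argmax(axis=1)                     # one-hot -> integer labels
    X_tmp, X_test, y_tmp, y_test = train_test_split(
        x, y, stratify=y_int, test_size=0.2, random_state=42)
    X_train, X_val, y_train, y_val = train_test_split(
        X_tmp, y_tmp, stratify=y_tmp.argmax(axis=1),
        test_size=0.25, random_state=42)

    # Tune against the validation set; touch the test set only once at the end.
    hist = custom_vgg_model2.fit(X_train, y_train, batch_size=128, epochs=50,
                                 validation_data=(X_val, y_val))
    loss, accuracy = custom_vgg_model2.evaluate(X_test, y_test)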