Very low validation accuracy but very high training accuracy

Asked: 2020-10-03 16:05:51

Tags: python tensorflow machine-learning keras neural-network

I am training a deep learning model, but its accuracy is very low. I used L2 regularization to stop the overfitting and get higher accuracy, but it did not solve the problem. What is the cause of such low accuracy, and how can I fix it?

The training accuracy is nearly perfect (>90%), while the validation accuracy is very low (<51%), as shown below:

Epoch 1/15
2601/2601 - 38s - loss: 1.6510 - accuracy: 0.5125 - val_loss: 1.6108 - val_accuracy: 0.4706
Epoch 2/15
2601/2601 - 38s - loss: 1.1733 - accuracy: 0.7009 - val_loss: 1.5660 - val_accuracy: 0.4971
Epoch 3/15
2601/2601 - 38s - loss: 0.9169 - accuracy: 0.8147 - val_loss: 1.6223 - val_accuracy: 0.4948
Epoch 4/15
2601/2601 - 38s - loss: 0.7820 - accuracy: 0.8551 - val_loss: 1.7773 - val_accuracy: 0.4683
Epoch 5/15
2601/2601 - 38s - loss: 0.6539 - accuracy: 0.8989 - val_loss: 1.7968 - val_accuracy: 0.4937
Epoch 6/15
2601/2601 - 38s - loss: 0.5691 - accuracy: 0.9204 - val_loss: 1.8743 - val_accuracy: 0.4844
Epoch 7/15
2601/2601 - 38s - loss: 0.5090 - accuracy: 0.9327 - val_loss: 1.9348 - val_accuracy: 0.5029
Epoch 8/15
2601/2601 - 40s - loss: 0.4465 - accuracy: 0.9500 - val_loss: 1.9566 - val_accuracy: 0.4787
Epoch 9/15
2601/2601 - 38s - loss: 0.3931 - accuracy: 0.9596 - val_loss: 2.0824 - val_accuracy: 0.4764
Epoch 10/15
2601/2601 - 41s - loss: 0.3786 - accuracy: 0.9596 - val_loss: 2.1185 - val_accuracy: 0.4925
Epoch 11/15
2601/2601 - 38s - loss: 0.3471 - accuracy: 0.9604 - val_loss: 2.1972 - val_accuracy: 0.4879
Epoch 12/15
2601/2601 - 38s - loss: 0.3169 - accuracy: 0.9669 - val_loss: 2.1091 - val_accuracy: 0.4948
Epoch 13/15
2601/2601 - 38s - loss: 0.3018 - accuracy: 0.9685 - val_loss: 2.2073 - val_accuracy: 0.5006
Epoch 14/15
2601/2601 - 38s - loss: 0.2629 - accuracy: 0.9746 - val_loss: 2.2086 - val_accuracy: 0.4971
Epoch 15/15
2601/2601 - 38s - loss: 0.2700 - accuracy: 0.9650 - val_loss: 2.2178 - val_accuracy: 0.4879

I tried increasing the number of epochs, but that only raises the training accuracy and lowers the validation accuracy further.

Any suggestions on how to fix this?

My code:

from random import shuffle
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Conv2D, Dense, Flatten, Dropout, BatchNormalization
from keras import regularizers

def createModel():
    input_shape = (11, 3840, 1)
    model = Sequential()
    # C1
    model.add(Conv2D(16, (5, 5), strides=(2, 2), padding='same', activation='relu', input_shape=input_shape))
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), padding='same'))
    model.add(BatchNormalization())
    # C2
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), padding='same'))
    model.add(BatchNormalization())
    # C3
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), padding='same'))
    model.add(BatchNormalization())
    model.add(Dense(64, input_dim=64, kernel_regularizer=regularizers.l2(0.01)))

    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))

    opt_adam = keras.optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy'])
    return model

def getFilesPathWithoutSeizure(indexSeizure, indexPat):
    filesPath = []
    print(indexSeizure)
    print(indexPat)
    for i in range(0, nSeizure):
        if (i == indexSeizure):
            filesPath.extend(interictalSpectograms[i])
            filesPath.extend(preictalSpectograms[i])
    shuffle(filesPath)
    return filesPath

def generate_arrays_for_training(indexPat, paths, start=0, end=100):
    while True:
        from_ = int(len(paths) / 100 * start)
        to_ = int(len(paths) / 100 * end)
        for i in range(from_, to_):
            f = paths[i]
            x = np.load(PathSpectogramFolder + f)
            x = np.expand_dims(np.expand_dims(x, axis=0), axis=0)
            x = x.transpose(0, 2, 3, 1)
            if ('P' in f):
                y = np.repeat([[0, 1]], x.shape[0], axis=0)
            else:
                y = np.repeat([[1, 0]], x.shape[0], axis=0)
            yield (x, y)

filesPath = getFilesPathWithoutSeizure(i, indexPat)
history = model.fit_generator(
    generate_arrays_for_training(indexPat, filesPath, end=75),  # takes the first 75%
    validation_data=generate_arrays_for_training(indexPat, filesPath, start=75),  # takes the last 25%
    steps_per_epoch=int(len(filesPath) - int(len(filesPath) / 100 * 25)),
    validation_steps=int(len(filesPath) - int(len(filesPath) / 100 * 75)),
    verbose=2, class_weight={0: 1, 1: 1},
    epochs=15, max_queue_size=2, shuffle=True)

2 Answers:

Answer 0 (score: 1)

Your model is overfitting and is not generalizing properly. If your training set is completely different from your validation set (you split 75% and 25%, but that 75% may look nothing like the 25%), your model will have a hard time generalizing.

Shuffle your data before splitting it into training and validation. That should improve your results.
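A minimal sketch of the "shuffle first, then split" idea, assuming `files_path` is the full list of spectrogram file names (the function and parameter names here are hypothetical, not from the question's code):

```python
import random

def split_train_val(files_path, val_fraction=0.25, seed=42):
    """Shuffle the file list once, then cut off the validation share."""
    files = list(files_path)
    random.Random(seed).shuffle(files)   # shuffle BEFORE splitting
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]  # (train, validation)
```

This way both splits are drawn from the same shuffled pool, instead of the validation set being a fixed (and possibly very different) tail of the data.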

Answer 1 (score: 1)

You seem to have implemented shuffling in the function getFilesPathWithoutSeizure(), though you could verify that the shuffling actually works by printing out the file names multiple times.

In filesPath=getFilesPathWithoutSeizure(i, indexPat) — is i being updated?

According to the code if(i==indexSeizure): in the method getFilesPathWithoutSeizure, files are only collected when indexSeizure equals the counter (the i of the for loop), i.e. the files of just one seizure are returned.

If you are not changing the i argument passed in the function call, it could mean that only one seizure's files end up in the filePath variable, and your entire training runs on only one slice of the input data — out of the 75% of the 3467 files.
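If the intent of getFilesPathWithoutSeizure is to *exclude* the held-out seizure (as its name suggests), the condition would be the inverse of the posted one. A hypothetical sketch — the list arguments here stand in for the question's global interictalSpectograms/preictalSpectograms, and this is one possible reading of the intent, not a confirmed fix:

```python
from random import shuffle

def get_files_without_seizure(index_seizure, interictal, preictal):
    """Collect files from every seizure EXCEPT the one at index_seizure."""
    files = []
    for i in range(len(interictal)):
        if i != index_seizure:  # inverted condition: skip the held-out seizure
            files.extend(interictal[i])
            files.extend(preictal[i])
    shuffle(files)
    return files
```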

-

Once you have confirmed that the shuffling works and that your function call puts all the data into the filePath variable, if the problem still persists, then try the following:

Data augmentation can help address overfitting by increasing the diversity of your dataset through random but realistic transformations, such as image rotation, shearing, horizontal and vertical flips, zooming, and shifting.
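Note that spectrograms are not natural photographs, so only transformations that preserve their physical meaning tend to be safe — e.g. shifts along the time axis or mild noise, rather than rotations or vertical flips. A minimal NumPy sketch under that assumption (the function name and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_spectrogram(spec, max_shift=100, noise_std=0.01):
    """Return a randomly time-shifted, lightly noised copy of `spec`
    (an array whose last axis is time, e.g. shape (11, 3840))."""
    shift = int(rng.integers(-max_shift, max_shift + 1))
    out = np.roll(spec, shift, axis=-1)                      # circular shift in time
    out = out + rng.normal(0.0, noise_std, size=out.shape)   # mild Gaussian noise
    return out
```

Applied on the fly inside the training generator, this would show the network a slightly different version of each spectrogram every epoch.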

But more importantly, you need to look at your data manually and understand how similar your training data really is.

Another option is to get more, and more varied, data for training.