Neural network validation accuracy sometimes doesn't change

Asked: 2019-08-08 08:07:02

Tags: python machine-learning keras neural-network deep-learning

I am using a neural network for a binary classification problem, but I have run into some trouble. Sometimes when I train the model, the validation accuracy does not change at all, while other times it works just fine. My dataset has 1200 samples with 28 features, and there is a class imbalance (200 samples in class a, 1000 samples in class b). All of my features are normalized to the range 0 to 1. As I said, the problem does not happen on every run, but I would like to know why it happens and how to fix it.

I have tried changing the optimizer and the activation function, but that did not help. I also noticed that when I increase the number of neurons in the network the problem happens less often, but it is not fixed. I also tried increasing the number of epochs, but the problem still occurs sometimes.

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(28, input_dim=28, kernel_initializer='normal', activation='sigmoid'))
model.add(Dense(200, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(300, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(300, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(150, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.4))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

history = model.fit(X_train, y_train,
                    epochs=34,
                    batch_size=32,
                    validation_data=(X_val, y_val),
                    verbose=1)
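Since the problem only appears on some runs, one thing worth checking is how much of the run-to-run variation comes from random weight initialization and data shuffling; pinning the seeds makes failing runs reproducible while debugging. A minimal sketch, assuming the TensorFlow 1.x backend implied by this 2019-era log (TF 2.x uses tf.random.set_seed instead):

import random
import numpy as np
import tensorflow as tf

# Fix all relevant seeds so repeated runs start from identical weights
# and see the data in the same order (TF 1.x graph-level seed API):
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)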

Here is what I sometimes get when training the model:

Epoch 1/34
788/788 [==============================] - 1s 2ms/step - loss: 1.5705 - acc: 0.6865 - val_loss: 0.6346 - val_acc: 0.7783
Epoch 2/34
788/788 [==============================] - 0s 211us/step - loss: 1.0262 - acc: 0.6231 - val_loss: 0.5310 - val_acc: 0.7783
Epoch 3/34
788/788 [==============================] - 0s 194us/step - loss: 1.7575 - acc: 0.7221 - val_loss: 0.5431 - val_acc: 0.7783
Epoch 4/34
788/788 [==============================] - 0s 218us/step - loss: 0.9113 - acc: 0.5774 - val_loss: 0.5685 - val_acc: 0.7783
Epoch 5/34
788/788 [==============================] - 0s 199us/step - loss: 1.0987 - acc: 0.6688 - val_loss: 0.6435 - val_acc: 0.7783
Epoch 6/34
788/788 [==============================] - 0s 201us/step - loss: 0.9777 - acc: 0.5343 - val_loss: 0.5643 - val_acc: 0.7783
Epoch 7/34
788/788 [==============================] - 0s 204us/step - loss: 1.0603 - acc: 0.5914 - val_loss: 0.6266 - val_acc: 0.7783
Epoch 8/34
788/788 [==============================] - 0s 197us/step - loss: 0.7580 - acc: 0.5939 - val_loss: 0.6615 - val_acc: 0.7783
Epoch 9/34
788/788 [==============================] - 0s 206us/step - loss: 0.8950 - acc: 0.6650 - val_loss: 0.5291 - val_acc: 0.7783
Epoch 10/34
788/788 [==============================] - 0s 230us/step - loss: 0.8114 - acc: 0.6701 - val_loss: 0.5428 - val_acc: 0.7783
Epoch 11/34
788/788 [==============================] - 0s 281us/step - loss: 0.7235 - acc: 0.6624 - val_loss: 0.5275 - val_acc: 0.7783
Epoch 12/34
788/788 [==============================] - 0s 264us/step - loss: 0.7237 - acc: 0.6485 - val_loss: 0.5473 - val_acc: 0.7783
Epoch 13/34
788/788 [==============================] - 0s 213us/step - loss: 0.6902 - acc: 0.7056 - val_loss: 0.5265 - val_acc: 0.7783
Epoch 14/34
788/788 [==============================] - 0s 217us/step - loss: 0.6726 - acc: 0.7145 - val_loss: 0.5285 - val_acc: 0.7783
Epoch 15/34
788/788 [==============================] - 0s 197us/step - loss: 0.6656 - acc: 0.7132 - val_loss: 0.5354 - val_acc: 0.7783
Epoch 16/34
788/788 [==============================] - 0s 216us/step - loss: 0.6083 - acc: 0.7259 - val_loss: 0.5262 - val_acc: 0.7783
Epoch 17/34
788/788 [==============================] - 0s 218us/step - loss: 0.6188 - acc: 0.7310 - val_loss: 0.5271 - val_acc: 0.7783
Epoch 18/34
788/788 [==============================] - 0s 210us/step - loss: 0.6642 - acc: 0.6142 - val_loss: 0.5676 - val_acc: 0.7783
Epoch 19/34
788/788 [==============================] - 0s 200us/step - loss: 0.6017 - acc: 0.7221 - val_loss: 0.5256 - val_acc: 0.7783
Epoch 20/34
788/788 [==============================] - 0s 209us/step - loss: 0.6188 - acc: 0.7157 - val_loss: 0.8090 - val_acc: 0.2217
Epoch 21/34
788/788 [==============================] - 0s 201us/step - loss: 1.1724 - acc: 0.4061 - val_loss: 0.5448 - val_acc: 0.7783
Epoch 22/34
788/788 [==============================] - 0s 205us/step - loss: 0.5724 - acc: 0.7424 - val_loss: 0.5293 - val_acc: 0.7783
Epoch 23/34
788/788 [==============================] - 0s 234us/step - loss: 0.5829 - acc: 0.7538 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 24/34
788/788 [==============================] - 0s 209us/step - loss: 0.5815 - acc: 0.7525 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 25/34
788/788 [==============================] - 0s 220us/step - loss: 0.5688 - acc: 0.7576 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 26/34
788/788 [==============================] - 0s 210us/step - loss: 0.5715 - acc: 0.7525 - val_loss: 0.5273 - val_acc: 0.7783
Epoch 27/34
788/788 [==============================] - 0s 206us/step - loss: 0.5584 - acc: 0.7576 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 28/34
788/788 [==============================] - 0s 215us/step - loss: 0.5728 - acc: 0.7563 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 29/34
788/788 [==============================] - 0s 281us/step - loss: 0.5735 - acc: 0.7576 - val_loss: 0.5275 - val_acc: 0.7783
Epoch 30/34
788/788 [==============================] - 0s 272us/step - loss: 0.5773 - acc: 0.7614 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 31/34
788/788 [==============================] - 0s 225us/step - loss: 0.5847 - acc: 0.7525 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 32/34
788/788 [==============================] - 0s 239us/step - loss: 0.5739 - acc: 0.7551 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 33/34
788/788 [==============================] - 0s 216us/step - loss: 0.5632 - acc: 0.7525 - val_loss: 0.5269 - val_acc: 0.7783
Epoch 34/34
788/788 [==============================] - 0s 240us/step - loss: 0.5672 - acc: 0.7576 - val_loss: 0.5267 - val_acc: 0.7783

1 Answer:

Answer 0 (Score: 5)

Given the class imbalance you report, your model does not seem to be learning anything (the reported accuracy is consistent with simply predicting everything as the majority class; a quick check for this is sketched right after the list below). Nevertheless, there are issues with your code. For starters:

  1. Replace the activation functions of all layers except the output one with activation='relu'.
  2. Add a sigmoid activation to your last layer, activation='sigmoid'; as is, your network is a regression one (with a default linear output in the last layer), not a classification one.
  3. Remove all the kernel_initializer='normal' arguments from your layers, i.e. leave it to the default kernel_initializer='glorot_uniform', which achieves (much) better performance.
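
To confirm the majority-class diagnosis in practice (a minimal sketch; y_val is assumed to hold the 0/1 validation labels from your split):

import numpy as np

# Accuracy achieved by always predicting the majority class:
p = np.mean(y_val)        # fraction of positive labels in the validation set
print(max(p, 1 - p))      # if this matches the stuck val_acc (0.7783), the model is
                          # effectively predicting the majority class for every sample

# Cross-check against the model's actual predictions:
preds = (model.predict(X_val) > 0.5).astype(int)
print(np.unique(preds, return_counts=True))   # a single predicted class confirms it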

Additionally, it is not clear why you use a 28-unit input Dense layer. The number of units here has nothing to do with the input dimension; see Keras Sequential model input layer.

Dropout should not go into the network by default. Try first without it, and then add it if needed.

All in all, here is how your model should look for starters:

model = Sequential()
model.add(Dense(200, input_dim=28, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(300, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(300, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(150, activation='relu'))
# model.add(Dropout(0.4))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

As said above, uncomment/adjust the dropout layers according to your experimental results.
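
Beyond these fixes, and given the 200/1000 class imbalance mentioned in the question, it may also help to weight the minority class during training. This is not part of the fix above, just a common complement; a minimal sketch using Keras's class_weight argument, where the 1:5 weighting mirrors the stated imbalance and the assumption that label 1 is the minority class must be checked against your own encoding:

history = model.fit(X_train, y_train,
                    epochs=34,
                    batch_size=32,
                    validation_data=(X_val, y_val),
                    # assumed: label 1 is the minority class (200 of 1200 samples);
                    # swap the weights if your encoding is the other way around
                    class_weight={0: 1.0, 1: 5.0},
                    verbose=1)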