Validation accuracy keeps increasing with every epoch (dense neural network in Keras)

Asked: 2019-08-21 13:58:12

Tags: python machine-learning keras neural-network

I'm trying to write a simple dense neural network in Keras using sklearn's breast cancer dataset.

However, when I plot the training and validation accuracy and loss over the epochs, the validation accuracy keeps increasing and the validation loss keeps decreasing, when I would expect them to peak and then diverge from the training curves as the model starts to overfit.

Plots: https://imgur.com/a/Bch56BU

I have tried changing the number of layers and other variables, but the problem persists. I wondered whether this could happen if the training and validation data were somehow identical, but I can't find anything wrong with my code.
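
One way to rule that out would be a quick overlap check between the validation slice and the rest of the training data. A minimal sketch (x_val and partial_x_train refer to the split defined in the code below):

import numpy as np

# Sanity check (sketch): count validation rows that also occur in the training portion.
# x_val and partial_x_train come from the split further down.
overlap = sum(
    np.any(np.all(np.isclose(partial_x_train, row), axis=1))
    for row in x_val
)
print('validation rows also present in training data:', overlap)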

Does anyone know how to fix this? Thanks!

from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
from keras import models, layers#, regularizers
import matplotlib.pyplot as plt

# Load the dataset and convert the class names to binary labels (malignant = 1, benign = 0)
dataset = load_breast_cancer()
x = dataset['data']
y = dataset['target_names'].take(dataset['target'])
for index, item in enumerate(y):
    if (item == 'malignant'):
        y[index] = 1.
    else:
        y[index] = 0.
# Standardize the features and split into equal train/test halves
x = scale(x)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.5)

# Small dense network: two hidden layers of 16 units, sigmoid output for binary classification
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(30,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='mse',
              metrics=['accuracy'])

# Hold out the first 80 training samples as a validation set
x_val = x_train[:80]
partial_x_train = x_train[80:]
y_val = y_train[:80]
partial_y_train = y_train[80:]

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=50,
                    validation_data=(x_val, y_val))

history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']

epochs = range(1, len(history_dict['acc']) + 1)

plt.plot(epochs, loss_values, 'bo', label='Training loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

plt.clf()
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

plt.plot(epochs, acc_values, 'bo', label='Training acc')
plt.plot(epochs, val_acc_values, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Training output:

Train on 204 samples, validate on 80 samples
Epoch 1/20
204/204 [==============================] - 2s 8ms/step - loss: 0.1684 - acc: 0.7941 - val_loss: 0.1735 - val_acc: 0.7750
Epoch 2/20
204/204 [==============================] - 0s 39us/step - loss: 0.1387 - acc: 0.8529 - val_loss: 0.1571 - val_acc: 0.8000
Epoch 3/20
204/204 [==============================] - 0s 39us/step - loss: 0.1253 - acc: 0.8775 - val_loss: 0.1452 - val_acc: 0.8500
Epoch 4/20
204/204 [==============================] - 0s 39us/step - loss: 0.1156 - acc: 0.8775 - val_loss: 0.1354 - val_acc: 0.8625
Epoch 5/20
204/204 [==============================] - 0s 39us/step - loss: 0.1074 - acc: 0.8775 - val_loss: 0.1263 - val_acc: 0.8625
Epoch 6/20
204/204 [==============================] - 0s 39us/step - loss: 0.1003 - acc: 0.8824 - val_loss: 0.1165 - val_acc: 0.8625
Epoch 7/20
204/204 [==============================] - 0s 39us/step - loss: 0.0929 - acc: 0.8922 - val_loss: 0.1111 - val_acc: 0.8750
Epoch 8/20
204/204 [==============================] - 0s 39us/step - loss: 0.0883 - acc: 0.8971 - val_loss: 0.1045 - val_acc: 0.8875
Epoch 9/20
204/204 [==============================] - 0s 39us/step - loss: 0.0836 - acc: 0.9069 - val_loss: 0.0985 - val_acc: 0.8875
Epoch 10/20
204/204 [==============================] - 0s 39us/step - loss: 0.0782 - acc: 0.9216 - val_loss: 0.0935 - val_acc: 0.8875
Epoch 11/20
204/204 [==============================] - 0s 39us/step - loss: 0.0738 - acc: 0.9314 - val_loss: 0.0890 - val_acc: 0.8875
Epoch 12/20
204/204 [==============================] - 0s 39us/step - loss: 0.0698 - acc: 0.9363 - val_loss: 0.0847 - val_acc: 0.8875
Epoch 13/20
204/204 [==============================] - 0s 39us/step - loss: 0.0664 - acc: 0.9412 - val_loss: 0.0812 - val_acc: 0.8875
Epoch 14/20
204/204 [==============================] - 0s 44us/step - loss: 0.0632 - acc: 0.9363 - val_loss: 0.0776 - val_acc: 0.9000
Epoch 15/20
204/204 [==============================] - 0s 34us/step - loss: 0.0604 - acc: 0.9461 - val_loss: 0.0742 - val_acc: 0.9000
Epoch 16/20
204/204 [==============================] - 0s 39us/step - loss: 0.0578 - acc: 0.9461 - val_loss: 0.0702 - val_acc: 0.9000
Epoch 17/20
204/204 [==============================] - 0s 39us/step - loss: 0.0549 - acc: 0.9510 - val_loss: 0.0672 - val_acc: 0.9000
Epoch 18/20
204/204 [==============================] - 0s 39us/step - loss: 0.0525 - acc: 0.9510 - val_loss: 0.0642 - val_acc: 0.9000
Epoch 19/20
204/204 [==============================] - 0s 39us/step - loss: 0.0500 - acc: 0.9559 - val_loss: 0.0599 - val_acc: 0.9125
Epoch 20/20
204/204 [==============================] - 0s 39us/step - loss: 0.0471 - acc: 0.9559 - val_loss: 0.0571 - val_acc: 0.9250
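
Incidentally, the x_test / y_test half from train_test_split is never used above; if I wanted a final check on fully held-out data, I assume it would be something like this sketch:

# Evaluate on the untouched test half from train_test_split (sketch)
test_loss, test_acc = model.evaluate(x_test, y_test)
print('test loss:', test_loss, 'test acc:', test_acc)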

0 Answers:

There are no answers yet.