Neural network loss and accuracy stay constant during training

Time: 2019-12-07 05:08:19

Tags: python neural-network

I built a neural network with one hidden layer: the hidden layer uses a ReLU activation and the output layer uses softmax. Here is the code:

import numpy as np
import pandas as pd
from sklearn import metrics
from keras import layers
from keras.utils import np_utils
from keras.models import Sequential
from keras import optimizers

# Load the Wisconsin breast cancer data and split it into train/validation/test
data = pd.read_csv("/kaggle/input/breast-cancer-wisconsin-data/data.csv")
data = np.array(data)
train = data[0:400]
validation = data[400:500]
test = data[500:569]

# Column 1 is the diagnosis label ("B"/"M"); the features start at column 2
x_train = train[:, 2:-2]
y_train = train[:, 1]

# Encode "B" as 0 and "M" as 1, then one-hot encode the labels
y_train_digit = [0] * len(y_train)
for i in range(len(y_train)):
    if y_train[i] == "B":
        y_train_digit[i] = 0
    else:
        y_train_digit[i] = 1

y_train_digit = np.eye(2)[y_train_digit]

# Same label encoding for the validation split
x_val = validation[:, 2:-2]
y_val = validation[:, 1]
y_val_digit = [0] * len(y_val)

for i in range(len(y_val)):
    if y_val[i] == "B":
        y_val_digit[i] = 0
    else:
        y_val_digit[i] = 1

y_val_digit = np.eye(2)[y_val_digit]

print(np.shape(x_train))
print(y_val_digit)


# One ReLU hidden layer with 10 units, softmax output over the two classes
model = Sequential()
model.add(layers.Dense(10, activation="relu", input_shape=(29,)))
model.add(layers.Dense(2, activation="softmax"))
model.summary()

sgd = optimizers.SGD(lr=0.00001, decay=1e-6, momentum=0.9, nesterov=True)  
model.compile(loss='categorical_crossentropy',
              optimizer="sgd",
              metrics=['accuracy'])


model.fit(x_train, y_train_digit,
          batch_size=30,
          epochs=1000,
          verbose=1,
          validation_data=(x_val, y_val_digit))

But during training, the loss and accuracy barely change:

Epoch 81/1000
400/400 [==============================] - 0s 56us/step - loss: 0.6840 - accuracy: 0.5675 - val_loss: 0.6231 - val_accuracy: 0.7800
Epoch 82/1000
400/400 [==============================] - 0s 57us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6230 - val_accuracy: 0.7800
Epoch 83/1000
400/400 [==============================] - 0s 57us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6231 - val_accuracy: 0.7800
Epoch 84/1000
400/400 [==============================] - 0s 55us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6232 - val_accuracy: 0.7800
Epoch 85/1000
400/400 [==============================] - 0s 56us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6239 - val_accuracy: 0.7800
Epoch 86/1000
400/400 [==============================] - 0s 56us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6240 - val_accuracy: 0.7800
Epoch 87/1000
400/400 [==============================] - 0s 56us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6240 - val_accuracy: 0.7800
Epoch 88/1000
400/400 [==============================] - 0s 55us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6241 - val_accuracy: 0.7800

What is going wrong? Why isn't the network learning? Is it because of the loss function, or the optimizer? I also think the learning rate might be too small.
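
For reference, this is the kind of change I am wondering about: passing the configured SGD object to compile() instead of the string "sgd", and standardizing the inputs before training. This is only a minimal sketch of what I have in mind; the lr value, the use of sklearn's StandardScaler, and the *_scaled variable names are my own guesses, not something I have verified:

from keras import optimizers
from sklearn.preprocessing import StandardScaler

# Standardize the 29 features so they are on a comparable scale
# (hypothetical extra step; x_train / x_val are the arrays built above)
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train.astype("float32"))
x_val_scaled = scaler.transform(x_val.astype("float32"))

# Pass the SGD instance itself, not the string "sgd", so the chosen
# learning rate and momentum are actually used (lr=0.01 is just a guess)
sgd = optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="categorical_crossentropy",
              optimizer=sgd,
              metrics=["accuracy"])

model.fit(x_train_scaled, y_train_digit,
          batch_size=30,
          epochs=100,
          verbose=1,
          validation_data=(x_val_scaled, y_val_digit))

Would something along these lines be the right direction, or is the problem elsewhere?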

0 Answers:

No answers yet.