Constant validation accuracy

Date: 2019-08-05 15:42:24

Tags: python machine-learning keras neural-network deep-learning

I'm working on a classification problem for detecting bearing faults. I'm building a neural network with Keras, but while training the model I noticed that the validation accuracy is constant at 1. I don't know what this means or how to fix it.

The first time I trained the model, my validation accuracy came out higher than my training accuracy, and the test accuracy was 81%.

# imports assumed by this snippet (not shown in my original code)
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

model = Sequential()

model.add(Dense(11, activation='relu', input_shape=(11,)))
model.add(Dense(50, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(7, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# load the data (Read in the original wasn't defined; pd.read_csv assumed)
df = pd.read_csv('all.csv')
# labels and features
x = df.drop(columns=['Label'])
y = to_categorical(df.Label)
# splitting into training and testing data
from sklearn.model_selection import train_test_split
xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=0.2, random_state=0)

# training
history = model.fit(xTrain, yTrain, epochs=1000, validation_split=0.2, batch_size=28)
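
For reference, the 81% test accuracy is measured roughly like this (a sketch; the exact evaluate call isn't part of the snippet above):

# sketch: evaluate on the held-out test set (verbose=0 just silences the progress bar)
testLoss, testAcc = model.evaluate(xTest, yTest, verbose=0)
print('test accuracy:', testAcc)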

These are the results I got:

Train on 5519 samples, validate on 1380 samples
Epoch 1/1000
5519/5519 [==============================] - 1s 140us/step - loss: 1.2792 - acc: 0.5238 - val_loss: 1.1580 - val_acc: 0.5565
Epoch 2/1000
5519/5519 [==============================] - 0s 46us/step - loss: 1.1297 - acc: 0.5392 - val_loss: 1.0965 - val_acc: 0.5290
Epoch 3/1000
5519/5519 [==============================] - 0s 46us/step - loss: 1.0824 - acc: 0.5438 - val_loss: 1.0479 - val_acc: 0.5609
Epoch 4/1000
5519/5519 [==============================] - 0s 46us/step - loss: 1.0526 - acc: 0.5474 - val_loss: 1.0157 - val_acc: 0.5420
Epoch 5/1000
5519/5519 [==============================] - 0s 46us/step - loss: 1.0222 - acc: 0.5635 - val_loss: 0.9980 - val_acc: 0.5580
Epoch 6/1000
5519/5519 [==============================] - 0s 46us/step - loss: 0.9908 - acc: 0.5811 - val_loss: 0.9750 - val_acc: 0.5964
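
To see how acc and val_acc evolve relative to each other over the epochs, the history can be plotted; a minimal sketch (matplotlib assumed, metric keys as shown in the log above):

import matplotlib.pyplot as plt

plt.plot(history.history['acc'], label='train acc')    # training accuracy per epoch
plt.plot(history.history['val_acc'], label='val acc')  # validation accuracy per epoch
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()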

I don't know what it means that val_acc is higher than acc. Also, since my classes are imbalanced, I thought about using SMOTE to balance the data and improve accuracy. But that made my validation accuracy constant and equal to 1, while the test accuracy only improved by 1%.
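
To see how imbalanced the labels are, the counts can be checked before and after resampling; a quick sketch (df and yTrain as in the code below; yTrain is one-hot at that point, so the count goes through argmax):

import numpy as np

# class distribution in the raw csv
print(df['Label'].value_counts())
# class distribution after SMOTE (yTrain is one-hot here)
print(np.bincount(yTrain.argmax(axis=1)))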

# imports assumed by this snippet (not shown in my original code)
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

model = Sequential()

model.add(Dense(11, activation='relu', input_shape=(11,)))
model.add(Dense(50, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(7, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# load the data (Read in the original wasn't defined; pd.read_csv assumed)
df = pd.read_csv('all.csv')
# labels and features
x = df.drop(columns=['Label'])
y = to_categorical(df.Label)
# splitting into training and testing data
xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=0.2, random_state=0)
# SMOTE: oversample the minority class
# (fit_sample is the deprecated name of fit_resample in imbalanced-learn)
smote = SMOTE(sampling_strategy='minority')
xTrain, yTrain = smote.fit_resample(xTrain, yTrain)
# standardizing (fit the scaler on the training data only)
scaler = StandardScaler()
xTrain = scaler.fit_transform(xTrain)
xTest = scaler.transform(xTest)
# training
history = model.fit(xTrain, yTrain, epochs=1000, validation_split=0.2, batch_size=28)

This gave me:

Train on 8399 samples, validate on 2100 samples
Epoch 1/1000
8399/8399 [==============================] - 0s 47us/step - loss: 2.6588 - acc: 0.6801 - val_loss: 0.0255 - val_acc: 0.9919
Epoch 2/1000
8399/8399 [==============================] - 0s 45us/step - loss: 0.7997 - acc: 0.7691 - val_loss: 3.2350e-05 - val_acc: 1.0000
Epoch 3/1000
8399/8399 [==============================] - 0s 51us/step - loss: 0.6365 - acc: 0.7882 - val_loss: 3.2050e-04 - val_acc: 1.0000
Epoch 4/1000
8399/8399 [==============================] - 0s 47us/step - loss: 0.5785 - acc: 0.7963 - val_loss: 0.0011 - val_acc: 0.9995
Epoch 5/1000
8399/8399 [==============================] - 0s 44us/step - loss: 0.5603 - acc: 0.8018 - val_loss: 1.5631e-07 - val_acc: 1.0000
Epoch 6/1000
8399/8399 [==============================] - 0s 44us/step - loss: 0.5332 - acc: 0.8081 - val_loss: 2.3595e-06 - val_acc: 1.0000
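
One diagnostic that seems relevant here: Keras' validation_split takes the last 20% of the arrays as-is (before shuffling), and as far as I can tell imbalanced-learn appends the synthetic SMOTE samples at the end of the resampled arrays, so the slice that actually gets validated on can be inspected directly; a sketch:

import numpy as np

# the last 20% of xTrain/yTrain is what validation_split uses
nVal = int(0.2 * len(xTrain))
# which classes end up in that validation slice after resampling
print(np.bincount(yTrain[-nVal:].argmax(axis=1)))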

