How can I improve my neural network with dropout layers?

Date: 2020-05-09 21:57:53

Tags: python tensorflow keras neural-network dropout

I am working on a neural network that predicts heart disease. The data comes from Kaggle and has already been preprocessed. I have used various models such as logistic regression, random forest, and SVM, all of which produce solid results. I am now trying the same data with a neural network, to see whether the NN can outperform the other ML models (the dataset is small, which may explain poor results). My network code is below. The model produces 50% accuracy, which is clearly too low to be usable. Does anything stand out to you that would hurt the model's accuracy?

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow.keras.layers import Dense, Dropout
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import EarlyStopping

df = pd.read_csv(r"C:\Users\***\Desktop\heart.csv")

X = df[['age','sex','cp','trestbps','chol','fbs','restecg','thalach']].values
y = df['target'].values

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()

scaler.fit_transform(X_train)
scaler.transform(X_test)


nn = tf.keras.Sequential()

nn.add(Dense(30, activation='relu'))

nn.add(Dropout(0.2))

nn.add(Dense(15, activation='relu'))

nn.add(Dropout(0.2))


nn.add(Dense(1, activation='sigmoid'))


nn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])


early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=25)

nn.fit(X_train, y_train, epochs = 1000, validation_data=(X_test, y_test),
     callbacks=[early_stop])

model_loss = pd.DataFrame(nn.history.history)
model_loss.plot()

predictions = nn.predict_classes(X_test)

from sklearn.metrics import classification_report,confusion_matrix

print(classification_report(y_test,predictions))
print(confusion_matrix(y_test,predictions))

2 Answers:

Answer 0 (score: 1)

After running the model with EarlyStopping:

Epoch 324/1000
23/23 [==============================] - 0s 3ms/step - loss: 0.5051 - accuracy: 0.7364 - val_loss: 0.4402 - val_accuracy: 0.8182
Epoch 325/1000
23/23 [==============================] - 0s 3ms/step - loss: 0.4716 - accuracy: 0.7643 - val_loss: 0.4366 - val_accuracy: 0.7922
Epoch 00325: early stopping
WARNING:tensorflow:From <ipython-input-54-2ee8517852a8>:54: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01.
Instructions for updating:
Please use instead:
* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).
* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation).
              precision    recall  f1-score   support

           0       0.90      0.66      0.76       154
           1       0.73      0.93      0.82       154

    accuracy                           0.79       308
   macro avg       0.82      0.79      0.79       308
weighted avg       0.82      0.79      0.79       308

Even with such a simple MLP, you get reasonable accuracy and f1-scores.
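As the deprecation warning in the log above suggests, with a sigmoid output the class labels can be recovered by thresholding `model.predict` at 0.5. A minimal numpy-only sketch, where `probs` is a stand-in for the array `model.predict(X_test)` would return:

```python
import numpy as np

# Stand-in for model.predict(X_test): per-sample sigmoid probabilities.
probs = np.array([[0.1], [0.7], [0.4], [0.9]])

# Threshold at 0.5, as the deprecation notice recommends for binary classification.
predictions = (probs > 0.5).astype("int32").ravel()
print(predictions)  # [0 1 0 1]
```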

[image: plot of the training history]

I used the following dataset: https://www.kaggle.com/abdulhakimrony/heartcsv/data

  1. Train for all the epochs; the initial accuracy may be low, but the model converges quickly after a few epochs.

  2. Set random seeds for random, tensorflow, and numpy so you get reproducible results on every run.

  3. If the simple models already show good accuracy, the NN is likely to outperform them, but you must make sure the NN does not overfit.

  4. Check whether your data is imbalanced; if so, try using class_weights.

  5. You can try a tuner with cross-validation to get the best-performing model.
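A minimal sketch of points 2 and 4 above. The seed values and the toy label array `y` are illustrative stand-ins (in the real script `y` would be `df['target'].values`); the weights follow sklearn's "balanced" heuristic, n_samples / (n_classes * count_per_class):

```python
import random
import numpy as np

# Point 2: fix the seeds so runs are reproducible.
SEED = 42
random.seed(SEED)
np.random.seed(SEED)
# tf.random.set_seed(SEED)  # also seed TensorFlow before building the model

# Point 4: derive per-class weights from the label distribution.
y = np.array([0, 0, 0, 1])  # stand-in for df['target'].values
classes, counts = np.unique(y, return_counts=True)
weights = y.size / (classes.size * counts)  # "balanced" heuristic
class_weight = dict(zip(classes.tolist(), weights.tolist()))
print(class_weight)  # minority class 1 gets the larger weight

# The dict can then be passed to training:
# nn.fit(X_train, y_train, ..., class_weight=class_weight)
```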

Answer 1 (score: 1)

The scaler's output is being discarded; you need to save the scaled results:

X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

You will then get results more in line with what you expected.
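The reason the original calls had no effect is that `fit_transform` and `transform` return new arrays rather than modifying their input in place. A small sketch with toy data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

scaler = StandardScaler()
scaler.fit_transform(X_train)   # return value discarded: X_train is unchanged
assert X_train[0, 0] == 1.0

X_scaled = scaler.fit_transform(X_train)  # assign the result instead
# Each column of X_scaled now has mean 0 and unit variance.
print(X_scaled.mean(axis=0))
```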

              precision    recall  f1-score   support

           0       0.93      0.98      0.95       144
           1       0.98      0.93      0.96       164

    accuracy                           0.95       308
   macro avg       0.95      0.96      0.95       308
weighted avg       0.96      0.95      0.95       308