Conv2d TensorFlow wrong results - accuracy = 0.0000e+00

Date: 2021-02-17 04:11:59

Tags: python keras tensorflow2.0

I am building a classification model with TensorFlow and Keras. When I run the code below, the output does not seem to converge after each epoch: the loss steadily grows (becoming more and more negative) and the accuracy stays at 0.0000e+00. I am new to machine learning and not sure why this is happening.

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
import numpy as np

import time
import tensorflow as tf

from google.colab import drive

drive.mount('/content/drive')
import pandas as pd 
data = pd.read_csv("hmnist_28_28_RGB.csv") 
X = data.iloc[:, 0:-1]
y = data.iloc[:, -1]

X = X / 255.0
X = X.values.reshape(-1,28,28,3)
print(X.shape)

model = Sequential()
model.add(Conv2D(256, (3, 3), input_shape=X.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Conv2D(256, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors

model.add(Dense(64))

model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X, y, batch_size=32, epochs=10, validation_split=0.3)

Output

(378, 28, 28, 3)
Epoch 1/10
9/9 [==============================] - 4s 429ms/step - loss: -34.6735 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 2/10
9/9 [==============================] - 4s 400ms/step - loss: -1074.2162 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 3/10
9/9 [==============================] - 4s 399ms/step - loss: -7446.1872 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 4/10
9/9 [==============================] - 4s 396ms/step - loss: -30012.9553 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 5/10
9/9 [==============================] - 4s 406ms/step - loss: -89006.4180 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 6/10
9/9 [==============================] - 4s 400ms/step - loss: -221087.9078 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 7/10
9/9 [==============================] - 4s 399ms/step - loss: -480032.9313 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 8/10
9/9 [==============================] - 4s 403ms/step - loss: -956052.3375 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 9/10
9/9 [==============================] - 4s 396ms/step - loss: -1733128.9000 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 10/10
9/9 [==============================] - 4s 401ms/step - loss: -2953626.5750 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00

1 answer:

Answer 0 (score: 2)

You need to make a few changes to your model to get it working.

The dataset contains 7 different labels, so the last layer needs 7 output neurons.
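A quick way to confirm this yourself, assuming y is the pandas Series of integer labels loaded in your code, is to count its unique values:

import numpy as np

# unique label values in the dataset; with this CSV they should be 0..6
print(np.unique(y))
# number of distinct classes -> size of the output layer
print(np.unique(y).size)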

For your last layer you are currently using a sigmoid activation, which is not suitable for multi-class classification. Use a softmax activation instead.

As the loss function you are using loss='binary_crossentropy', which is only meant for binary classification. Since your labels are integers, you should use loss='sparse_categorical_crossentropy' instead. You can find more information here.
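For context only (not required for the fix below): if you one-hot encoded the labels instead of keeping them as integers, the matching loss would be categorical_crossentropy. A minimal sketch, assuming y holds integer labels 0-6:

from tensorflow.keras.utils import to_categorical

# one-hot encode the integer labels; shape becomes (num_samples, 7)
y_onehot = to_categorical(y, num_classes=7)

# with one-hot targets you would then compile with:
# model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])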

Make the following changes to the last few lines of your code:

model.add(Dense(7))
model.add(Activation('softmax'))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X, y, batch_size=32, epochs=10, validation_split=0.3)

You will get this training history:

(10015, 28, 28, 3)
Epoch 1/10
220/220 [==============================] - 89s 403ms/step - loss: 1.0345 - accuracy: 0.6193 - val_loss: 1.7980 - val_accuracy: 0.4353
Epoch 2/10
220/220 [==============================] - 88s 398ms/step - loss: 0.8282 - accuracy: 0.6851 - val_loss: 3.3646 - val_accuracy: 0.0676
Epoch 3/10
220/220 [==============================] - 88s 399ms/step - loss: 0.6944 - accuracy: 0.7502 - val_loss: 2.9686 - val_accuracy: 0.1228
Epoch 4/10
220/220 [==============================] - 87s 395ms/step - loss: 0.6630 - accuracy: 0.7611 - val_loss: 3.3777 - val_accuracy: 0.0646
Epoch 5/10
220/220 [==============================] - 87s 396ms/step - loss: 0.5976 - accuracy: 0.7812 - val_loss: 2.3929 - val_accuracy: 0.2532
Epoch 6/10
220/220 [==============================] - 87s 396ms/step - loss: 0.5577 - accuracy: 0.7935 - val_loss: 2.9879 - val_accuracy: 0.2592
Epoch 7/10
220/220 [==============================] - 88s 398ms/step - loss: 0.7644 - accuracy: 0.7215 - val_loss: 2.5258 - val_accuracy: 0.2852
Epoch 8/10
220/220 [==============================] - 87s 395ms/step - loss: 0.5629 - accuracy: 0.7879 - val_loss: 2.6053 - val_accuracy: 0.3055
Epoch 9/10
220/220 [==============================] - 89s 404ms/step - loss: 0.5380 - accuracy: 0.8008 - val_loss: 2.7401 - val_accuracy: 0.1694
Epoch 10/10
220/220 [==============================] - 92s 419ms/step - loss: 0.5296 - accuracy: 0.8065 - val_loss: 3.7208 - val_accuracy: 0.0529

The model still needs tuning to get better results, but overall it works.
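As one possible starting point for that tuning (my own suggestion, not something I have validated on this dataset), you could add a ReLU on the hidden Dense layer and some Dropout, since the gap between training and validation accuracy suggests overfitting:

model.add(Flatten())

model.add(Dense(64))
model.add(Activation('relu'))   # non-linearity on the hidden Dense layer
model.add(Dropout(0.5))         # regularization to reduce overfitting

model.add(Dense(7))
model.add(Activation('softmax'))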

I used this file for training.