TensorFlow loss: NaN; accuracy: 0.1

Asked: 2020-05-20 18:47:35

Tags: python tensorflow keras

The loss becomes NaN after ~40,000 images. I am using my own dataset. All images are similar (27x48; 1-bit). Example

There are 100,000 images for training and 40,000 for validation. I don't understand why it behaves this way.

Model creation code:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Activation, Conv2D, Dense,
                                     Dropout, Flatten, MaxPooling2D)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(48, 27, 1)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('sigmoid'))
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])

Training code:

import time

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator()
dirTrain = "/content/GeneratedI/train"
train_data = datagen.flow_from_directory(dirTrain, target_size=(48, 27), batch_size=15,
                                         class_mode="categorical", color_mode="grayscale")
dirVal = "/content/GeneratedI/val"
validation_data = datagen.flow_from_directory(dirVal, target_size=(48, 27), batch_size=15,
                                              class_mode="categorical", color_mode="grayscale")
print("Training the network...")
t_start = time.time()
history = model.fit_generator(train_data,
                              steps_per_epoch=100000 // 15,
                              epochs=15,
                              validation_data=validation_data,
                              validation_steps=40000 // 15)
print(time.time() - t_start)

Output:

Found 100000 images belonging to 10 classes.
Found 40000 images belonging to 10 classes.
Training the network...
Epoch 1/9
6666/6666 [==============================] - 176s 26ms/step - loss: 0.3099 - accuracy: 0.8985 - val_loss: 0.0268 - val_accuracy: 0.9906
Epoch 2/9
6666/6666 [==============================] - 171s 26ms/step - loss: 0.0470 - accuracy: 0.9851 - val_loss: 0.0150 - val_accuracy: 0.9958
Epoch 3/9
6666/6666 [==============================] - 170s 26ms/step - loss: 0.0336 - accuracy: 0.9900 - val_loss: 0.0112 - val_accuracy: 0.9968
Epoch 4/9
6666/6666 [==============================] - 171s 26ms/step - loss: 0.0283 - accuracy: 0.9918 - val_loss: 0.0104 - val_accuracy: 0.9971
Epoch 5/9
6666/6666 [==============================] - 173s 26ms/step - loss: 0.0269 - accuracy: 0.9928 - val_loss: 0.0055 - val_accuracy: 0.9988
Epoch 6/9
6666/6666 [==============================] - 170s 25ms/step - loss: 0.0266 - accuracy: 0.9938 - val_loss: 0.0035 - val_accuracy: 0.9992
Epoch 7/9
6666/6666 [==============================] - 171s 26ms/step - loss: nan - accuracy: 0.2285 - val_loss: nan - val_accuracy: 0.1000
Epoch 8/9
6666/6666 [==============================] - 175s 26ms/step - loss: nan - accuracy: 0.1000 - val_loss: nan - val_accuracy: 0.1000
Epoch 9/9
6666/6666 [==============================] - 171s 26ms/step - loss: nan - accuracy: 0.1000 - val_loss: nan - val_accuracy: 0.1000

P.S. I ran only 9 epochs so as not to waste time; this is just to show the error.

1 answer:

Answer 0 (score: 0)

Change the activation of the output layer to softmax and it works! With 10 mutually exclusive classes, categorical_crossentropy expects the outputs to form a probability distribution. Independent sigmoid outputs don't sum to 1, and once they saturate towards 0 the loss can become NaN.
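The failure mode can be reproduced without Keras. Below is a small NumPy sketch (my own illustration, not part of the original answer) using a simplified version of the Keras loss, which rescales the predicted vector to sum to 1 before taking the log: saturated sigmoids can make that sum exactly zero, producing NaN, while softmax always sums to 1.

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # Simplified Keras-style loss: rescale predictions to sum to 1,
    # clip away exact 0/1, then take the log.
    y_pred = y_pred / y_pred.sum(axis=-1, keepdims=True)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -(y_true * np.log(y_pred)).sum(axis=-1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Strongly negative logits, as can happen once training saturates.
logits = np.linspace(-100.0, -110.0, 10).astype(np.float32)
y_true = np.eye(10, dtype=np.float32)[0]

with np.errstate(over="ignore", invalid="ignore", divide="ignore"):
    loss_sigmoid = categorical_crossentropy(y_true, sigmoid(logits))
    loss_softmax = categorical_crossentropy(y_true, softmax(logits))

# Every sigmoid underflows to 0 in float32, so the rescaling step
# divides 0 by 0 and the loss is NaN.
print(loss_sigmoid)  # nan
# Softmax outputs always sum to 1, so the loss stays finite.
print(loss_softmax)
```

The same numbers explain the training log above: once the network became very confident after epoch 6, the sigmoid outputs saturated and the loss collapsed to NaN, after which the weights never recover.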