Keras loss increases every epoch

Time: 2019-04-01 22:03:40

Tags: tensorflow keras

I am using Keras for deep learning. I have 1860 samples across 3 classes. The loss keeps increasing during training. I have already removed dropout.

Model

from keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(128, (3, 3), input_shape=(480, 640, 3), use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(512, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(3, activation='softmax'))
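
Given the 480x640 input, a quick sanity check (a minimal sketch; it only relies on the model built above) is to print the layer output shapes and parameter counts before training:

# Shows each layer's output shape and parameter count, which makes it easy to
# see how large the flattened feature map feeding Dense(512) is.
model.summary()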

Generators and training

from keras import optimizers
from keras.callbacks import ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(),
              metrics=['acc'])

validation_dir = r'C:\Users\user\Desktop\validation_data'

train_data_generator = ImageDataGenerator(
    rescale=1. / 255,
    horizontal_flip=True)

validation_data_generator = ImageDataGenerator(rescale=1. / 255)

# train_dir is not defined in the post; assumed to be set earlier in the script
train_generator = train_data_generator.flow_from_directory(
    train_dir,
    target_size=(480, 640),
    batch_size=20,
    class_mode='categorical')

validation_generator = validation_data_generator.flow_from_directory(
    validation_dir,
    target_size=(480, 640),
    batch_size=30,
    class_mode='categorical')

# checkpoint_path is not defined in the post; assumed to be set earlier
checkpointer = ModelCheckpoint(
    filepath=checkpoint_path,
    verbose=1,
    save_best_only=True)

model.fit_generator(
    train_generator,
    steps_per_epoch=93,
    epochs=35,
    validation_data=validation_generator,
    validation_steps=6,
    callbacks=[checkpointer])
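
When the training loss rises steadily with plain SGD, one common first check is an explicitly smaller learning rate. The sketch below is only an assumption; the lr and momentum values are not from the original post:

from keras import optimizers

# Hypothetical recompile: SGD defaults to lr=0.01, so try a smaller learning
# rate with momentum and see whether the loss still increases.
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(lr=1e-3, momentum=0.9),
              metrics=['acc'])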

The loss keeps increasing and training takes a very long time. I am not sure whether my image width and height are too large.
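
If the 480x640 resolution is indeed the bottleneck, one option is to train on downscaled images. The 240x320 size below is an arbitrary assumption, and the first Conv2D's input_shape would have to be changed to match:

# Hypothetical change: halving each dimension cuts the per-image compute by
# roughly a factor of four. The model would need input_shape=(240, 320, 3).
train_generator = train_data_generator.flow_from_directory(
    train_dir,
    target_size=(240, 320),
    batch_size=20,
    class_mode='categorical')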

0 Answers:

No answers yet.