Training and validation loss become constant too early

Date: 2021-06-27 13:11:40

Tags: python tensorflow machine-learning keras deep-learning

n_inputs is defined as:

n_inputs = X.shape[1] 

Its value is 25.

My model:

# imports assumed from the tags above (tf.keras); not shown in the original post
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, Dropout, ReLU
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping

# define encoder
visible = Input(shape=(n_inputs,))
# encoder level 1

'''
e = Dense(400)(visible)
e = Dropout(0.05)(e)
e = ReLU()(e)

'''

# encoder level 2
e = Dense(300)(visible)
e = Dropout(0.05)(e)
e = ReLU()(e)

# encoder level 3
e = Dense(200)(visible)
e = Dropout(0.05)(e)
e = ReLU()(e)

# encoder level 4
e = Dense(100)(visible)
e = Dropout(0.05)(e)
e = ReLU()(e)

# encoder level 5
e = Dense(50)(visible)
e = Dropout(0.05)(e)
e = ReLU()(e)



# bottleneck
n_bottleneck = n_inputs
bottleneck = Dense(n_bottleneck)(e)


# define decoder, level 1
d = Dense(50)(bottleneck)
d = Dropout(0.05)(d)
d = ReLU()(d)

# define decoder, level 2
d = Dense(100)(bottleneck)
d = Dropout(0.05)(d)
d = ReLU()(d)

# define decoder, level 3
d = Dense(200)(bottleneck)
d = Dropout(0.05)(d)
d = ReLU()(d)

# define decoder, level 4
d = Dense(300)(bottleneck)
d = Dropout(0.05)(d)
d = ReLU()(d)

'''
# define decoder, level 5
d = Dense(400)(bottleneck)
d = Dropout(0.05)(d)
d = ReLU()(d)

'''


# output layer
output = Dense(n_inputs, activation='sigmoid')(d)
# define autoencoder model
model = Model(inputs=visible, outputs=output)
# compile autoencoder model
opt = keras.optimizers.Adam(lr=0.00001)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1,patience=10)
model.compile(optimizer=opt, loss='binary_crossentropy')
# plot the autoencoder
#plot_model(model, 'drive/MyDrive/autoencoder_no_compress.png', show_shapes=True)
# fit the autoencoder model to reconstruct input
history = model.fit(X_train_norm, X_train_norm, epochs=500, batch_size=64, verbose=2, validation_split=0.1,callbacks=[es])
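As an aside, the lr argument of Adam is a deprecated alias in recent tf.keras releases; the optimizer line above is equivalent to the following (same value, current parameter name):

opt = keras.optimizers.Adam(learning_rate=0.00001)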

My loss plot:

[image: training and validation loss curves]
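The curves above can be reproduced from the history object returned by model.fit; a minimal matplotlib sketch (my reconstruction, not code from the post) would be:

import matplotlib.pyplot as plt

# 'loss' and 'val_loss' are recorded per epoch because validation_split is set
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('binary cross-entropy')
plt.legend()
plt.show()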

The training and validation loss become constant very quickly and stop changing after about 20 epochs. Is my model too complex for the data, or so simple that it will not overfit even if I increase the number of epochs?

I have already normalized my data, tried various learning rates, and run 500 epochs with the callback.
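For illustration only (the preprocessing code is not shown in the post), a typical way to scale the inputs to [0, 1], which the sigmoid output layer and binary cross-entropy loss above assume, is scikit-learn's MinMaxScaler; X_train is an assumed name for the raw training matrix, and the result corresponds to the X_train_norm passed to model.fit:

from sklearn.preprocessing import MinMaxScaler

# map every feature column into the [0, 1] range expected by the sigmoid output
scaler = MinMaxScaler()
X_train_norm = scaler.fit_transform(X_train)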

0 Answers