Why is val_loss so low while the test root mean squared error is much higher?

Asked: 2020-05-03 14:09:25

Tags: python keras neural-network lstm query-performance

I am training an LSTM. The dataset contains 17568 rows: two months of monitoring values sampled every 5 minutes.

The model is:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras.callbacks import EarlyStopping

model = Sequential()
model.add(LSTM(300, activation='relu',
               input_shape=(X_train.shape[1], X_train.shape[2]),
               return_sequences=True))
model.add(Dropout(0.1))
model.add(LSTM(300, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X_train, Y_train, epochs=100, batch_size=70,
                    validation_data=(X_test, Y_test),
                    callbacks=[EarlyStopping(monitor='val_loss', patience=10, verbose=1)],
                    verbose=1, shuffle=False)
model.summary()
```

The code used to compute the RMSE is:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

train_predict = model.predict(X_train)
test_predict = model.predict(X_test)

# invert the scaling so predictions and targets are back on the original scale
train_predict = scaler.inverse_transform(train_predict)
Y_train = scaler.inverse_transform([Y_train])
test_predict = scaler.inverse_transform(test_predict)
Y_test = scaler.inverse_transform([Y_test])

print('Train Mean Absolute Error:', mean_absolute_error(Y_train[0], train_predict[:, 0]))
print('Train Root Mean Squared Error:', np.sqrt(mean_squared_error(Y_train[0], train_predict[:, 0])))
print('Test Mean Absolute Error:', mean_absolute_error(Y_test[0], test_predict[:, 0]))
print('Test Root Mean Squared Error:', np.sqrt(mean_squared_error(Y_test[0], test_predict[:, 0])))
```

Now my problem is that val_loss = 0.0017 and loss = 0.0019,

but the RMSE values are:

```
Train Mean Absolute Error: 10.814174578676965
Train Root Mean Squared Error: 13.792484521895835
Test Mean Absolute Error: 8.059164253166095
Test Root Mean Squared Error: 10.6127240648618
```
Please help me understand where I am going wrong. I have been trying to figure this out for the last three days without success.

1 answer:

Answer 0: (score: 0)

val_loss and loss are computed during training on the SCALED targets, whereas the MAE and RMSE are computed after INVERSE-scaling the targets back to their original units, so they reflect the real performance.
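A minimal sketch of why the two numbers differ, using synthetic data and an assumed min-max scaling to [0, 1] (as a `MinMaxScaler` would do): the RMSE on the original scale equals the square root of the scaled MSE multiplied by the data range, so a tiny scaled loss can correspond to a double-digit RMSE.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.uniform(0, 500, size=1000)          # targets on the original scale
y_pred = y_true + rng.normal(0, 12, size=1000)   # predictions with roughly 12-unit error

# min-max scale both to [0, 1] using the targets' range
lo, hi = y_true.min(), y_true.max()
y_true_s = (y_true - lo) / (hi - lo)
y_pred_s = (y_pred - lo) / (hi - lo)

mse_scaled = np.mean((y_true_s - y_pred_s) ** 2)       # what Keras' loss/val_loss sees: tiny
rmse_orig = np.sqrt(np.mean((y_true - y_pred) ** 2))   # RMSE after inverse scaling: double-digit

# the two are linked exactly by the data range
print(mse_scaled)
print(rmse_orig)
print(np.sqrt(mse_scaled) * (hi - lo))  # same value as rmse_orig
```

With a range of about 500, an RMSE of ~12 in the original units shrinks to a scaled MSE on the order of 1e-3, which matches the pattern in the question (loss ≈ 0.0019 vs. RMSE ≈ 13.8).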