Validation loss is inconsistent if I predict on the same data and calculate the loss afterwards

Time: 2019-04-16 22:48:59

Tags: tensorflow keras neural-network deep-learning loss

I have an LSTM model that predicts the weather. When I train it with model.fit, the reported validation MAPE is around 20%.

When I predict on the same data that was given to model.fit and then calculate the loss myself, I get around 60% MAPE. What might be causing this difference? I would have ignored it, but the difference is too large.

Here is my code in main:

# preparing the data and building the model first
regressor.fit(x_train, y_train, epochs=100, batch_size=32,
              validation_data=(x_test, y_test))
results = regressor.predict(x_test)
print(bm.mean_absolute_percentage_error(y_test, results))
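For a like-for-like comparison, it can also help to let Keras itself evaluate the same set. A minimal sketch, assuming the model was compiled with 'mape' in its metrics (an assumption, since the compile call is not shown above):

# Sketch: have Keras compute the metric on the same data.
# Assumes regressor.compile(..., metrics=['mape']) was used when building the model.
eval_results = regressor.evaluate(x_test, y_test, batch_size=32)
print(dict(zip(regressor.metrics_names, eval_results)))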

In bm:

# assuming: from tensorflow import Session (TF 1.x) and from keras import losses
def mean_absolute_percentage_error(real, est):
    """Calculates the mean absolute percentage error."""
    sess = Session()
    with sess.as_default():
        tensor = losses.mean_absolute_percentage_error(real, est)
        # eval() yields one MAPE value per sample; [-1] keeps only the last one
        return tensor.eval()[-1]
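Note that Keras's mean_absolute_percentage_error returns one loss value per sample, so tensor.eval()[-1] picks out the last sample's error rather than the mean over the whole set. For reference, a plain-NumPy sketch of the averaged quantity (my rewrite, not the question's code; mape_mean is a hypothetical name) that mirrors Keras's formula, including its clipping of the denominator:

import numpy as np

def mape_mean(real, est):
    """NumPy sketch of MAPE averaged over all samples, mirroring
    Keras's formula (denominator clipped away from zero)."""
    real = np.asarray(real, dtype=np.float64)
    est = np.asarray(est, dtype=np.float64)
    eps = 1e-7  # Keras clips |real| below its epsilon to avoid division by zero
    return 100.0 * np.mean(np.abs(real - est) / np.maximum(np.abs(real), eps))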

I used the same function that Keras uses to calculate MAPE. Even if I made a mistake when preparing the test data, both results should be consistently wrong, because both calculations take the same set as an argument.

0 Answers:

No answers yet.