What is the reason for getting a higher loss than val_loss?

Asked: 2020-05-01 17:32:31

Tags: python neural-network

I have coded a neural network.

The dataset looks like this:

        aqi            timestamp  junctionid       Dates  Time  Day
     0   94  2014-08-01 00:05:00           1  2014-08-01     5    0
     1   90  2014-08-01 00:10:00           1  2014-08-01    10    0
     2   85  2014-08-01 00:15:00           1  2014-08-01    15    0
     3   85  2014-08-01 00:20:00           1  2014-08-01    20    0
     4   84  2014-08-01 00:25:00           1  2014-08-01    25    0
    ..  ...                  ...         ...         ...   ...  ...
    95   95  2014-08-01 08:00:00           1  2014-08-01   480    0
    96   90  2014-08-01 08:05:00           1  2014-08-01   485    0
    97   91  2014-08-01 08:10:00           1  2014-08-01   490    0
    98   92  2014-08-01 08:15:00           1  2014-08-01   495    0
    99   96  2014-08-01 08:20:00           1  2014-08-01   500    0

Code:

    Y = dataset[['aqi']]
    X = dataset[['Day','Time','junctionid']]
    X_scale = X  # note: the input features are left unscaled here

    # normalize the target to the range (0, 0.99)
    from sklearn.preprocessing import MinMaxScaler
    scaler = MinMaxScaler(feature_range=(0, 0.99))
    Y = scaler.fit_transform(Y['aqi'].values.reshape(-1, 1))

    from sklearn.model_selection import train_test_split
    X_train, X_val_and_test, Y_train, Y_val_and_test = train_test_split(X_scale, Y, test_size=0.3)
    X_val, X_test, Y_val, Y_test = train_test_split(X_val_and_test, Y_val_and_test, test_size=0.5)
    print(X_train.shape, X_val.shape, X_test.shape, Y_train.shape, Y_val.shape, Y_test.shape)

    from keras.models import Sequential
    from keras.layers import Dense

    # two hidden ReLU layers; the sigmoid output matches the (0, 0.99)-scaled target
    model = Sequential([
        Dense(32, activation='relu', input_shape=(3,)),
        Dense(32, activation='relu'),
        Dense(1, activation='sigmoid'),
    ])

    # compile() returns None; the training history is returned by fit() below
    model.compile(loss='mean_squared_error', optimizer='adam')

    hist = model.fit(X_train, Y_train,
              batch_size=70, epochs=5,
              validation_data=(X_val, Y_val))
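
The per-epoch numbers quoted below are what fit prints; they can also be read back from the returned History object. A minimal sketch, assuming the hist variable from the code above:

    # Print the training and validation loss recorded for each epoch.
    for epoch, (loss, val_loss) in enumerate(
            zip(hist.history['loss'], hist.history['val_loss']), start=1):
        print(f"epoch {epoch}: loss={loss:.4f}, val_loss={val_loss:.4f}")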

What is the reason for loss: 0.0369 - val_loss: 0.0368?

1 Answer:

Answer 0 (score: 0)

The loss is the mean squared error over all of the data in the training dataset. The validation dataset is separate and is not used as part of training. Validation means testing the network on data that was not included in the training set, to make sure the network generalizes to all data rather than overfitting the training set.
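
A quick way to see this in practice is to plot both losses per epoch from the History object; a minimal sketch, assuming matplotlib is installed and hist is the History object returned by model.fit in the question:

    import matplotlib.pyplot as plt

    # Compare training and validation loss across epochs; a widening gap
    # (loss falling while val_loss rises) is the usual sign of overfitting.
    plt.plot(hist.history['loss'], label='loss')
    plt.plot(hist.history['val_loss'], label='val_loss')
    plt.xlabel('epoch')
    plt.ylabel('MSE')
    plt.legend()
    plt.show()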

It can happen that the network weights are, by chance, tuned slightly more accurately for the validation data. This is coincidental, and the difference is usually tiny, as it is in your example.
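
One way to check how close the two really are is to evaluate both sets with the same final weights after training; a minimal sketch reusing the variables from the question:

    # Evaluate training and validation sets with the same final weights,
    # so both numbers are computed under identical conditions.
    train_mse = model.evaluate(X_train, Y_train, verbose=0)
    val_mse = model.evaluate(X_val, Y_val, verbose=0)
    print(f"train MSE: {train_mse:.4f}, val MSE: {val_mse:.4f}")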