Why does accuracy not increase during training, while loss and val_loss decrease?

Date: 2021-01-15 12:12:57

Tags: python tensorflow machine-learning keras neural-network

Let me state upfront that I am completely new to neural networks; this is my first attempt at building one. The problem is forecasting one week of pollution from the previous month of data. The unstructured data, with 15 features, looks like this:

[image: Start data]

The value to predict is 'gas', over the 168 hours of the following week (the number of hours in a week). MinMaxScaler(feature_range=(0, 1)) is applied to the data, which is then split into training and test data. Since only one year of hourly measurements is available, the data is resampled into series of 672 hourly samples, each starting at midnight of a day of the year. So, from the roughly 8000 hourly measurements, about 600 series of 672 samples are obtained. 'date' is removed from the initial data, and train_x and train_y have the form:

[image: Shape of train_x and train_y]

train_x[0] 中,数据集的前 4 周有 672 小时读数,包括所有特征,包括 'gas'。 另一方面,在 train_y [0] 中,从月份在 train_x [0] 结束时开始的下一周有 168 小时读数。 Train_X[0] where column 0 is 'gas'Train_y[0] with only gas column for the next week after train_x[0]

Train X shape = (631, 672, 14)

Train Y shape = (631, 168, 1)
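
For reference, here is a minimal sketch of the kind of sliding-window split described above. The actual to_supervised helper is not shown in the question, so the window stride and the variable names here are assumptions:

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    def to_supervised(data, n_input=672, n_out=168, stride=24):
        # data: scaled hourly readings, shape (n_hours, 14), with 'gas' in column 0
        X, y = [], []
        # slide a 4-week input window across the year, one day (24 h) at a time
        for start in range(0, len(data) - n_input - n_out + 1, stride):
            X.append(data[start:start + n_input, :])                       # 672 h, all features
            y.append(data[start + n_input:start + n_input + n_out, 0:1])  # next 168 h of 'gas'
        return np.array(X), np.array(y)

    # scale everything to [0, 1] before windowing, as in the question
    scaler = MinMaxScaler(feature_range=(0, 1))
    # train = scaler.fit_transform(raw_values)  # raw_values: (n_hours, 14), 'date' dropped
    # train_x, train_y = to_supervised(train)   # -> (n, 672, 14) and (n, 168, 1)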

Having organized the data this way (please tell me if something is wrong), I built the following neural network:

    import numpy as np
    from tensorflow.keras import layers, optimizers
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.callbacks import (EarlyStopping, TensorBoard,
                                            ReduceLROnPlateau)

    train_x, train_y = to_supervised(train, n_input)
    train_x = train_x.astype(float)
    train_y = train_y.astype(float)
    # define parameters
    verbose, epochs, batch_size = 1, 200, 50
    n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
    # define model
    model = Sequential()
    opt = optimizers.RMSprop(learning_rate=1e-3)
    model.add(layers.GRU(14, activation='relu', input_shape=(n_timesteps, n_features),return_sequences=False, stateful=False))
    model.add(layers.Dense(1, activation='relu'))
    #model.add(layers.Dense(14, activation='linear'))
    model.add(layers.Dense(n_outputs, activation='sigmoid'))
    model.summary()
    model.compile(loss='mse', optimizer=opt, metrics=['accuracy'])

    train_y = np.concatenate(train_y).reshape(len(train_y), 168)

    callback_early_stopping = EarlyStopping(monitor='val_loss',
                                            patience=5, verbose=1)
    callback_tensorboard = TensorBoard(log_dir='./23_logs/',
                                       histogram_freq=0,
                                       write_graph=False)
    callback_reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                                           factor=0.1,
                                           min_lr=1e-4,
                                           patience=0,
                                           verbose=1)
    callbacks = [callback_early_stopping,
                 callback_tensorboard,
                 callback_reduce_lr]
    history = model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size,
                        verbose=verbose, shuffle=False,
                        validation_split=0.2, callbacks=callbacks)

When I fit the network, I get:

    Epoch 1/200
    11/11 [==============================] - 5s 305ms/step - loss: 0.1625 - accuracy: 0.0207 - val_loss: 0.1905 - val_accuracy: 0.0157
    Epoch 2/200
    11/11 [==============================] - 2s 179ms/step - loss: 0.1594 - accuracy: 0.0037 - val_loss: 0.1879 - val_accuracy: 0.0157
    Epoch 3/200
    11/11 [==============================] - 2s 169ms/step - loss: 0.1571 - accuracy: 0.0040 - val_loss: 0.1855 - val_accuracy: 0.0079
    Epoch 4/200
    11/11 [==============================] - 2s 165ms/step - loss: 0.1550 - accuracy: 0.0092 - val_loss: 0.1832 - val_accuracy: 0.0079
    Epoch 5/200
    11/11 [==============================] - 2s 162ms/step - loss: 0.1529 - accuracy: 0.0102 - val_loss: 0.1809 - val_accuracy: 0.0079
    Epoch 6/200
    11/11 [==============================] - 2s 160ms/step - loss: 0.1508 - accuracy: 0.0085 - val_loss: 0.1786 - val_accuracy: 0.0079
    Epoch 7/200
    11/11 [==============================] - 2s 160ms/step - loss: 0.1487 - accuracy: 0.0023 - val_loss: 0.1763 - val_accuracy: 0.0079
    Epoch 8/200
    11/11 [==============================] - 2s 158ms/step - loss: 0.1467 - accuracy: 0.0023 - val_loss: 0.1740 - val_accuracy: 0.0079
    Epoch 9/200
    11/11 [==============================] - 2s 159ms/step - loss: 0.1446 - accuracy: 0.0034 - val_loss: 0.1718 - val_accuracy: 0.0000e+00
    Epoch 10/200
    11/11 [==============================] - 2s 160ms/step - loss: 0.1426 - accuracy: 0.0034 - val_loss: 0.1695 - val_accuracy: 0.0000e+00
    Epoch 11/200
    11/11 [==============================] - 2s 162ms/step - loss: 0.1406 - accuracy: 0.0034 - val_loss: 0.1673 - val_accuracy: 0.0000e+00
    Epoch 12/200
    11/11 [==============================] - 2s 159ms/step - loss: 0.1387 - accuracy: 0.0034 - val_loss: 0.1651 - val_accuracy: 0.0000e+00
    Epoch 13/200
    11/11 [==============================] - 2s 159ms/step - loss: 0.1367 - accuracy: 0.0052 - val_loss: 0.1629 - val_accuracy: 0.0000e+00
    Epoch 14/200
    11/11 [==============================] - 2s 159ms/step - loss: 0.1348 - accuracy: 0.0052 - val_loss: 0.1608 - val_accuracy: 0.0000e+00
    Epoch 15/200
    11/11 [==============================] - 2s 161ms/step - loss: 0.1328 - accuracy: 0.0052 - val_loss: 0.1586 - val_accuracy: 0.0000e+00
    Epoch 16/200
    11/11 [==============================] - 2s 162ms/step - loss: 0.1309 - accuracy: 0.0052 - val_loss: 0.1565 - val_accuracy: 0.0000e+00
    Epoch 17/200
    11/11 [==============================] - 2s 171ms/step - loss: 0.1290 - accuracy: 0.0052 - val_loss: 0.1544 - val_accuracy: 0.0000e+00
    Epoch 18/200
    11/11 [==============================] - 2s 174ms/step - loss: 0.1271 - accuracy: 0.0052 - val_loss: 0.1523 - val_accuracy: 0.0000e+00
    Epoch 19/200
    11/11 [==============================] - 2s 161ms/step - loss: 0.1253 - accuracy: 0.0052 - val_loss: 0.1502 - val_accuracy: 0.0000e+00
    Epoch 20/200
    11/11 [==============================] - 2s 161ms/step - loss: 0.1234 - accuracy: 0.0052 - val_loss: 0.1482 - val_accuracy: 0.0000e+00
    Epoch 21/200
    11/11 [==============================] - 2s 159ms/step - loss: 0.1216 - accuracy: 0.0052 - val_loss: 0.1461 - val_accuracy: 0.0000e+00
    Epoch 22/200
    11/11 [==============================] - 2s 164ms/step - loss: 0.1198 - accuracy: 0.0052 - val_loss: 0.1441 - val_accuracy: 0.0000e+00
    Epoch 23/200
    11/11 [==============================] - 2s 164ms/step - loss: 0.1180 - accuracy: 0.0052 - val_loss: 0.1421 - val_accuracy: 0.0000e+00
    Epoch 24/200
    11/11 [==============================] - 2s 163ms/step - loss: 0.1162 - accuracy: 0.0052 - val_loss: 0.1401 - val_accuracy: 0.0000e+00
    Epoch 25/200
    11/11 [==============================] - 2s 167ms/step - loss: 0.1145 - accuracy: 0.0052 - val_loss: 0.1381 - val_accuracy: 0.0000e+00
    Epoch 26/200
    11/11 [==============================] - 2s 188ms/step - loss: 0.1127 - accuracy: 0.0052 - val_loss: 0.1361 - val_accuracy: 0.0000e+00
    Epoch 27/200
    11/11 [==============================] - 2s 169ms/step - loss: 0.1110 - accuracy: 0.0052 - val_loss: 0.1342 - val_accuracy: 0.0000e+00
    Epoch 28/200
    11/11 [==============================] - 2s 189ms/step - loss: 0.1093 - accuracy: 0.0052 - val_loss: 0.1323 - val_accuracy: 0.0000e+00
    Epoch 29/200
    11/11 [==============================] - 2s 183ms/step - loss: 0.1076 - accuracy: 0.0079 - val_loss: 0.1304 - val_accuracy: 0.0000e+00
    Epoch 30/200
    11/11 [==============================] - 2s 172ms/step - loss: 0.1059 - accuracy: 0.0079 - val_loss: 0.1285 - val_accuracy: 0.0000e+00
    Epoch 31/200
    11/11 [==============================] - 2s 164ms/step - loss: 0.1042 - accuracy: 0.0079 - val_loss: 0.1266 - val_accuracy: 0.0000e+00
    Epoch 32/200

The accuracy is always very low, and sometimes (as in this case) val_accuracy goes to 0 and never changes, while loss and val_loss do not converge well but do decrease. I realize I am certainly doing many things wrong, and I cannot figure out how to fix them. I have of course tried other hyperparameters, as well as other networks such as LSTM, without getting satisfactory results.
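
As a side note on the metric itself (an assumption about the cause, since only the code above is shown): with loss='mse', Keras resolves metrics=['accuracy'] to plain exact-match accuracy, which on continuous regression targets is essentially always near zero. A regression metric such as mean absolute error is usually more informative; a minimal sketch, reusing the model and opt objects defined above:

    import tensorflow as tf

    # report regression metrics instead of classification-style accuracy
    model.compile(loss='mse', optimizer=opt,
                  metrics=['mae', tf.keras.metrics.RootMeanSquaredError()])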

How can I improve the model so that the accuracy is at least decent? Any advice is welcome, thank you very much!
