Changes in loss values mid-training of an LSTM

Date: 2021-03-25 02:27:29

Tags: python keras neural-network lstm

I am currently working on a neural network project, and I need some help understanding the relationship between the parameters and the values my network outputs.

My goal is to train an LSTM neural network to detect stress in speech. The dataset I am using is split into audio of neutral voices and audio of voices under stress. To classify an audio clip as containing stress, I extract relevant features from each frame of speech and then feed that information into the LSTM network.

Since I extract features frame by frame, audio files of different lengths produce extraction outputs of different lengths, proportional to the audio duration. To normalize the network input, I use a padding technique that appends zeros to the end of each extracted feature set until it matches the size of the largest set.

So, for example, if I have 3 audio files with durations of 4, 5 and 6 seconds, the feature sets extracted from the first two audios are padded with zeros to match the length of the set extracted from the third.

A padded feature set looks like this:

[
  [9.323346e+00, 9.222625e+00, 8.910659e+00],
  [8.751126e+00, 8.432300e+00, 8.046866e+00],
  ...
  [7.439109e+00, 7.380966e+00, 6.092496e+00],
  [0, 0, 0],
  [0, 0, 0],
  [0, 0, 0]
]

The whole dataset has dimensions: (number of audio files) x (number of frames in the longest audio file) x (number of features).
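
For reference, a minimal sketch of this post-padding, assuming the per-file features are held as a list of NumPy arrays of shape (frames, n_features); the variable names here are illustrative, not from my actual code:

import numpy as np

def pad_feature_sets(feature_sets):
    # Zero-pad each (frames, n_features) array at the end to match the longest file
    max_frames = max(f.shape[0] for f in feature_sets)
    n_features = feature_sets[0].shape[1]
    padded = np.zeros((len(feature_sets), max_frames, n_features), dtype='float32')
    for i, f in enumerate(feature_sets):
        padded[i, :f.shape[0], :] = f  # original frames first, zeros after
    return padded

# e.g. for the 4 s, 5 s and 6 s files above:
# data = pad_feature_sets([feats_4s, feats_5s, feats_6s])
# data.shape -> (3, frames_in_6s_file, n_features)

Keras also provides keras.preprocessing.sequence.pad_sequences(..., padding='post') for the same purpose, and a keras.layers.Masking(mask_value=0.0) layer placed before the first LSTM would make the network skip the all-zero frames entirely.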

I split my dataset into training, validation and test sets. At the moment I have audio files from two public databases: one set has 576 audio files (288 neutral, 288 stressed) and the other has 240 files (120 neutral, 120 stressed).

The following code shows my LSTM implementation using Keras:

from sklearn.model_selection import train_test_split
from tensorflow import keras

N_HIDDEN_CELLS = 100
LEARNING_RATE = 0.00005
BATCH_SIZE = 32
EPOCHS_N = 30
ACTIVATION_FUNCTION = 'softmax'
LOSS_FUNCTION = 'binary_crossentropy'

def create_model(input_shape):
    model = keras.Sequential()

    # Three stacked LSTM layers, each returning the full sequence
    model.add(keras.layers.LSTM(N_HIDDEN_CELLS, input_shape=input_shape, return_sequences=True))

    model.add(keras.layers.LSTM(N_HIDDEN_CELLS, return_sequences=True))

    model.add(keras.layers.LSTM(N_HIDDEN_CELLS, return_sequences=True))

    model.add(keras.layers.Dropout(0.3))

    # Final LSTM with 2 units and a softmax activation acts as the output layer
    model.add(keras.layers.LSTM(2, activation=ACTIVATION_FUNCTION))

    return model

def prepare_datasets(data, labels, test_size, validation_size):
    # Split off the test set first, then carve the validation set out of the remaining training data
    X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=test_size)
    X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=validation_size)

    return X_train, X_validation, X_test, y_train, y_validation, y_test

X_train, X_validation, X_test, y_train, y_validation, y_test = prepare_datasets(data, labels, 0.25, 0.2)

input_shape = (X_train.shape[1], X_train.shape[2])

model = create_model(input_shape)

optimizer = keras.optimizers.Adam(learning_rate=LEARNING_RATE)

model.compile(optimizer=optimizer, loss=LOSS_FUNCTION, metrics=['accuracy'])

model.summary()

history = model.fit(X_train, y_train, validation_data=(X_validation, y_validation), batch_size=BATCH_SIZE, epochs=EPOCHS_N)

test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)

After various tests and runs, I am not very confident about my network's performance. At first, the validation loss was all over the place, fluctuating a lot and not converging at all. After some parameter tuning, I ended up with the values in the code above. Even so, I am still not very confident, mainly because the validation loss starts to fluctuate after the 15th epoch, more or less. In the first epochs, both the training and validation losses decrease together, but after a number of epochs the training loss keeps decreasing while the validation loss starts to oscillate and rise.

Below are two executions of the same network (with the same parameters as in the code provided) on the same dataset (the one with 576 audio files):

Epoch 1/30
11/11 [==============================] - 5s 194ms/step - loss: 0.8493 - accuracy: 0.4934 - val_loss: 0.8436 - val_accuracy: 0.4943
Epoch 2/30
11/11 [==============================] - 1s 123ms/step - loss: 0.8398 - accuracy: 0.5271 - val_loss: 0.8364 - val_accuracy: 0.4943
Epoch 3/30
11/11 [==============================] - 1s 124ms/step - loss: 0.8291 - accuracy: 0.6015 - val_loss: 0.8277 - val_accuracy: 0.4828
Epoch 4/30
11/11 [==============================] - 1s 128ms/step - loss: 0.8187 - accuracy: 0.6022 - val_loss: 0.8159 - val_accuracy: 0.5402
Epoch 5/30
11/11 [==============================] - 1s 124ms/step - loss: 0.8017 - accuracy: 0.6691 - val_loss: 0.8002 - val_accuracy: 0.5862
Epoch 6/30
11/11 [==============================] - 1s 123ms/step - loss: 0.7754 - accuracy: 0.7081 - val_loss: 0.7750 - val_accuracy: 0.6322
Epoch 7/30
11/11 [==============================] - 1s 124ms/step - loss: 0.7455 - accuracy: 0.7168 - val_loss: 0.7391 - val_accuracy: 0.6092
Epoch 8/30
11/11 [==============================] - 1s 130ms/step - loss: 0.7017 - accuracy: 0.7287 - val_loss: 0.6896 - val_accuracy: 0.6437
Epoch 9/30
11/11 [==============================] - 1s 125ms/step - loss: 0.6519 - accuracy: 0.7210 - val_loss: 0.6311 - val_accuracy: 0.6897
Epoch 10/30
11/11 [==============================] - 1s 129ms/step - loss: 0.5613 - accuracy: 0.7817 - val_loss: 0.5935 - val_accuracy: 0.7356
Epoch 11/30
11/11 [==============================] - 1s 123ms/step - loss: 0.5050 - accuracy: 0.7789 - val_loss: 0.5645 - val_accuracy: 0.7471
Epoch 12/30
11/11 [==============================] - 1s 123ms/step - loss: 0.4612 - accuracy: 0.8098 - val_loss: 0.5127 - val_accuracy: 0.7356
Epoch 13/30
11/11 [==============================] - 1s 127ms/step - loss: 0.4117 - accuracy: 0.8301 - val_loss: 0.4848 - val_accuracy: 0.7931
Epoch 14/30
11/11 [==============================] - 1s 128ms/step - loss: 0.3857 - accuracy: 0.8479 - val_loss: 0.4609 - val_accuracy: 0.7816
Epoch 15/30
11/11 [==============================] - 1s 122ms/step - loss: 0.3392 - accuracy: 0.8724 - val_loss: 0.4467 - val_accuracy: 0.8276
Epoch 16/30
11/11 [==============================] - 1s 118ms/step - loss: 0.3140 - accuracy: 0.8901 - val_loss: 0.4462 - val_accuracy: 0.8161
Epoch 17/30
11/11 [==============================] - 1s 125ms/step - loss: 0.2775 - accuracy: 0.9092 - val_loss: 0.4619 - val_accuracy: 0.8046
Epoch 18/30
11/11 [==============================] - 1s 128ms/step - loss: 0.2963 - accuracy: 0.8873 - val_loss: 0.3995 - val_accuracy: 0.8621
Epoch 19/30
11/11 [==============================] - 1s 122ms/step - loss: 0.2663 - accuracy: 0.9141 - val_loss: 0.4364 - val_accuracy: 0.8276
Epoch 20/30
11/11 [==============================] - 1s 120ms/step - loss: 0.2415 - accuracy: 0.9368 - val_loss: 0.4758 - val_accuracy: 0.8276
Epoch 21/30
11/11 [==============================] - 1s 121ms/step - loss: 0.2209 - accuracy: 0.9297 - val_loss: 0.3855 - val_accuracy: 0.8276
Epoch 22/30
11/11 [==============================] - 1s 121ms/step - loss: 0.1605 - accuracy: 0.9676 - val_loss: 0.3658 - val_accuracy: 0.8621
Epoch 23/30
11/11 [==============================] - 1s 126ms/step - loss: 0.1618 - accuracy: 0.9641 - val_loss: 0.3638 - val_accuracy: 0.8506
Epoch 24/30
11/11 [==============================] - 1s 129ms/step - loss: 0.1309 - accuracy: 0.9728 - val_loss: 0.4450 - val_accuracy: 0.8276
Epoch 25/30
11/11 [==============================] - 1s 125ms/step - loss: 0.2014 - accuracy: 0.9394 - val_loss: 0.3439 - val_accuracy: 0.8621
Epoch 26/30
11/11 [==============================] - 1s 126ms/step - loss: 0.1342 - accuracy: 0.9554 - val_loss: 0.3356 - val_accuracy: 0.8851
Epoch 27/30
11/11 [==============================] - 1s 125ms/step - loss: 0.1555 - accuracy: 0.9618 - val_loss: 0.3486 - val_accuracy: 0.8736
Epoch 28/30
11/11 [==============================] - 1s 124ms/step - loss: 0.1346 - accuracy: 0.9659 - val_loss: 0.3208 - val_accuracy: 0.9080
Epoch 29/30
11/11 [==============================] - 1s 127ms/step - loss: 0.1193 - accuracy: 0.9697 - val_loss: 0.3706 - val_accuracy: 0.8851
Epoch 30/30
11/11 [==============================] - 1s 123ms/step - loss: 0.0836 - accuracy: 0.9777 - val_loss: 0.3623 - val_accuracy: 0.8621
5/5 - 0s - loss: 0.4383 - accuracy: 0.8472

Test accuracy: 0.8472222089767456

Test loss: 0.43826407194137573

[Graph: val_loss vs. train_loss for the 1st execution]

Epoch 1/30
11/11 [==============================] - 5s 190ms/step - loss: 0.8297 - accuracy: 0.5306 - val_loss: 0.8508 - val_accuracy: 0.4138
Epoch 2/30
11/11 [==============================] - 1s 123ms/step - loss: 0.8138 - accuracy: 0.5460 - val_loss: 0.8355 - val_accuracy: 0.4713
Epoch 3/30
11/11 [==============================] - 1s 120ms/step - loss: 0.8082 - accuracy: 0.5384 - val_loss: 0.8145 - val_accuracy: 0.5402
Epoch 4/30
11/11 [==============================] - 1s 118ms/step - loss: 0.7997 - accuracy: 0.5799 - val_loss: 0.7911 - val_accuracy: 0.5517
Epoch 5/30
11/11 [==============================] - 1s 117ms/step - loss: 0.7752 - accuracy: 0.6585 - val_loss: 0.7654 - val_accuracy: 0.5862
Epoch 6/30
11/11 [==============================] - 1s 125ms/step - loss: 0.7527 - accuracy: 0.6609 - val_loss: 0.7289 - val_accuracy: 0.6437
Epoch 7/30
11/11 [==============================] - 1s 121ms/step - loss: 0.7129 - accuracy: 0.7432 - val_loss: 0.6790 - val_accuracy: 0.6782
Epoch 8/30
11/11 [==============================] - 1s 125ms/step - loss: 0.6570 - accuracy: 0.7707 - val_loss: 0.6107 - val_accuracy: 0.7356
Epoch 9/30
11/11 [==============================] - 1s 125ms/step - loss: 0.6112 - accuracy: 0.7513 - val_loss: 0.5529 - val_accuracy: 0.7586
Epoch 10/30
11/11 [==============================] - 1s 129ms/step - loss: 0.5339 - accuracy: 0.8026 - val_loss: 0.4895 - val_accuracy: 0.7816
Epoch 11/30
11/11 [==============================] - 1s 120ms/step - loss: 0.4720 - accuracy: 0.8189 - val_loss: 0.4579 - val_accuracy: 0.8046
Epoch 12/30
11/11 [==============================] - 1s 121ms/step - loss: 0.4332 - accuracy: 0.8527 - val_loss: 0.4169 - val_accuracy: 0.8046
Epoch 13/30
11/11 [==============================] - 1s 122ms/step - loss: 0.3976 - accuracy: 0.8568 - val_loss: 0.3850 - val_accuracy: 0.7931
Epoch 14/30
11/11 [==============================] - 1s 124ms/step - loss: 0.3489 - accuracy: 0.8726 - val_loss: 0.3753 - val_accuracy: 0.8046
Epoch 15/30
11/11 [==============================] - 1s 124ms/step - loss: 0.3088 - accuracy: 0.9020 - val_loss: 0.3562 - val_accuracy: 0.8161
Epoch 16/30
11/11 [==============================] - 1s 124ms/step - loss: 0.3489 - accuracy: 0.8745 - val_loss: 0.3501 - val_accuracy: 0.8391
Epoch 17/30
11/11 [==============================] - 1s 130ms/step - loss: 0.2725 - accuracy: 0.9240 - val_loss: 0.3436 - val_accuracy: 0.8506
Epoch 18/30
11/11 [==============================] - 1s 121ms/step - loss: 0.3494 - accuracy: 0.8764 - val_loss: 0.3516 - val_accuracy: 0.8506
Epoch 19/30
11/11 [==============================] - 1s 119ms/step - loss: 0.2553 - accuracy: 0.9243 - val_loss: 0.3413 - val_accuracy: 0.8391
Epoch 20/30
11/11 [==============================] - 1s 122ms/step - loss: 0.2723 - accuracy: 0.9092 - val_loss: 0.3258 - val_accuracy: 0.8621
Epoch 21/30
11/11 [==============================] - 1s 121ms/step - loss: 0.2600 - accuracy: 0.9306 - val_loss: 0.3257 - val_accuracy: 0.8506
Epoch 22/30
11/11 [==============================] - 1s 126ms/step - loss: 0.2406 - accuracy: 0.9411 - val_loss: 0.3203 - val_accuracy: 0.8966
Epoch 23/30
11/11 [==============================] - 1s 127ms/step - loss: 0.1892 - accuracy: 0.9577 - val_loss: 0.3191 - val_accuracy: 0.8851
Epoch 24/30
11/11 [==============================] - 1s 127ms/step - loss: 0.1869 - accuracy: 0.9594 - val_loss: 0.3246 - val_accuracy: 0.8621
Epoch 25/30
11/11 [==============================] - 1s 122ms/step - loss: 0.1898 - accuracy: 0.9487 - val_loss: 0.3217 - val_accuracy: 0.8851
Epoch 26/30
11/11 [==============================] - 1s 125ms/step - loss: 0.1731 - accuracy: 0.9523 - val_loss: 0.3280 - val_accuracy: 0.8506
Epoch 27/30
11/11 [==============================] - 1s 128ms/step - loss: 0.1445 - accuracy: 0.9687 - val_loss: 0.3213 - val_accuracy: 0.8851
Epoch 28/30
11/11 [==============================] - 1s 117ms/step - loss: 0.1441 - accuracy: 0.9718 - val_loss: 0.3212 - val_accuracy: 0.8621
Epoch 29/30
11/11 [==============================] - 1s 124ms/step - loss: 0.1250 - accuracy: 0.9762 - val_loss: 0.3232 - val_accuracy: 0.8851
Epoch 30/30
11/11 [==============================] - 1s 123ms/step - loss: 0.1460 - accuracy: 0.9687 - val_loss: 0.3218 - val_accuracy: 0.8736
5/5 - 0s - loss: 0.3297 - accuracy: 0.8889

Test accuracy: 0.8888888955116272

Test loss: 0.32971107959747314

[Graph: val_loss vs. train_loss for the 2nd execution]

Some additional information:

  • My labels are one-hot encoded.
  • The frame step is 0.05 seconds.
  • The frame size is 0.125 seconds.
  • When I run this configuration with the smaller dataset, the behavior is slightly different: the loss values decrease more evenly, but somewhat slowly. I tried increasing the number of epochs, but after the 30th epoch (more or less) the validation loss starts to fluctuate and rise.

My questions are:

  • What could be causing this validation loss problem?
  • If a model's loss is high but its accuracy is still decent, what does that mean?
  • I have read about binary cross-entropy, but I am not sure I understand what the loss values in my tests mean. Could someone help me interpret them? (A worked sketch follows this list.)
  • Can this padding strategy hurt the network's performance?
  • Given how an LSTM is defined, are my input data and its dimensions consistent?
  • Could this be related to the size of my dataset?
  • What is an acceptable validation loss?
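
As a point of reference for the binary cross-entropy question: with one-hot labels y and predicted outputs p, Keras' binary_crossentropy averages the elementwise term -(y*log(p) + (1-y)*log(1-p)) over the outputs and the batch. A minimal hand-computed sketch (the numbers are illustrative, not from the runs above):

import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Mean elementwise binary cross-entropy, as Keras computes it
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return bce.mean()

y_true = np.array([[1, 0], [0, 1]])          # one-hot labels for two samples
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])  # softmax outputs, mostly correct
print(binary_crossentropy(y_true, y_pred))   # ~0.16

Because softmax over two units makes the outputs complementary, this reduces on average to -log(p_correct), so a validation loss of about 0.36 corresponds to the model assigning roughly exp(-0.36) ≈ 0.70 probability to the correct class on average.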

1 Answer:

Answer 0 (score: 0)

A validation loss well above the training loss usually means overfitting. Note that your val_loss is not really high in absolute terms; it is just higher than the training loss. The validation accuracy is also decent, just much lower than on the training data, which the network has effectively memorized.

Basically, you need to reduce the model's capacity so that it generalizes at the level of complexity of the problem at hand. Use more dropout and fewer parameters/layers.
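
A minimal sketch of the kind of slimmed-down model this suggests; the exact sizes (32 units, 0.5 dropout) are illustrative guesses, not tuned values:

def create_smaller_model(input_shape):
    model = keras.Sequential()
    # One narrower recurrent layer instead of four stacked ones
    model.add(keras.layers.LSTM(32, input_shape=input_shape))
    # Heavier dropout to further discourage memorization
    model.add(keras.layers.Dropout(0.5))
    # A plain Dense softmax head in place of the final LSTM(2) layer
    model.add(keras.layers.Dense(2, activation='softmax'))
    return model

Replacing the last LSTM(2, activation='softmax') with a Dense layer is a conventional choice rather than something required by the advice above; the core suggestion is simply fewer parameters and more dropout.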