InceptionV3 + LSTM activity recognition: accuracy improves for 10 epochs, then drops

Posted: 2019-03-28 07:35:09

Tags: keras lstm yolo

I am trying to build a model for activity recognition. It uses InceptionV3 as the backbone and an LSTM for the detection, with pre-trained weights.

  train_generator = datagen.flow_from_directory(
          'dataset/train',
          target_size=(1,224, 224),
          batch_size=batch_size,
          class_mode='categorical',  # yields batches of images with one-hot encoded labels
          shuffle=True,
          classes=['PlayingPiano','HorseRiding','Skiing', 'Basketball','BaseballPitch'])

  validation_generator = datagen.flow_from_directory(
          'dataset/validate',
          target_size=(1,224, 224),
          batch_size=batch_size,
          class_mode='categorical',  # yields batches of images with one-hot encoded labels
          shuffle=True,
          classes=['PlayingPiano','HorseRiding','Skiing', 'Basketball','BaseballPitch'])
  return train_generator,validation_generator
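
The `datagen` and `batch_size` used above are defined elsewhere and not shown in the post. A minimal sketch of what they might look like (the batch size and preprocessing choices here are assumptions, not taken from the question):

    from keras.preprocessing.image import ImageDataGenerator

    # Hypothetical reconstruction -- the post does not show these definitions.
    batch_size = 16  # assumed value
    datagen = ImageDataGenerator(rescale=1. / 255)  # simple pixel scaling as a placeholder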

I am training on 5 classes, so my data is split into separate train and validation folders. Here is my CNN + LSTM architecture:

    image = Input(shape=(None, 224, 224, 3), name='image_input')
    cnn = applications.inception_v3.InceptionV3(
        weights='imagenet',
        include_top=False,
        pooling='avg')
    cnn.trainable = False
    encoded_frame = TimeDistributed(Lambda(lambda x: cnn(x)))(image)
    encoded_vid = LSTM(256)(encoded_frame)
    layer1 = Dense(512, activation='relu')(encoded_vid)
    dropout1 = Dropout(0.5)(layer1)
    layer2 = Dense(256, activation='relu')(dropout1)
    dropout2 = Dropout(0.5)(layer2)
    layer3 = Dense(64, activation='relu')(dropout2)
    dropout3 = Dropout(0.5)(layer3)
    outputs = Dense(5, activation='softmax')(dropout3)

    model = Model(inputs=[image], outputs=outputs)
    sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

    model.fit_generator(train_generator,
                        validation_data=validation_generator,
                        steps_per_epoch=300,
                        epochs=nb_epoch,
                        callbacks=callbacks,
                        shuffle=True,
                        verbose=1)
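
The `callbacks` passed to `fit_generator` are not shown either. Judging from the log messages below ("val_loss improved ..., saving model to ./weights_inception/Inception_V3.XX-0.XX.h5"), a ModelCheckpoint roughly like the following sketch seems to be in use; the exact arguments are an assumption:

    from keras.callbacks import ModelCheckpoint

    # Reconstructed from the log output; the real settings may differ.
    checkpoint = ModelCheckpoint(
        './weights_inception/Inception_V3.{epoch:02d}-{val_acc:.2f}.h5',
        monitor='val_loss',    # the log reports a save whenever val_loss improves
        save_best_only=True,
        verbose=1)
    callbacks = [checkpoint]

The resulting model summary: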

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
image_input (InputLayer)     (None, None, 224, 224, 3) 0
_________________________________________________________________
time_distributed_1 (TimeDist (None, None, 2048)        0
_________________________________________________________________
lstm_1 (LSTM)                (None, 256)               2360320
_________________________________________________________________
dense_1 (Dense)              (None, 512)               131584
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 256)               131328
_________________________________________________________________
dropout_2 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_3 (Dense)              (None, 64)                16448
_________________________________________________________________
dropout_3 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_4 (Dense)              (None, 5)                 325
_________________________________________________________________

The model compiles without any problems. The trouble starts during training: it reaches val_acc = 0.50, then falls back to around val_acc = 0.30, and the loss gets stuck at about 0.80 and barely moves.

Here is the training log. As you can see, the model improves for some time, then slowly degrades and eventually freezes. Any idea what is causing this?

Epoch 00002: val_loss improved from 1.56471 to 1.55652, saving model to ./weights_inception/Inception_V3.02-0.28.h5
Epoch 3/500
300/300 [==============================] - 66s 219ms/step - loss: 1.5436 - acc: 0.3281 - val_loss: 1.5476 - val_acc: 0.2981

Epoch 00003: val_loss improved from 1.55652 to 1.54757, saving model to ./weights_inception/Inception_V3.03-0.30.h5
Epoch 4/500
300/300 [==============================] - 66s 220ms/step - loss: 1.5109 - acc: 0.3593 - val_loss: 1.5284 - val_acc: 0.3588

Epoch 00004: val_loss improved from 1.54757 to 1.52841, saving model to ./weights_inception/Inception_V3.04-0.36.h5
Epoch 5/500
300/300 [==============================] - 66s 221ms/step - loss: 1.4167 - acc: 0.4167 - val_loss: 1.4945 - val_acc: 0.3553

Epoch 00005: val_loss improved from 1.52841 to 1.49446, saving model to ./weights_inception/Inception_V3.05-0.36.h5
Epoch 6/500
300/300 [==============================] - 66s 221ms/step - loss: 1.2941 - acc: 0.4683 - val_loss: 1.4735 - val_acc: 0.4443

Epoch 00006: val_loss improved from 1.49446 to 1.47345, saving model to ./weights_inception/Inception_V3.06-0.44.h5
Epoch 7/500
300/300 [==============================] - 66s 221ms/step - loss: 1.2096 - acc: 0.5116 - val_loss: 1.3738 - val_acc: 0.5186

Epoch 00007: val_loss improved from 1.47345 to 1.37381, saving model to ./weights_inception/Inception_V3.07-0.52.h5
Epoch 8/500
300/300 [==============================] - 66s 221ms/step - loss: 1.1477 - acc: 0.5487 - val_loss: 1.2337 - val_acc: 0.5788

Epoch 00008: val_loss improved from 1.37381 to 1.23367, saving model to ./weights_inception/Inception_V3.08-0.58.h5
Epoch 9/500
300/300 [==============================] - 66s 221ms/step - loss: 1.0809 - acc: 0.5831 - val_loss: 1.2247 - val_acc: 0.5658

Epoch 00009: val_loss improved from 1.23367 to 1.22473, saving model to ./weights_inception/Inception_V3.09-0.57.h5
Epoch 10/500
300/300 [==============================] - 66s 221ms/step - loss: 1.0362 - acc: 0.6089 - val_loss: 1.1704 - val_acc: 0.5774

Epoch 00010: val_loss improved from 1.22473 to 1.17035, saving model to ./weights_inception/Inception_V3.10-0.58.h5
Epoch 11/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9811 - acc: 0.6317 - val_loss: 1.1612 - val_acc: 0.5616

Epoch 00011: val_loss improved from 1.17035 to 1.16121, saving model to ./weights_inception/Inception_V3.11-0.56.h5
Epoch 12/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9444 - acc: 0.6471 - val_loss: 1.1533 - val_acc: 0.5613

Epoch 00012: val_loss improved from 1.16121 to 1.15330, saving model to ./weights_inception/Inception_V3.12-0.56.h5
Epoch 13/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9072 - acc: 0.6650 - val_loss: 1.1843 - val_acc: 0.5361

Epoch 00013: val_loss did not improve from 1.15330
Epoch 14/500
300/300 [==============================] - 66s 221ms/step - loss: 0.8747 - acc: 0.6744 - val_loss: 1.2135 - val_acc: 0.5258

Epoch 00014: val_loss did not improve from 1.15330
Epoch 15/500
300/300 [==============================] - 67s 222ms/step - loss: 0.8666 - acc: 0.6829 - val_loss: 1.1585 - val_acc: 0.5443

Epoch 00015: val_loss did not improve from 1.15330
Epoch 16/500
300/300 [==============================] - 66s 222ms/step - loss: 0.8386 - acc: 0.6926 - val_loss: 1.1503 - val_acc: 0.5482

Epoch 00016: val_loss improved from 1.15330 to 1.15026, saving model to ./weights_inception/Inception_V3.16-0.55.h5
Epoch 17/500
300/300 [==============================] - 66s 221ms/step - loss: 0.8199 - acc: 0.7023 - val_loss: 1.2162 - val_acc: 0.5288

Epoch 00017: val_loss did not improve from 1.15026
Epoch 18/500
300/300 [==============================] - 66s 222ms/step - loss: 0.8018 - acc: 0.7150 - val_loss: 1.1995 - val_acc: 0.5179

Epoch 00018: val_loss did not improve from 1.15026
Epoch 19/500
300/300 [==============================] - 66s 221ms/step - loss: 0.7923 - acc: 0.7186 - val_loss: 1.2218 - val_acc: 0.5137

Epoch 00019: val_loss did not improve from 1.15026
Epoch 20/500
300/300 [==============================] - 67s 222ms/step - loss: 0.7748 - acc: 0.7268 - val_loss: 1.2880 - val_acc: 0.4574

Epoch 00020: val_loss did not improve from 1.15026
Epoch 21/500
300/300 [==============================] - 66s 221ms/step - loss: 0.7604 - acc: 0.7330 - val_loss: 1.2658 - val_acc: 0.4861

2 Answers:

Answer 0 (score: 0)

The model is starting to overfit. Ideally, the training loss should keep decreasing as the epochs go by (depending on the learning rate); if it does not, your model may have high bias with respect to the data, and you could use a bigger model (more parameters or deeper layers).

You can also try lowering the learning rate; if the loss still stays frozen, then the model's bias is probably low.
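
As one concrete (hypothetical) way to apply the learning-rate suggestion, a ReduceLROnPlateau callback could shrink the rate automatically when val_loss stops improving; this sketch is not part of the original answer:

    from keras.callbacks import ReduceLROnPlateau

    # Sketch only: halves the learning rate whenever val_loss has not
    # improved for 3 epochs. It is meant to be added to the callbacks
    # list passed to fit_generator, alongside the ModelCheckpoint.
    reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                                  factor=0.5,
                                  patience=3,
                                  min_lr=1e-5,
                                  verbose=1)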

Answer 1 (score: 0)

Thanks for the help. Yes, the problem was overfitting, so I applied more aggressive dropout on the LSTM, which helped. But val_loss and val_acc are still quite low:

    video = Input(shape=(None, 224, 224, 3))
    cnn_base = VGG16(input_shape=(224, 224, 3),
                     weights="imagenet",
                     include_top=False)
    cnn_out = GlobalAveragePooling2D()(cnn_base.output)
    cnn = Model(inputs=cnn_base.input, outputs=cnn_out)
    cnn.trainable = False
    encoded_frames = TimeDistributed(cnn)(video)
    # kernel_regularizer is the Keras 2 name for the legacy W_regularizer argument
    encoded_sequence = LSTM(32, dropout=0.5, kernel_regularizer=l2(0.01),
                            recurrent_dropout=0.5)(encoded_frames)
    hidden_layer = Dense(units=64, activation="relu")(encoded_sequence)
    dropout = Dropout(0.2)(hidden_layer)
    outputs = Dense(5, activation="softmax")(dropout)
    model = Model([video], outputs)
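
The compile and fit calls for this revised model are not shown; presumably they mirror the earlier setup. A minimal sketch under that assumption (the 100 steps per epoch and 500 epochs are taken from the log below):

    from keras.optimizers import SGD

    # Assumed to match the earlier training setup; not shown in the answer.
    sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit_generator(train_generator,
                        validation_data=validation_generator,
                        steps_per_epoch=100,
                        epochs=500,
                        callbacks=callbacks,
                        verbose=1)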

Here are the logs:

Epoch 00033: val_loss improved from 1.62041 to 1.57951, saving model to ./weights_inception/Inception_V3.33-0.76.h5
Epoch 34/500
100/100 [==============================] - 54s 537ms/step - loss: 0.6301 - acc: 0.9764 - val_loss: 1.6190 - val_acc: 0.7627

Epoch 00034: val_loss did not improve from 1.57951
Epoch 35/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5907 - acc: 0.9840 - val_loss: 1.5927 - val_acc: 0.7608

Epoch 00035: val_loss did not improve from 1.57951
Epoch 36/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5783 - acc: 0.9812 - val_loss: 1.3477 - val_acc: 0.7769

Epoch 00036: val_loss improved from 1.57951 to 1.34772, saving model to ./weights_inception/Inception_V3.36-0.78.h5
Epoch 37/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5618 - acc: 0.9802 - val_loss: 1.6545 - val_acc: 0.7384

Epoch 00037: val_loss did not improve from 1.34772
Epoch 38/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5382 - acc: 0.9818 - val_loss: 1.8298 - val_acc: 0.7421

Epoch 00038: val_loss did not improve from 1.34772
Epoch 39/500
100/100 [==============================] - 54s 536ms/step - loss: 0.5080 - acc: 0.9844 - val_loss: 1.7948 - val_acc: 0.7290

Epoch 00039: val_loss did not improve from 1.34772
Epoch 40/500
100/100 [==============================] - 54s 537ms/step - loss: 0.4800 - acc: 0.9892 - val_loss: 1.8036 - val_acc: 0.7522