Training a neural network decreases its accuracy

Asked: 2020-10-26 20:46:48

Tags: python tensorflow machine-learning neural-network object-detection-api

My end goal is a neural network model that watches a live video feed and determines whether a person is holding a knife or some other utensil such as a fork or spoon. I have tried several models pre-trained on the COCO dataset and found that models such as faster_RCNN have incredibly high accuracy, but they cannot handle live video because each prediction takes several seconds. Models like SSD_mobilenet, on the other hand, do run fast enough for live video, but their accuracy on cutlery is much lower.

For this reason, I want to retrain the MobileNet model to get better accuracy on knives, so I created a subset of the COCO dataset containing only knives.
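
Roughly, a knife-only subset can be pulled out of the COCO annotations with pycocotools. A minimal sketch (the annotation file, image directory, and output folder below are assumptions, not necessarily the exact layout I used):

from pycocotools.coco import COCO
import os
import shutil

# Assumed paths; adjust to the local COCO 2014 layout.
ANN_FILE = 'coco/annotations/instances_train2014.json'
IMG_DIR = 'coco/train2014'
OUT_DIR = 'coco/knife_dataset2014/train/knife'

coco = COCO(ANN_FILE)
# Look up the COCO category id for "knife" and every image that contains one.
knife_cat_ids = coco.getCatIds(catNms=['knife'])
knife_img_ids = coco.getImgIds(catIds=knife_cat_ids)

os.makedirs(OUT_DIR, exist_ok=True)
for img in coco.loadImgs(knife_img_ids):
    # Copy each knife image into the class folder expected by flow_from_directory.
    shutil.copy(os.path.join(IMG_DIR, img['file_name']), OUT_DIR)

print('Copied {} knife images'.format(len(knife_img_ids)))

With that subset in place, I used the following code from the TensorFlow tutorial to retrain the model: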

import itertools
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

print("TF version:", tf.__version__)
print("Hub version:", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")

module_selection = ("mobilenet_v2_100_224", 224) 
handle_base, pixels = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {}".format(MODULE_HANDLE, IMAGE_SIZE))

BATCH_SIZE = 32

data_dir = 'coco/knife_dataset2014'

# Rescale pixel values to [0, 1] and hold out 20% of the images for validation.
datagen_kwargs = dict(rescale=1./255, validation_split=.20)
dataflow_kwargs = dict(target_size=IMAGE_SIZE, batch_size=BATCH_SIZE,
                   interpolation="bilinear")

valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    **datagen_kwargs)
valid_generator = valid_datagen.flow_from_directory(
    'coco/knife_dataset2014/val', subset="validation", shuffle=False, **dataflow_kwargs)

do_data_augmentation = False 
if do_data_augmentation:
  train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
      rotation_range=40,
      horizontal_flip=True,
      width_shift_range=0.2, height_shift_range=0.2,
      shear_range=0.2, zoom_range=0.2,
      **datagen_kwargs)
else:
  train_datagen = valid_datagen
train_generator = train_datagen.flow_from_directory(
    'coco/knife_dataset2014/train', subset="training", shuffle=True, **dataflow_kwargs)

# Keep the pre-trained feature extractor frozen; only the new classification head is trained.
do_fine_tuning = False

print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
    # Explicitly define the input shape so the model can be properly
    # loaded by the TFLiteConverter
    tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
    hub.KerasLayer(MODULE_HANDLE, trainable=do_fine_tuning),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(train_generator.num_classes,
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,)+IMAGE_SIZE+(3,))
model.summary()

model.compile(
  optimizer=tf.keras.optimizers.SGD(lr=0.005, momentum=0.9), 
  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
  metrics=['accuracy'])

steps_per_epoch = train_generator.samples // train_generator.batch_size
validation_steps = valid_generator.samples // valid_generator.batch_size
hist = model.fit(
    train_generator,
    epochs=2, steps_per_epoch=steps_per_epoch,
    validation_data=valid_generator,
    validation_steps=validation_steps).history

saved_model_path = "saved_knife_model"
tf.saved_model.save(model, saved_model_path)

When I run it, it prints:

GPU is NOT AVAILABLE
Using https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4 with input size (224, 224)
Found 50 images belonging to 1 classes.
Found 2956 images belonging to 1 classes.
Building model with https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
keras_layer (KerasLayer)     (None, 1280)              2257984   
_________________________________________________________________
dropout (Dropout)            (None, 1280)              0         
_________________________________________________________________
dense (Dense)                (None, 1)                 1281      
=================================================================
Total params: 2,259,265
Trainable params: 1,281
Non-trainable params: 2,257,984

Then, as training progresses, its accuracy keeps going down. The accuracy seems to start at a different value on every run (sometimes 0.6, sometimes 0.2) and mostly drops with each epoch. In the most recent run it started at 0.26 and had fallen to 0.22 by epoch 100. Does anyone know what is going on here?
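
For what it's worth, this is roughly how the per-epoch curves can be plotted from the hist dict returned by model.fit above (just a sketch using the standard Keras history keys):

# Plot training vs. validation accuracy per epoch from the hist dict above.
plt.figure()
plt.plot(hist['accuracy'], label='train accuracy')
plt.plot(hist['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()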

1 Answer:

Answer 0 (score: 0):

If the loss decreases during training but the accuracy does not improve along with it, the model is most likely overfitting to the training data, which is why the accuracy on the validation data drops (as long as your training and validation data are kept separate, which they look like they should be).

Here's a good article on how to mitigate this effect (skip straight to section 6 for ways to prevent it).
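
For illustration (these are common Keras options, not taken from the article): turning the augmentation branch in the question's code back on (do_data_augmentation = True) and adding an early-stopping callback would look roughly like this; the epoch count and patience are arbitrary:

# Stop training once validation loss stops improving and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=3, restore_best_weights=True)

hist = model.fit(
    train_generator,
    epochs=20, steps_per_epoch=steps_per_epoch,
    validation_data=valid_generator,
    validation_steps=validation_steps,
    callbacks=[early_stop]).history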

As for the pre-trained faster_RCNN model, which you said is accurate but slow: you could try using a GPU to compute its predictions, which might make it fast enough to be usable.
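
A quick way to check whether TensorFlow actually sees a GPU, and to pin the prediction step to it, is sketched below (detection_model and input_tensor are placeholders for your own loaded model and preprocessed frame):

import tensorflow as tf

# An empty list here means predictions will silently run on the CPU.
print(tf.config.list_physical_devices('GPU'))

# Run the forward pass on the first GPU; both names below are placeholders.
with tf.device('/GPU:0'):
    detections = detection_model(input_tensor)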