Validation loss and validation accuracy curves fluctuate

Asked: 2019-11-14 23:25:45

Tags: python tensorflow keras conv-neural-network mobilenet

I am currently studying neural networks and ran into a problem while trying to understand CNNs. I am trying to train on data consisting of spectrograms of music genres. My dataset consists of 27,000 spectrograms divided into 3 classes (genres), split 9:1 into training and validation sets.

Can anyone help me understand why my validation loss/accuracy results fluctuate? I am using Keras's MobileNetV2 connected to 3 Dense layers. Here is my code snippet:

# Imports needed to make the snippet runnable
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    validation_split=0.1)  # hold out 10% of the data for validation (the 9:1 split)

train_generator = train_datagen.flow_from_dataframe(
    dataframe=traindf,
    directory="...",
    color_mode='rgb',
    x_col="ID",
    y_col="Class",
    subset="training",
    batch_size=32,
    seed=42,
    shuffle=True,
    class_mode="categorical",
    target_size=(64, 64))

valid_generator = train_datagen.flow_from_dataframe(
    dataframe=traindf,
    directory="...",
    color_mode='rgb',
    x_col="ID",
    y_col="Class",
    subset="validation",
    batch_size=32,
    seed=42,
    shuffle=True,
    class_mode="categorical",
    target_size=(64, 64))

base_model = MobileNetV2(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1025, activation='relu')(x)
x = Dense(1025, activation='relu')(x)
x = Dense(512, activation='relu')(x)
preds = Dense(3, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=preds)

model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])

step_size_train = train_generator.n//train_generator.batch_size
step_size_valid = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(
    generator=train_generator,
    steps_per_epoch=step_size_train,
    validation_data=valid_generator,
    validation_steps=step_size_valid,
    epochs=75)

These are the pictures of my validation loss and validation accuracy curves; they fluctuate a lot.

Is there a way to reduce or improve the fluctuation? Do I have an overfitting or underfitting problem here? I have tried using Dropout(), but it only made things worse. What can I do to fix this?

Thanks, Aquilla Setiawan Kanadi.

1 Answer:

Answer 0 (score: 0)

To begin with, the pictures of your validation loss and validation accuracy are missing.

To answer your question, the following are possible reasons for your validation loss and validation accuracy fluctuating -

  1. You have added about 1.3x the base_model's trainable weights on top of it to build the model. (model Trainable Parameters 5115398 - base_model Trainable Parameters 2223872 = 2891526 new weights)

Program statistics:

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from keras.utils.layer_utils import count_params  # sums the parameter counts of a list of weight tensors

class color:
   PURPLE = '\033[95m'
   CYAN = '\033[96m'
   DARKCYAN = '\033[36m'
   BLUE = '\033[94m'
   GREEN = '\033[92m'
   YELLOW = '\033[93m'
   RED = '\033[91m'
   BOLD = '\033[1m'
   UNDERLINE = '\033[4m'
   END = '\033[0m'

base_model = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=False)

#base_model.summary()
trainable_count = count_params(base_model.trainable_weights)
non_trainable_count = count_params(base_model.non_trainable_weights)
print("\n",color.BOLD + '  base_model Statistics !' + color.END)
print("Trainable Parameters :", color.BOLD + str(trainable_count) + color.END)
print("Non Trainable Parameters :", non_trainable_count,"\n")

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1025, activation='relu')(x)
x = Dense(1025, activation='relu')(x)
x = Dense(512, activation='relu')(x)
preds = Dense(3, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=preds)

#model.summary()
trainable_count = count_params(model.trainable_weights)
non_trainable_count = count_params(model.non_trainable_weights)
print(color.BOLD + '    model Statistics !' + color.END)
print("Trainable Parameters :", color.BOLD + str(trainable_count) + color.END)
print("Non Trainable Parameters :", non_trainable_count,"\n")

new_weights_added = count_params(model.trainable_weights) - count_params(base_model.trainable_weights)
print("Additional trainable weights added to the model excluding basel model trainable weights :", color.BOLD + str(new_weights_added) + color.END)

Output -

WARNING:tensorflow:`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

   base_model Statistics !
Trainable Parameters : 2223872
Non Trainable Parameters : 34112 

    model Statistics !
Trainable Parameters : 5115398
Non Trainable Parameters : 34112 

Additional trainable weights added to the model excluding base model trainable weights : 2891526
  2. You are training the complete model's weights (the MobileNetV2 weights plus the additional layers' weights).

Possible solutions to your problem are -

  1. Customize the additional layers so that they add as few new trainable parameters as possible compared to the base_model's trainable parameters. You could add max pooling layers and reduce the dense layers, as in the sketch below -
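
As an illustration, a minimal sketch of a slimmer head that keeps the global pooling layer and shrinks the dense stack (the size 128 is an arbitrary example, not a tuned value) -

x = base_model.output
x = GlobalAveragePooling2D()(x)            # pools to 1280 features, adds no weights
x = Dense(128, activation='relu')(x)       # 1280*128 + 128 = 163968 new weights
preds = Dense(3, activation='softmax')(x)  # 128*3 + 3 = 387 new weights
model = Model(inputs=base_model.input, outputs=preds)  # ~164k new weights vs. 2891526 before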

  2. Freeze the base model with base_model.trainable = False and train only the new layers added on top of the MobileNetV2 layers, as in the sketch below -
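
A minimal sketch of this approach (note that the model must be compiled again after changing trainability for the change to take effect) -

base_model.trainable = False  # freeze all MobileNetV2 layers

model = Model(inputs=base_model.input, outputs=preds)
# Recompile so the frozen layers are excluded from training
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])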

Alternatively, unfreeze the top layers of the base model (the MobileNetV2 layers) and keep the bottom layers untrainable. You can do this as follows, where we freeze the model up to the 100th layer and leave the remaining layers trainable -

# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))

# Fine-tune from this layer onwards
fine_tune_at = 100

# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

Output -

Number of layers in the base model:  155
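
Remember that this trainability change only takes effect once the model is compiled again; a lower learning rate is also commonly used when fine-tuning (the 1e-5 below is an illustrative choice, not a tuned value) -

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # low LR for fine-tuning
              loss='categorical_crossentropy',
              metrics=['accuracy'])
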
  3. Train the model with hyperparameter tuning. You can find more information about hyperparameter tuning here.
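
As an illustration, a minimal sketch of a manual learning-rate search (the candidate values and the 5-epoch budget per trial are arbitrary choices for this sketch) -

def build_model():
    # Rebuild from scratch so every trial starts from the same initial weights
    base = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=False)
    base.trainable = False
    x = GlobalAveragePooling2D()(base.output)
    x = Dense(128, activation='relu')(x)
    preds = Dense(3, activation='softmax')(x)
    return Model(inputs=base.input, outputs=preds)

best_lr, best_val_acc = None, 0.0
for lr in [1e-3, 1e-4, 1e-5]:  # candidate learning rates
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    history = model.fit_generator(
        generator=train_generator,
        steps_per_epoch=step_size_train,
        validation_data=valid_generator,
        validation_steps=step_size_valid,
        epochs=5)
    val_acc = max(history.history['val_accuracy'])  # key is 'val_acc' in older Keras versions
    if val_acc > best_val_acc:
        best_lr, best_val_acc = lr, val_acc

print("Best learning rate:", best_lr)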