How can I improve my model's loss and accuracy?

Time: 2020-05-21 04:52:37

Tags: python tensorflow machine-learning keras deep-learning

I am currently using a U-Net model taken from a Kaggle starter kernel, with a few parameters modified, and training it to classify the TACO dataset. At this point I have no idea how to go about optimizing the model further. I am getting an incredibly high loss and terrible accuracy, and I am not entirely sure which parameters would improve the model's accuracy and loss. The TACO dataset has 60 categories (61 including the background). Am I doing something wrong? I am new to this, so any references or suggestions I could read up on would be greatly appreciated.

Here is the code for my model:

import tensorflow as tf
from tensorflow.keras.layers import Input

IMG_WIDTH = 224
IMG_HEIGHT = 224
IMG_CHANNELS = 3
epochs = 25
validation_steps = val_size    # val_size comes from dataset code not shown here
steps_per_epoch = train_size   # train_size comes from dataset code not shown here

##Creating the model

initializer = "he_normal"

###Building U-Net Model

##Input Layer
inputs = Input((IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS))

##Converting inputs to float
s = tf.keras.layers.Lambda(lambda x: x / 255)(inputs)

##Contraction
c1 = tf.keras.layers.Conv2D(16, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(s)
c1 = tf.keras.layers.Dropout(0.1)(c1)
c1 = tf.keras.layers.Conv2D(16, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c1)
p1 = tf.keras.layers.MaxPooling2D((2,2))(c1)

c2 = tf.keras.layers.Conv2D(32, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(p1)
c2 = tf.keras.layers.Dropout(0.1)(c2)
c2 = tf.keras.layers.Conv2D(32, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c2)
p2 = tf.keras.layers.MaxPooling2D((2,2))(c2)

c3 = tf.keras.layers.Conv2D(64, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(p2)
c3 = tf.keras.layers.Dropout(0.2)(c3)
c3 = tf.keras.layers.Conv2D(64, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c3)
p3 = tf.keras.layers.MaxPooling2D((2,2))(c3)

c4 = tf.keras.layers.Conv2D(128, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(p3)
c4 = tf.keras.layers.Dropout(0.2)(c4)
c4 = tf.keras.layers.Conv2D(128, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c4)
p4 = tf.keras.layers.MaxPooling2D((2,2))(c4)

c5 = tf.keras.layers.Conv2D(256, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(p4)
c5 = tf.keras.layers.Dropout(0.3)(c5)
c5 = tf.keras.layers.Conv2D(256, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c5)

##Expansion
u6 = tf.keras.layers.Conv2DTranspose(128, (2,2), strides=(2,2), padding="same")(c5)
u6 = tf.keras.layers.concatenate([u6, c4])
c6 = tf.keras.layers.Conv2D(128, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(u6)
c6 = tf.keras.layers.Dropout(0.2)(c6)
c6 = tf.keras.layers.Conv2D(128, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c6)

u7 = tf.keras.layers.Conv2DTranspose(64, (2,2), strides=(2,2), padding="same")(c6)
u7 = tf.keras.layers.concatenate([u7, c3])
c7 = tf.keras.layers.Conv2D(64, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(u7)
c7 = tf.keras.layers.Dropout(0.2)(c7)
c7 = tf.keras.layers.Conv2D(64, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c7)

u8 = tf.keras.layers.Conv2DTranspose(32, (2,2), strides=(2,2), padding="same")(c7)
u8 = tf.keras.layers.concatenate([u8, c2])
c8 = tf.keras.layers.Conv2D(32, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(u8)
c8 = tf.keras.layers.Dropout(0.1)(c8)
c8 = tf.keras.layers.Conv2D(32, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c8)

u9 = tf.keras.layers.Conv2DTranspose(16, (2,2), strides=(2,2), padding="same")(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis=3)
c9 = tf.keras.layers.Conv2D(16, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(u9)
c9 = tf.keras.layers.Dropout(0.1)(c9)
c9 = tf.keras.layers.Conv2D(16, (3,3), activation="relu", kernel_initializer=initializer, padding="same")(c9)

##Output Layer
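# Note: Dense applied to a 4D tensor acts on the last axis only,
# so this produces per-pixel class scores of shape (224, 224, 61)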
outputs = tf.keras.layers.Dense(61, activation="softmax")(c9)

##Defining Model
model = tf.keras.Model(inputs=[inputs], outputs=[outputs])

##Compiling Model
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=['accuracy'])

##Training the model
results = model.fit(x = train_gen, 
                    validation_data = val_gen, 
                    steps_per_epoch = steps_per_epoch, 
                    validation_steps = validation_steps, 
                    epochs = epochs, 
                    verbose = True)

Here are the accuracy and loss during the first epoch:

Epoch 1/25
 185/1200 [===>..........................] - ETA: 3:30:04 - loss: 388.0077 - accuracy: 9.0721e-04

I am currently using TensorBoard, ModelCheckpoint, and EarlyStopping callbacks, but unfortunately I don't know how they can help optimize the model. Would a larger number of neurons per layer help?
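For reference, here is a minimal sketch of how these three callbacks are typically passed to fit (the checkpoint path and patience value are just placeholders):

callbacks = [
    # Stop training once val_loss has not improved for 3 epochs
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3),
    # Keep only the best weights seen so far
    tf.keras.callbacks.ModelCheckpoint("unet_best.h5", save_best_only=True),
    # Write logs viewable with `tensorboard --logdir logs`
    tf.keras.callbacks.TensorBoard(log_dir="logs"),
]

model.fit(x=train_gen,
          validation_data=val_gen,
          steps_per_epoch=steps_per_epoch,
          validation_steps=validation_steps,
          epochs=epochs,
          callbacks=callbacks,
          verbose=True)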

2 Answers:

Answer 0 (score: 0)

I guess you are not happy with the training speed: ETA 3:30:04. Generally, a model needs to train for several epochs before the loss decreases significantly. But waiting 4 hours per epoch isn't cool, is it? There are a few things you can do:

  • Make sure you are training the model on a GPU, because the difference between training on a CPU and a GPU is insane (a quick check is sketched right after this list)
  • You can try simplifying the model
  • Or, if you want to use a complex model but don't have much time for training, use transfer learning
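A minimal sketch of verifying that TensorFlow actually sees a GPU (this uses the standard tf.config API, nothing specific to this setup):

import tensorflow as tf

# An empty list here means training is silently running on the CPU
gpus = tf.config.list_physical_devices("GPU")
print("GPUs available:", gpus)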

With transfer learning, you take a pretrained model, add your own layers on top, and retrain it. Here is an example:

from tensorflow.keras import Model
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import *

base_model = MobileNetV2(
    include_top=False, 
    input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)
)
base_model.trainable = False

layer = Dense(256, activation='relu')(base_model.output)
layer = BatchNormalization()(layer)
out = Dense(61, activation='softmax')(layer)

model = Model(inputs=base_model.input, outputs=out)
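A minimal sketch of then compiling and training this model, reusing the generators from the question (the loss and metric choices just mirror the question's compile call):

# Only the new head is trained; the frozen MobileNetV2 base
# is skipped by the optimizer, making each epoch much cheaper.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=epochs)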

Answer 1 (score: -1)

I agree with Yoshutik's answer. A model like yours really should be trained on a GPU. Since your dataset contains 3-channel images of size 224 × 224, the amount of computation is very large; you could reduce the number of channels to 1 (a sketch of this is at the end of this answer). As for the model's structure, you could eliminate at least 3 convolutional layers by considering the input-output relationships. One more thing: you can tune the optimizer's hyperparameters instead of using the defaults. Here is the code:

model.compile(optimizer=tf.keras.optimizers.Adam(0.0076),
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['mse'])

Actually, I am not sure about the accuracy metric as TensorFlow computes it here, so you should change the metric to something else.
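As for reducing the channels to 1, a minimal sketch using tf.image.rgb_to_grayscale (the (image, mask) pair signature is an assumption about what the generators yield):

import tensorflow as tf

# Collapse the 3 RGB channels into 1 luminance channel, cutting the
# first convolution's input by a factor of 3. Remember to also set
# IMG_CHANNELS = 1 in the model's input shape.
def to_grayscale(image, mask):
    return tf.image.rgb_to_grayscale(image), mask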