Deep CNN cannot learn, accuracy just stays flat

Time: 2020-03-09 21:01:45

Tags: opencv tensorflow keras computer-vision resnet

I have a deep CNN based on ResNet and a dataset of shape (10000, 50, 50, 1) for classifying digits. When I run it and training starts, the accuracy just stops at a certain value and fluctuates slightly around it (about 0.2). I would like to know whether it is overfitting or whether some other problem is involved?

Here is the identity block:

from tensorflow.keras import initializers
from tensorflow.keras.layers import (Activation, Add, AveragePooling2D,
                                     BatchNormalization, Conv2D, Dense, Flatten,
                                     Input, MaxPooling2D, ZeroPadding2D)
from tensorflow.keras.models import Model


def identity_block(X, f, filters, stage, block):
    # defining name basics
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # retrieve filters
    F1, F2, F3 = filters

    # save the shortcut
    X_shortcut = X

    # first component: 1x1 conv
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a',
               kernel_initializer=initializers.glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # second component: fxf conv
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b',
               kernel_initializer=initializers.glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # third component: 1x1 conv (no activation before the add)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c',
               kernel_initializer=initializers.glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # final component: add the shortcut back, then activate
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X

And the convolutional block:

def conv_block(X, f, filters, stage, block, s=2):
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # retrieve filters
    F1, F2, F3 = filters

    # save the shortcut
    X_shortcut = X

    # first component: 1x1 conv, strided for downsampling
    X = Conv2D(F1, kernel_size=(1, 1), strides=(s, s), name=conv_name_base + '2a',
               kernel_initializer=initializers.glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # second component: fxf conv
    X = Conv2D(F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b',
               kernel_initializer=initializers.glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # third component: 1x1 conv
    X = Conv2D(F3, kernel_size=(1, 1), strides=(1, 1), name=conv_name_base + '2c', padding='valid',
               kernel_initializer=initializers.glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # projection shortcut: 1x1 strided conv so shapes match for the add
    X_shortcut = Conv2D(F3, kernel_size=(1, 1), strides=(s, s), name=conv_name_base + '1',
                        kernel_initializer=initializers.glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    # finally: add the shortcut back, then activate
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X

And finally the ResNet itself:

def ResNet(input_shape=(50, 50, 1), classes=10):
    inp = Input(shape=input_shape)

    # zero padding
    X = ZeroPadding2D((3, 3), name='pad0')(inp)

    # stage 1
    X = Conv2D(32, (5, 5), name='conv1',
               kernel_initializer=initializers.glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='pool1')(X)

    # stage 2
    stage2_filtersize = 32
    X = conv_block(X, 3, filters=[stage2_filtersize, stage2_filtersize, stage2_filtersize], stage=2, block='a', s=1)
    X = identity_block(X, 3, [stage2_filtersize, stage2_filtersize, stage2_filtersize], stage=2, block='b')
    X = identity_block(X, 3, [stage2_filtersize, stage2_filtersize, stage2_filtersize], stage=2, block='c')

    # stage 3
    stage3_filtersize = 64
    X = conv_block(X, 3, filters=[stage3_filtersize, stage3_filtersize, stage3_filtersize], stage=3, block='a', s=1)
    X = identity_block(X, 3, [stage3_filtersize, stage3_filtersize, stage3_filtersize], stage=3, block='b')
    X = identity_block(X, 3, [stage3_filtersize, stage3_filtersize, stage3_filtersize], stage=3, block='c')

    # stage 4
    stage4_filtersize = 128
    X = conv_block(X, 3, filters=[stage4_filtersize, stage4_filtersize, stage4_filtersize], stage=4, block='a', s=1)
    X = identity_block(X, 3, [stage4_filtersize, stage4_filtersize, stage4_filtersize], stage=4, block='b')
    X = identity_block(X, 3, [stage4_filtersize, stage4_filtersize, stage4_filtersize], stage=4, block='c')

    # final pooling
    X = AveragePooling2D((2, 2), padding='same', name='Pool0')(X)

    # fully connected classifier head
    X = Flatten(name='D0')(X)
    X = Dense(classes, activation='softmax', kernel_initializer=initializers.glorot_uniform(seed=0), name='D2')(X)

    # create model
    model = Model(inputs=inp, outputs=X)

    return model

Update 1: Here are the compile and fit calls:

import tensorflow
from tensorflow.keras.callbacks import EarlyStopping

model.compile(optimizer='adam',
              loss=tensorflow.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

print("model compiled settings imported successfully")
early_stopping = EarlyStopping(monitor='val_loss', patience=2)

model.fit(X_train, Y_train, validation_split=0.2, callbacks=[early_stopping], epochs=10)

test_loss, test_acc = model.evaluate(X_test, Y_test, verbose=2)

2 Answers:

Answer 0 (score: 1)

First, try normalizing the values of your digit images (50x50).

Then also consider how a neural network learns its weights. A convolutional neural network learns by repeatedly adding the gradient error vector, computed by backpropagation and multiplied by a learning rate, to the weight matrices throughout the network as training examples are passed through it.
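
In symbols, each training step applies the usual gradient-descent update (a schematic line for illustration, not code from the question):

    w = w - learning_rate * dL_dw   # applied to every weight matrix in the network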

The most important part to consider is that multiplication by the learning rate: if we do not scale the training inputs, the ranges of the feature values will likely differ from feature to feature, so the learning rate causes corrections in each dimension that differ from one another. Since this is effectively random, the machine may overcompensate the correction in one weight dimension while undercompensating in another. This is very undesirable, because it can put the network into an oscillating state or a very slow training state.

Oscillating means the model is unable to home in on the center of a better maximum in the weights.
Slow training means moving too slowly to reach a better maximum.

That is why it is common practice to normalize images before using them as input to a neural network, or to any model that is gradient-based.
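
As a minimal sketch of that advice, assuming X_train / X_test hold raw 8-bit grayscale pixels in [0, 255] with shape (N, 50, 50, 1):

import numpy as np

X_train = X_train.astype(np.float32) / 255.0   # scale pixels to [0, 1]
X_test = X_test.astype(np.float32) / 255.0

# Or standardize instead, using statistics from the training set only:
# mean, std = X_train.mean(), X_train.std()
# X_train = (X_train - mean) / (std + 1e-7)
# X_test = (X_test - mean) / (std + 1e-7)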

Answer 1 (score: 1)

  • TF_Support's answer:

Providing a small sample of the dataset, the loss curves, and the accuracy plots would let us see clearly what you are trying to learn, and that matters more than the code you posted.
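
For example, a minimal sketch of how such plots could be produced from the History object that model.fit returns (matplotlib is an assumed extra dependency):

import matplotlib.pyplot as plt

history = model.fit(X_train, Y_train, validation_split=0.2,
                    callbacks=[early_stopping], epochs=10)

# Compare training and validation curves to judge over-/underfitting.
for metric in ('loss', 'accuracy'):
    plt.figure()
    plt.plot(history.history[metric], label='train ' + metric)
    plt.plot(history.history['val_' + metric], label='val ' + metric)
    plt.xlabel('epoch')
    plt.legend()
plt.show()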

I suspect you are trying to learn very difficult samples; 50x50 grayscale is not much to work with. Is your network overfitting? (We could only conclude that by looking at some validation-metric plots.) (Does 0.2 refer to your training accuracy?)

First, sanity-check the dataset by training a very simple CNN. I see you have 10 classes (not certain, just guessing from the function's default value), so random accuracy would be 10%; set a baseline with a simple CNN first, then try to improve on it with the ResNet, as in the sketch below.
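
A minimal baseline along those lines might look like this (the layer sizes are illustrative assumptions, not tuned values):

from tensorflow.keras import layers, models

# A deliberately small CNN: if even this cannot beat the 10% random
# baseline, the problem is more likely in the data or labels than in
# the ResNet architecture.
baseline = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(50, 50, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
baseline.compile(optimizer='adam',
                 loss='sparse_categorical_crossentropy',
                 metrics=['accuracy'])
baseline.fit(X_train, Y_train, validation_split=0.2, epochs=5)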

Increase the learning rate and see how the accuracy fluctuates. After a few epochs, if the accuracy is better than the baseline, decrease the learning rate.
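
One way to run that experiment in this setup (the 1e-2 starting rate and the ReduceLROnPlateau settings below are illustrative assumptions):

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Start with a deliberately high learning rate to see how the accuracy
# moves, then let the callback shrink it once validation loss plateaus.
model.compile(optimizer=Adam(learning_rate=1e-2),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2)
model.fit(X_train, Y_train, validation_split=0.2,
          callbacks=[reduce_lr], epochs=10)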