Custom TF-Keras layer performs worse than the built-in layer

Time: 2019-02-14 14:04:59

Tags: python-3.x tensorflow keras

I am writing a CNN with TensorFlow and the TF-Keras API that uses a custom layer. This custom layer performs a 2D convolution using additional mask information, but the results are disappointing (roughly 50% accuracy after 150 epochs).
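
For context, a mask-aware convolution of this kind could in principle be written as a subclass of Conv2D along the lines of the sketch below. This is not my actual layer, just an illustration of the idea, and it assumes TF 2.x-style tf.keras; the masking scheme (zeroing masked pixels before the ordinary convolution) is likewise only one possible choice.

import tensorflow as tf
from tensorflow.keras.layers import Conv2D

class MaskedConv2D(Conv2D):
    # Hypothetical sketch only: zero out masked pixels, then run the
    # ordinary Conv2D forward pass on the result.
    def build(self, input_shape):
        image_shape, _ = input_shape          # expects [image, mask]
        super(MaskedConv2D, self).build(image_shape)

    def call(self, inputs):
        image, mask = inputs
        mask = tf.cast(mask, image.dtype)[..., tf.newaxis]  # (N,H,W) -> (N,H,W,1)
        return super(MaskedConv2D, self).call(image * mask)

# usage: out = MaskedConv2D(32, (3,3), padding='same')([image_input, mask_input])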

To test my layer, I stripped it down so that it performs a plain 2D convolution and built an equivalent network with the built-in keras.layers.Conv2D. With my custom layer I still only reach ~50% accuracy, whereas with the built-in layer I reach ~97%. To test it further, I created a custom layer that simply forwards its arguments to the built-in Conv2D, but it still only trains to around 50% accuracy.

Model with the built-in layers:

# (imports assumed from tf.keras)
from tensorflow.keras import regularizers
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     MaxPool2D, Dropout, Flatten, Dense)
from tensorflow.keras.models import Model

image_input = Input(shape=(32,32,3,),dtype='float32',
    name='Image_Input')
mask_input = Input(shape=(32,32,),dtype='int32',name='Mask_Input')
weight_decay = 1e-4
conv1 = Conv2D(32, kernel_size=(3,3,),padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(image_input)
bn1 = BatchNormalization()(conv1)
conv2 = Conv2D(32, kernel_size=(3,3,),padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(bn1)
bn2 = BatchNormalization()(conv2)
ml1 = MaxPool2D(pool_size=(2,2))(bn2)
do1 = Dropout(0.2)(ml1)

conv3 = Conv2D(64, kernel_size=(3,3,),padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(do1)
bn3 = BatchNormalization()(conv3)
conv4 = Conv2D(64, kernel_size=(3,3,),padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(bn3)
bn4 = BatchNormalization()(conv4)
mp2 = MaxPool2D(pool_size=(2,2))(bn4)
do2 = Dropout(0.3)(mp2)

conv5 = Conv2D(128, kernel_size=(3,3,),padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(do2)
bn5 = BatchNormalization()(conv5)
conv6 = Conv2D(128, kernel_size=(3,3,),padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(bn5)
bn6 = BatchNormalization()(conv6)
mp3 = MaxPool2D(pool_size=(2,2))(bn6)
do3 = Dropout(0.4)(mp3)

flat = Flatten()(do3)
output = Dense(10,activation='softmax')(flat)

model = Model(inputs=[image_input,mask_input],outputs=[output])
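
The compile/fit code is not shown above; for completeness, a minimal training setup would look roughly like this (assuming CIFAR-10, i.e. 32x32x3 images with 10 classes, and a standard Adam / categorical-crossentropy configuration, which may differ from my actual one):

import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# The mask input is not used by this model, so dummy all-ones masks suffice.
mask_train = np.ones(x_train.shape[:3], dtype='int32')
mask_test = np.ones(x_test.shape[:3], dtype='int32')

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit([x_train, mask_train], y_train,
          validation_data=([x_test, mask_test], y_test),
          batch_size=64, epochs=150)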

Model with the custom layer:

image_input = Input(shape=(32,32,3,),dtype='float32',
    name='Image_Input')
mask_input = Input(shape=(32,32,), dtype='int32',name='Mask_Input')
weight_decay = 1e-4

ml1 = ML2(32,
    (3,3,),
    padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(image_input)
bn1 = BatchNormalization()(ml1)
ml2 = ML2(32,(3,3,),
    padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(bn1)
bn2 = BatchNormalization()(ml2)
mp1 = MaxPool2D(pool_size=(2,2))(bn2)
do1 = Dropout(0.2)(mp1)

ml3 = ML2(64,(3,3,),
    padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(do1)
bn3 = BatchNormalization()(ml3)
ml4 = ML2(64,(3,3,),
    padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(bn3)
bn4 = BatchNormalization()(ml4)

mp2 = MaxPool2D(pool_size=(2,2))(bn4)
do2 = Dropout(0.3)(mp2)

ml5 = ML2(128,(3,3,),
    padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(do2)
bn5 = BatchNormalization()(ml5)   
ml6 = ML2(128,(3,3,),
    padding='same',
    kernel_regularizer=regularizers.l2(weight_decay),
    activation='elu')(bn5)
bn6 = BatchNormalization()(ml6)
mp3 = MaxPool2D(pool_size=(2,2))(bn6)
do3 = Dropout(0.4)(mp3)

flat = Flatten()(do3)
output = Dense(10, activation='relu')(flat)
model = Model(inputs=[image_input,mask_input], outputs=[output])

The custom layer:

class ML2(Conv2D):
    """Pass-through test layer: accepts the same arguments as Conv2D and
    forwards all of them unchanged to the built-in implementation."""
    def __init__(self,
        filters,
        kernel_size,
        strides=1,
        padding='valid',
        data_format=None,
        activation=None,
        dilation_rate=1,
        use_bias=True,
        kernel_initializer='glorot_uniform',
        bias_initializer='zeros',
        kernel_regularizer=None,
        bias_regularizer=None,
        activity_regularizer=None,
        kernel_constraint=None,
        bias_constraint=None,
        trainable=True,
        name=None,
        **kwargs):

        super(ML2,self).__init__(
            filters,
            kernel_size,
            strides=strides,
            padding=padding,
            data_format=data_format,
            activation=activation,
            dilation_rate=dilation_rate,
            use_bias=use_bias,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            trainable=trainable,
            name=name,
            **kwargs)
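
A quick sanity check (a sketch assuming TF 2.x eager execution): since ML2 only forwards its arguments, a single ML2 layer with copied weights should reproduce the output of a plain Conv2D exactly on the same input, which would confirm that the layer itself behaves identically.

import numpy as np
from tensorflow.keras.layers import Conv2D

x = np.random.rand(4, 32, 32, 3).astype('float32')

ref = Conv2D(32, (3,3), padding='same', activation='elu')
custom = ML2(32, (3,3), padding='same', activation='elu')

ref(x)                                  # first call builds each layer
custom(x)
custom.set_weights(ref.get_weights())   # copy kernel and bias across

print(np.allclose(ref(x), custom(x)))   # should print True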

As far as I can tell, both models should do the same thing. Even allowing for small differences due to randomness, the accuracy of the two models should be comparable, yet with the built-in layer it is almost twice as high. The gap is consistent across multiple runs and different implementations of the custom layer.

After checking my custom-layer setup against online resources, I could not find an implementation error, nor anything suggesting that custom layers perform worse than built-in ones.

I am at a loss here and would be glad for any ideas. Thanks.

0 Answers:

There are no answers yet.