Siamese model learns nothing and always encodes images to a zero vector

Date: 2019-06-06 13:17:59

Tags: python machine-learning keras deep-learning similarity

I am trying to train a Siamese model to predict whether the word written in two images is the same. In addition to this, the model should be able to distinguish between two people's handwriting. The problem is similar to a signature verification problem.

My base network is as follows:

from keras.models import Sequential
from keras.layers import (Conv2D, BatchNormalization, ZeroPadding2D,
                          MaxPooling2D, Dropout, Flatten, Dense)
from keras.regularizers import l2

def create_base_network_signet(input_shape):
    '''Base Siamese network'''

    seq = Sequential()
    seq.add(Conv2D(96, kernel_size=(7, 7), strides=2, input_shape=input_shape, activation='relu'))
    seq.add(BatchNormalization())
    seq.add(ZeroPadding2D(padding=(2, 2)))

    seq.add(Conv2D(96, kernel_size=(7, 7), strides=1, activation='relu'))
    seq.add(BatchNormalization())
    seq.add(MaxPooling2D(pool_size=(3, 3), strides=2))
    seq.add(ZeroPadding2D(padding=(1, 1)))

    seq.add(Conv2D(128, kernel_size=(5, 5), strides=1, activation='relu'))
    seq.add(Conv2D(128, kernel_size=(5, 5), strides=1, activation='relu'))
    seq.add(MaxPooling2D(pool_size=(3, 3), strides=2))
    seq.add(Dropout(0.3))
    seq.add(ZeroPadding2D(padding=(1, 1)))

    seq.add(Conv2D(384, kernel_size=(3, 3), strides=1, activation='relu'))
    seq.add(Conv2D(256, kernel_size=(3, 3), strides=1, activation='relu'))
    seq.add(BatchNormalization())
    seq.add(MaxPooling2D(pool_size=(3, 3), strides=2))
    seq.add(Dropout(0.3))
    seq.add(ZeroPadding2D(padding=(1, 1)))

    seq.add(Conv2D(128, kernel_size=(2, 2), strides=1, activation='relu'))
    seq.add(Dropout(0.3))

    seq.add(Flatten(name='flatten'))
    # Keras 2 argument names (W_regularizer/init are the deprecated Keras 1 spellings)
    seq.add(Dense(1024, kernel_regularizer=l2(0.0005), activation='relu', kernel_initializer='glorot_uniform'))
    seq.add(Dropout(0.4))

    seq.add(Dense(128, kernel_regularizer=l2(0.0005), activation='relu', kernel_initializer='glorot_uniform'))  # softmax changed to relu

    return seq

The final model (used with contrastive loss):

from keras.models import Model
from keras.layers import Input, Lambda

base_network = create_base_network_signet(input_shape)
input_a = Input(shape=input_shape, name="first")
input_b = Input(shape=input_shape, name="second")

processed_a = base_network(input_a)
processed_b = base_network(input_b)

distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([processed_a, processed_b])

# Keras 2 argument names: inputs/outputs
model = Model(inputs=[input_a, input_b], outputs=distance)
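The helpers euclidean_distance and eucl_dist_output_shape are not defined in the question. For reference, a sketch assuming they follow the Keras mnist_siamese example (which is also where the contrastive loss mentioned below comes from):

from keras import backend as K

def euclidean_distance(vects):
    # Euclidean distance between the two embeddings, kept above epsilon for a stable sqrt
    x, y = vects
    sum_square = K.sum(K.square(x - y), axis=1, keepdims=True)
    return K.sqrt(K.maximum(sum_square, K.epsilon()))

def eucl_dist_output_shape(shapes):
    # the Lambda layer outputs one distance per pair
    shape1, shape2 = shapes
    return (shape1[0], 1)

def contrastive_loss(y_true, y_pred):
    # Hadsell et al. 2006: pull matching pairs together, push mismatches past the margin
    margin = 1
    square_pred = K.square(y_pred)
    margin_square = K.square(K.maximum(margin - y_pred, 0))
    return K.mean(y_true * square_pred + (1 - y_true) * margin_square)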

Besides this model, I have tried other, simpler models as the base network. I have also tried training models such as VGG16 and Inception as the base network. I ran into the same problem when training all of them: the model ends up learning to encode every input image as a zero vector.

I have tried training the model with both triplet loss and contrastive loss. Both end up with the same problem of predicting zeros. The contrastive loss function is taken from the Keras tutorial. The triplet loss is defined as:

import tensorflow as tf

def triplet_loss(y_true, y_pred, alpha=0.5):
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    # squared L2 distances between the anchor and the positive/negative samples
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # hinge: the positive should be at least alpha closer to the anchor than the negative
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))

    return loss

I also want to mention that when I trained the model with a binary_crossentropy loss, the model did start to learn encodings. However, after the accuracy reached about 82% it stopped improving, even though the loss kept decreasing.
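A minimal sketch of such a binary_crossentropy setup, assuming a sigmoid unit on top of the distance output from the model above (the head itself is an assumption; the question does not show the exact wiring):

# hypothetical sketch: turn the pairwise distance into a same/different prediction
prediction = Dense(1, activation='sigmoid')(distance)
clf_model = Model(inputs=[input_a, input_b], outputs=prediction)
clf_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])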

This is what the output encodings look like with both triplet loss and contrastive loss:

[image: output of the model]

My training data looks like this:

[image: training data example]

1 Answer:

Answer 0 (score: 1)

I ran into the same problem with one of my Siamese networks trained with triplet loss. For me, the trick was to remove the tf.reduce_sum() from the line loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0)). The relevant snippet of my triplet loss code is below.

from keras import backend as K

# distance between the anchor and the positive
pos_dist = K.sum(K.square(anchor - positive), axis=1)

# distance between the anchor and the negative
neg_dist = K.sum(K.square(anchor - negative), axis=1)

# compute the per-sample loss (no reduce_sum over the batch)
basic_loss = pos_dist - neg_dist + alpha
loss = K.maximum(basic_loss, 0.0)

Finally, when you compile the model, do it like this:

from keras.optimizers import Adam

model.compile(optimizer=Adam(), loss=triplet_loss)

I believe Keras takes care of the reduce_sum() part of the loss itself during training when triplet_loss is specified as the loss, so the loss function should return one value per sample rather than a single summed scalar.
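Putting that change back into the question's triplet_loss, the corrected function would look roughly like this (a sketch that keeps the question's anchor/positive/negative unpacking):

import tensorflow as tf

def triplet_loss(y_true, y_pred, alpha=0.5):
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    # squared L2 distances, one value per sample
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    # return the per-sample hinge loss; Keras averages it over the batch
    return tf.maximum(pos_dist - neg_dist + alpha, 0.0)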

Give it a try and see if it helps.