Siamese network not training

Time: 2018-05-11 17:29:31

Tags: python-3.x tensorflow keras

I have implemented a Siamese network based on the Keras example. My code is as follows:

def contrastive_loss(y_true, y_pred):
    '''Contrastive loss from Hadsell-et-al.'06
    http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
    '''
    margin = 1
    return K.mean(y_true * K.square(y_pred) + (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))
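As a sanity check, the loss above can be reproduced in plain NumPy (a minimal sketch; the distances and labels below are made-up numbers, not from the asker's data): for a similar pair (y_true = 1) the loss is the squared distance, and for a dissimilar pair (y_true = 0) it is the squared hinge max(margin - d, 0)², so the loss only goes to zero when similar pairs are at distance 0 and dissimilar pairs are pushed beyond the margin.

```python
import numpy as np

def contrastive_loss_np(y_true, y_pred, margin=1.0):
    # Same formula as above: y * d^2 + (1 - y) * max(margin - d, 0)^2, averaged
    return np.mean(y_true * np.square(y_pred)
                   + (1 - y_true) * np.square(np.maximum(margin - y_pred, 0)))

# Made-up distances for illustration:
print(contrastive_loss_np(np.array([1.0]), np.array([0.2])))  # 0.04 -> similar pair close, small loss
print(contrastive_loss_np(np.array([0.0]), np.array([0.2])))  # 0.64 -> dissimilar pair too close, penalized
print(contrastive_loss_np(np.array([0.0]), np.array([1.5])))  # 0.0  -> dissimilar pair beyond margin, no loss
```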

def create_base_network(input_dim):
    '''Base network to be shared (eq. to feature extraction).
    '''
    seq = Sequential()
    seq.add(Dense(128, input_shape=(input_dim,), activation='relu'))
    seq.add(Dropout(0.1))
    seq.add(Dense(128, activation='relu'))
    seq.add(Dropout(0.1))
    seq.add(Dense(128, activation='relu'))
    return seq

def euclidean_distance(vects):  # replace this with the code from tensorflow
    x, y = vects
    return K.sqrt(K.sum(K.square(x - y), axis=1, keepdims=True))

def eucl_dist_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0], 2)

============================= main part =============================

input_dim = 9216
nb_epoch = 3
# network definition
base_network = create_base_network(input_dim)

input_a = Input(shape=(input_dim,))
input_b = Input(shape=(input_dim,))

# because we re-use the same instance `base_network`,
# the weights of the network
# will be shared across the two branches
processed_a = base_network(input_a)
processed_b = base_network(input_b)

distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([processed_a, processed_b])

model = Model(inputs=[input_a, input_b], outputs=distance)

# train

model.compile(loss=contrastive_loss, optimizer='RMSprop', metrics=['accuracy'])
model.fit([tr_pair1_reshaped, tr_pair2_reshaped], y_train_categorical, epochs=nb_epoch, batch_size=64, verbose=1)

=====================================================================

The results I am getting are as follows:
Epoch 1/3
3000/3000 [==============================] - 1s 368us/step - loss: 3.8701 - acc: 0.5000
Epoch 2/3
3000/3000 [==============================] - 1s 169us/step - loss: 0.5310 - acc: 0.5000
Epoch 3/3
3000/3000 [==============================] - 1s 167us/step - loss: 0.4727 - acc: 0.5000

So the goal here is image matching, i.e. binary classification. 50% accuracy here probably means no learning at all. I used to_categorical for the match/no-match labels. I have tried both the contrastive_loss and categorical_crossentropy loss functions, but the results stay the same; the "adam" and "rmsprop" optimizers make no difference either. The total number of training samples is about 40k. I have also tried different batch sizes, with no difference. So where should I dig for the root of the problem? Does anyone have any hints for me? I would be very grateful. :)

1 answer:

Answer 0: (score: 0)

Your network outputs a Euclidean distance, which is a continuous variable, while your labels are discrete: (as I understand it) 1 for a similar pair and 0 for a dissimilar pair. So after computing the distance you should add a final sigmoid layer with 1 unit. Your model should therefore be:

distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([processed_a, processed_b])

output = Dense(1,activation='sigmoid')(distance)
model = Model(inputs=[input_a, input_b], outputs=output)

In this model, the sigmoid over the distance gives the probability that a pair of images is the same.
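The idea behind this fix can be illustrated without Keras (a minimal NumPy sketch; the weight w, bias b, and distances below are made-up values, not trained parameters): a sigmoid maps the unbounded distance to a probability in (0, 1), which can then be compared against plain binary 0/1 labels with binary cross-entropy, instead of the two-column to_categorical labels used in the question.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up parameters standing in for the trained Dense(1, activation='sigmoid') layer:
# a negative weight so a larger distance gives a lower probability of "same".
w, b = -4.0, 2.0

distances = np.array([0.1, 0.3, 1.5, 2.0])  # made-up Euclidean distances for two
labels = np.array([1, 1, 0, 0])             # matching and two non-matching pairs

probs = sigmoid(w * distances + b)          # probability that each pair matches
bce = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

print(np.round(probs, 3))  # small distances -> p near 1, large distances -> p near 0
```

With binary labels and a sigmoid output like this, `binary_crossentropy` is the natural loss to compile the model with, and accuracy becomes meaningful.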