My question is about how to use the Manhattan distance in Keras. I am working on a text classification project that relies on a sentence-similarity metric, so I decided to use the Manhattan distance for the similarity computation. The helper functions for the loss look like this:
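For reference, the two helpers below compute the raw Manhattan distance d(left, right) = Σ_i |left_i − right_i| and the MaLSTM similarity exp(−d), which maps any non-negative distance into the range (0, 1].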
from keras import backend as K
from keras.layers import Input, Embedding, LSTM, Lambda
from keras.models import Model
from keras.optimizers import Adadelta

def exponent_neg_manhattan_distance(left, right):
    ''' Similarity estimate of the LSTMs' outputs: exp(-d) squashes the Manhattan distance into (0, 1] '''
    return K.exp(-K.sum(K.abs(left - right), axis=1, keepdims=True))

def manhattan_distance(left, right):
    ''' Raw, unbounded Manhattan distance between the LSTMs' outputs '''
    return K.sum(K.abs(left - right), axis=1, keepdims=True)
# The visible layer
left_input = Input(shape=(max_seq_length,), dtype='int32')
right_input = Input(shape=(max_seq_length,), dtype='int32')
embedding_layer = Embedding(len(embeddings), embedding_dim, weights=[embeddings], input_length=max_seq_length, trainable=False)
# Embedded version of the inputs
encoded_left = embedding_layer(left_input)
encoded_right = embedding_layer(right_input)
# Since this is a siamese network, both sides share the same LSTM
shared_lstm = LSTM(n_hidden)
left_output = shared_lstm(encoded_left)
right_output = shared_lstm(encoded_right)
# Calculates the distance as defined by the MaLSTM model
malstm_distance = Lambda(function=lambda x: exponent_neg_manhattan_distance(x[0], x[1]),
                         output_shape=lambda x: (x[0][0], 1))([left_output, right_output])
# Pack it all up into a model
malstm = Model([left_input, right_input], [malstm_distance])
# Adadelta optimizer, with gradient clipping by norm
optimizer = Adadelta(clipnorm=gradient_clipping_norm)
malstm.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
malstm_trained = malstm.fit([X_train['left'], X_train['right']], Y_train,
                            batch_size=batch_size, epochs=n_epoch,
                            validation_data=([X_validation['left'], X_validation['right']], Y_validation),
                            callbacks=[checkpointer])
However, exponent_neg_manhattan_distance() does not actually perform well. I searched around and found that the original version of the Manhattan distance is the raw one, written as manhattan_distance() above — see the sketch below.
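For clarity, the swap only changes the helper passed to the Lambda layer (a minimal sketch; everything else stays exactly as above):

# Same architecture, but scoring with the raw Manhattan distance instead of exp(-d)
malstm_distance = Lambda(function=lambda x: manhattan_distance(x[0], x[1]),
                         output_shape=lambda x: (x[0][0], 1))([left_output, right_output])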
With that raw version, the accuracy of my model is excellent. So far I cannot tell which version I should use, nor how to explain why exp() ruins the model's predictions. Please help me figure this out.
Answer 0 (score: 0)
Well, I may have found the answer myself. exp() acts as a normalization function: it is common to normalize a distance into (0, 1], and normalization can improve convergence, but it is not mandatory. In my case, the reasons why the normalization changes the accuracy are probably complicated. I extended the training time, and the results should end up the same. However, the normalization introduces accuracy errors because of floating-point precision loss in the data.
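To illustrate the floating-point side of this (my own toy numbers, not values from the model): exp(-d) squashes all but the smallest distances toward zero, and in float32 it underflows to exactly 0 once d exceeds roughly 103, so pairs with very different raw distances collapse to nearly identical similarities:

import numpy as np

# exp(-d) compresses moderate and large distances into a vanishingly
# small range near 0; in float32 the result underflows to exactly 0.0
# for d beyond ~103, erasing distinctions the raw distance preserves.
for d in [0.5, 2.0, 10.0, 50.0, 120.0]:
    print(d, np.exp(np.float32(-d)))
# 0.5   -> ~0.61
# 2.0   -> ~0.135
# 10.0  -> ~4.5e-05
# 50.0  -> ~1.9e-22
# 120.0 -> 0.0 (float32 underflow)

Whether this alone explains the accuracy gap is speculation on my part, but it shows why the exp-normalized score is far more sensitive to precision than the raw distance.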