I created a text classification model using an LSTM with an attention layer. The model trains well and its results are good, but I am unable to display the attention weights and the importance/attention of each word in a review (the input text). The code used for this model is:
from keras import backend as K
from keras import initializers, regularizers, constraints
from keras.engine.topology import Layer  # Keras 2.0.x import path for custom layers

def dot_product(x, kernel):
    # K.dot behaves differently on the TensorFlow backend, hence the expand_dims/squeeze.
    if K.backend() == 'tensorflow':
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    else:
        return K.dot(x, kernel)
class AttentionWithContext(Layer):
    """
    Attention operation, with a context/query vector, for temporal data.
    "Hierarchical Attention Networks for Document Classification"
    by using a context vector to assist the attention
    # Input shape
        3D tensor with shape: (samples, steps, features).
    # Output shape
        2D tensor with shape: (samples, features).
    How to use:
    Just put it on top of an RNN Layer (GRU/LSTM/SimpleRNN) with return_sequences=True.
    The dimensions are inferred based on the output shape of the RNN.
    Note: The layer has been tested with Keras 2.0.6
    Example:
        model.add(LSTM(64, return_sequences=True))
        model.add(AttentionWithContext())
        # next add a Dense layer (for classification/regression) or whatever
    """

    def __init__(self,
                 W_regularizer=None, u_regularizer=None, b_regularizer=None,
                 W_constraint=None, u_constraint=None, b_constraint=None,
                 bias=True, **kwargs):
        self.supports_masking = True
        self.init = initializers.get('glorot_uniform')

        self.W_regularizer = regularizers.get(W_regularizer)
        self.u_regularizer = regularizers.get(u_regularizer)
        self.b_regularizer = regularizers.get(b_regularizer)

        self.W_constraint = constraints.get(W_constraint)
        self.u_constraint = constraints.get(u_constraint)
        self.b_constraint = constraints.get(b_constraint)

        self.bias = bias
        super(AttentionWithContext, self).__init__(**kwargs)

    def build(self, input_shape):
        assert len(input_shape) == 3

        self.W = self.add_weight((input_shape[-1], input_shape[-1],),
                                 initializer=self.init,
                                 name='{}_W'.format(self.name),
                                 regularizer=self.W_regularizer,
                                 constraint=self.W_constraint)
        if self.bias:
            self.b = self.add_weight((input_shape[-1],),
                                     initializer='zero',
                                     name='{}_b'.format(self.name),
                                     regularizer=self.b_regularizer,
                                     constraint=self.b_constraint)

        self.u = self.add_weight((input_shape[-1],),
                                 initializer=self.init,
                                 name='{}_u'.format(self.name),
                                 regularizer=self.u_regularizer,
                                 constraint=self.u_constraint)

        super(AttentionWithContext, self).build(input_shape)

    def compute_mask(self, input, input_mask=None):
        # do not pass the mask to the next layers
        return None

    def call(self, x, mask=None):
        uit = dot_product(x, self.W)

        if self.bias:
            uit += self.b

        uit = K.tanh(uit)
        ait = dot_product(uit, self.u)

        a = K.exp(ait)

        # apply mask after the exp. will be re-normalized next
        if mask is not None:
            # Cast the mask to floatX to avoid float64 upcasting in theano
            a *= K.cast(mask, K.floatx())

        # in some cases especially in the early stages of training the sum may be almost zero
        # and this results in NaN's. A workaround is to add a very small positive number ε to the sum.
        # a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
        a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())

        a = K.expand_dims(a)
        weighted_input = x * a
        return K.sum(weighted_input, axis=1)

    def compute_output_shape(self, input_shape):
        return input_shape[0], input_shape[-1]
EMBEDDING_DIM = 100
max_seq_len = 118
batch_size = 256
num_epochs = 50

from keras.models import Model, Sequential
from keras.layers import Dense, Embedding, Input, Activation, TimeDistributed
from keras.layers import LSTM, Bidirectional, Dropout

def BidLstm():
    #inp = Input(shape=(118, 100))
    #x = Embedding(max_features, embed_size, weights=[embedding_matrix],
    #              trainable=False)(inp)
    model1 = Sequential()
    model1.add(Dense(512, input_shape=(118, 100)))
    model1.add(Activation('relu'))
    #model1.add(Flatten())
    #model1.add(BatchNormalization(input_shape=(100,)))
    model1.add(Bidirectional(LSTM(100, activation="relu", return_sequences=True)))
    model1.add(Dropout(0.1))
    model1.add(TimeDistributed(Dense(200)))
    model1.add(AttentionWithContext())
    model1.add(Dropout(0.25))
    model1.add(Dense(4, activation="softmax"))
    model1.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
                   metrics=['accuracy'])
    model1.summary()
    return model1
Answer 0 (score: 1)
You can use the get_weights() method of your custom layer to get a list of all its weights. You can find more information here.

You need to make the following modification to your code during model creation:

model1.add(TimeDistributed(Dense(200)))
atn = AttentionWithContext()
model1.add(atn)

Then, after training, just use:

atn.get_weights()[index]

to extract the weight matrix W as a numpy array (I think index should be set to 0, but you will have to try this out yourself). You can then use pyplot's matshow / imshow to display the matrix.
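A minimal sketch of that visualization step, assuming atn is the layer instance added above and that index 0 of its weight list holds the W matrix:

import matplotlib.pyplot as plt

weights = atn.get_weights()        # list of numpy arrays; [W, b, u] when bias=True
W = weights[0]                     # assumed index for the (features x features) matrix W
plt.matshow(W, cmap='viridis')     # plt.imshow(W) works as well
plt.colorbar()
plt.title('AttentionWithContext W')
plt.show()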
Answer 1 (score: 1)
Please take a look at the github repository here: https://github.com/FlorisHoogenboom/keras-han-for-docla

First, define the weight computation in the attention layer explicitly; second, extract the output of the preceding layer and the attention layer's weights, and then multiply them to obtain the per-word attention weights.
Answer 2 (score: 0)
After reading the comprehensive answers above, I finally understood how to extract the weights of the attention layer. Overall, the ideas of @Li Xiang and @Okorimi Manoury are both correct. @Okorimi Manoury's code segment comes from the following link: Textual attention visualization.

Let me explain the procedure step by step:
(1). You should have a trained model; you need to load the model and extract the attention layer's weights. To extract the weights of a particular layer, you can use model.summary() to check the model architecture. Then, you can use:
layer_weights = model.layers[3].get_weights() #suppose your attention layer is the third layer
layer_weights is a list. For example, for the word-level attention of HAN attention, the list layer_weights has three elements: W, b, and u. In other words, layer_weights[0] = W, layer_weights[1] = b, and layer_weights[2] = u.
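A quick optional sanity check (a sketch; the names below simply mirror the list order assumed above):

for name, arr in zip(['W', 'b', 'u'], layer_weights):
    print(name, arr.shape)   # confirm which element is W, b and u from their shapes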
(2). You also need the output of the layer that comes before the attention layer. In this example, we need to get the output of the second layer. You can use the following code to do it:
new_model = Model(inputs=model.input, outputs=model.layers[2].output)
output_before_att = new_model.predict(x_test_sample) #extract layer output
(3). Check the details: suppose your input is a text segment of 100 words with dimension 300 (the input is [100, 300]), and after the second layer the dimension is 128. Then the shape of output_before_att is [100, 128]. Correspondingly, layer_weights[0] (W) is [128, 128], layer_weights[1] (b) is [1, 128], and layer_weights[2] (u) is [1, 128]. Then, we need the following code:
eij = np.tanh(np.dot(output_before_att, layer_weights[0]) + layer_weights[1]) #Eq.(5) in the paper
eij = np.dot(eij, layer_weights[2]) #Eq.(6)
eij = eij.reshape((eij.shape[0], eij.shape[1])) # reshape the vector
ai = np.exp(eij) #Eq.(6)
weights = ai / np.sum(ai) # Eq.(6)
weights is a list (100-dimensional); each element is the attention weight (importance) of one of the 100 input words. After that, you can visualize the attention weights.

Hope my explanation helps you.
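As one possible final step, a small sketch of that visualization, assuming weights is the 100-element array computed above and tokens is a hypothetical list holding the corresponding 100 input words in order:

import numpy as np
import matplotlib.pyplot as plt

w = np.ravel(weights)                                    # flatten to one score per word
# 'tokens' is hypothetical: the input words, aligned with the model input
for word, score in sorted(zip(tokens, w), key=lambda p: p[1], reverse=True)[:10]:
    print('{:<15s} {:.4f}'.format(word, float(score)))   # the ten most attended words

plt.figure(figsize=(12, 3))
plt.bar(range(len(w)), w)
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.ylabel('attention weight')
plt.tight_layout()
plt.show()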
Answer 3 (score: -1)
Thank you for your edit. Your solution returns the weights of the attention layer, but I am looking for the per-word weights.

I found another way to solve this problem:

1. Define a function to compute the attention weights:
import numpy as np

def cal_att_weights(output, att_w):
    #if model_name == 'HAN':
    eij = np.tanh(np.dot(output[0], att_w[0]) + att_w[1])
    eij = np.dot(eij, att_w[2])
    eij = eij.reshape((eij.shape[0], eij.shape[1]))
    ai = np.exp(eij)
    weights = ai / np.sum(ai)
    return weights
2. Get the output of the layer before the attention layer and the attention layer's weights for a test sequence:

from keras import backend as K
sent_before_att = K.function([model1.layers[0].input,K.learning_phase()], [model1.layers[2].output])
sent_att_w = model1.layers[5].get_weights()
test_seq=np.array(userinp)
test_seq=np.array(test_seq).reshape(1,118,100)
out = sent_before_att([test_seq, 0])
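A hedged completion of the snippet above: apply the function from step 1 to the extracted output and weights to obtain one attention score per input position (the token list words used for printing is hypothetical and must come from your own preprocessing of userinp):

att_weights = cal_att_weights(out, sent_att_w)        # expected shape (1, 118): one score per timestep
word_scores = att_weights[0]
for word, score in zip(words, word_scores):           # 'words' is a hypothetical list of input tokens
    print('{:<15s} {:.4f}'.format(word, float(score)))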