Multiplying a matrix by matrices of other shapes in the Keras backend

Time: 2017-04-04 07:40:28

Tags: deep-learning keras attention-model

I am trying to implement an attention model based on this model, but instead of having the model look at only one frame to decide that frame's attention weight, I want it to consider the whole sequence of frames. So what I am doing is multiplying each frame by a sequence vector, which is the output of an LSTM (return_sequences=False).

These are the modified functions:

def build(self, input_shape):
    # Attention context vector applied to the combined representation
    self.W = self.add_weight((input_shape[-1],),
                             initializer=self.init,
                             name='{}_W'.format(self.name))
    if self.lstm_size is None:
        self.lstm_size = input_shape[-1]
    # LSTM that summarizes the whole sequence into a single vector
    self.vec_lstm = LSTM(self.lstm_size, return_sequences=False)
    self.vec_lstm.build(input_shape)
    # LSTM that keeps a per-timestep output for every frame
    self.seq_lstm = LSTM(self.lstm_size, return_sequences=True)
    self.seq_lstm.build(input_shape)
    # Expose the inner layers' weights so they are trained with this layer
    self.trainable_weights = [self.W] + self.vec_lstm.trainable_weights + self.seq_lstm.trainable_weights
    super(Attention2, self).build(input_shape)  # Be sure to call this somewhere!

def call(self, x, mask=None):
    vec = self.vec_lstm(x)  # (batch, lstm_size): one summary vector per sequence
    seq = self.seq_lstm(x)  # (batch, timesteps, lstm_size): one vector per frame

    eij = # combine seq and vec somehow?

    # Score each timestep, then normalize with a softmax over the time axis
    eij = K.dot(eij, self.W)
    eij = K.tanh(eij)
    a = K.exp(eij)
    a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
    a = K.expand_dims(a)
    # Weighted sum of the input frames, using the attention weights
    weighted_input = x * a
    attention = K.sum(weighted_input, axis=1)
    return attention

The naive code to combine the two matrices is:

eij = np.zeros((batch_size, sequence_length, frame_size))
for i, one_seq in enumerate(seq):
    for j, timestep in enumerate(one_seq):
        # Scale every frame of sequence i by that sequence's summary vector
        eij[i, j] = timestep * vec[i]
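
For reference, the same double loop can be written as a single NumPy broadcasting operation (a minimal sketch, assuming `seq` has shape `(batch_size, sequence_length, frame_size)` and `vec` has shape `(batch_size, frame_size)`):

import numpy as np

# Insert a length-1 time axis into vec so it broadcasts across timesteps:
# (batch, 1, frame_size) * (batch, sequence_length, frame_size)
eij = seq * vec[:, None, :]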

I would appreciate help implementing this with the Keras backend.

Thanks!

1 Answer:

Answer 0 (score: 0)

This seems to give the result I want:

vec = vec_lstm(x)  # (batch, lstm_size)
seq = seq_lstm(x)  # (batch, timesteps, lstm_size)
# Tile the summary vector across the time axis so the shapes match
repeat_vec = K.repeat(vec, seq.shape[1])
eij = seq * repeat_vec
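
A quick way to sanity-check this against the naive loop from the question (a minimal sketch with made-up shapes, assuming the TensorFlow backend):

import numpy as np
from keras import backend as K

# Hypothetical shapes, just for the check
batch_size, sequence_length, frame_size = 2, 5, 3
seq_np = np.random.rand(batch_size, sequence_length, frame_size).astype('float32')
vec_np = np.random.rand(batch_size, frame_size).astype('float32')

# K.repeat turns a (batch, dim) tensor into (batch, n, dim)
eij = K.eval(K.variable(seq_np) * K.repeat(K.variable(vec_np), sequence_length))

# The naive double loop computes the same thing as this broadcast
assert np.allclose(eij, seq_np * vec_np[:, None, :])

Using K.repeat makes the two operands the same shape before multiplying, which avoids relying on backend-specific broadcasting behavior.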