I'm trying to apply soft attention to video sequence classification. Since there are plenty of implementations and examples for NLP, I tried to follow that pattern, but for video instead [1]. Basically it's an LSTM with an attention model in the middle.
[1] https://blog.heuritech.com/2016/01/20/attention-mechanism/
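In equations, what the layer below ends up computing per timestep t (same symbols as the comments in the code) is

$$m_t = \tanh(W_c\,c + W_y\,y_t), \qquad s_t = \operatorname{softmax}(m_t), \qquad \text{out} = \sum_t s_t \odot y_t,$$

where y_t is the LSTM output at step t, c is the hidden state used as context, and the softmax runs over the lstm_units dimension.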
Below is the code for my attention layer; I'm not sure whether it is implemented correctly.
def attention_layer(self, input, context):
    # input: a Python list of seq_length tensors, each [batch_size, lstm_units]
    #        (per-step LSTM outputs, e.g. from static_rnn or tf.unstack)
    # context: the LSTMStateTuple of the other LSTM, each element [batch_size, lstm_units]
    hidden_state, _ = context
    weights_y = tf.get_variable("att_weights_Y", [self.lstm_units, self.lstm_units],
                                initializer=tf.contrib.layers.xavier_initializer())
    weights_c = tf.get_variable("att_weights_c", [self.lstm_units, self.lstm_units],
                                initializer=tf.contrib.layers.xavier_initializer())
    z_ = []
    for feat in input:
        # Equation => M = tanh(Wc c + Wy y)
        Wcc = tf.matmul(hidden_state, weights_c)
        Wyy = tf.matmul(feat, weights_y)
        m = tf.add(Wcc, Wyy)
        m = tf.tanh(m, name='M_matrix')
        # Equation => s = softmax(M)
        s = tf.nn.softmax(m, name='softmax_att')
        # Weight this timestep's features by its attention scores
        z = tf.multiply(feat, s)
        z_.append(z)
    # Stack the weighted features over time and sum: [batch_size, lstm_units]
    out = tf.stack(z_, axis=1)
    out = tf.reduce_sum(out, 1)
    return out, s
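Roughly, I wire it up between the two LSTMs like this (a simplified sketch; frame_input, cell_1 and the scope names are illustrative, not my exact graph):

    # frame_input: [batch_size, seq_length, feature_dim] video features (illustrative name)
    cell_1 = tf.nn.rnn_cell.LSTMCell(self.lstm_units)
    with tf.variable_scope("lstm_1"):
        outputs_1, state_1 = tf.nn.dynamic_rnn(cell_1, frame_input, dtype=tf.float32)

    # attention_layer iterates over time, so pass a list of per-step tensors
    outputs_list = tf.unstack(outputs_1, axis=1)   # seq_length tensors of [batch_size, lstm_units]
    attended, att_scores = self.attention_layer(outputs_list, state_1)
    # `attended` ([batch_size, lstm_units]) is what then goes into the second LSTM / classifier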
So, adding this layer between my LSTMs (or at the start of my 2 LSTMs) makes training very slow. More specifically, it takes a long time when declaring the optimizer:
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
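For what it's worth, my understanding is that the Python-level loop unrolls into seq_length copies of the matmul/tanh/softmax ops, and minimize() then has to build gradients for every one of them, so graph construction time grows with the sequence length. A vectorized form of the same computation keeps the op count constant; a sketch, assuming the per-step features are stacked into a single [batch_size, seq_length, lstm_units] tensor:

    def attention_layer_vectorized(self, inputs, context):
        # inputs: [batch_size, seq_length, lstm_units] (all timesteps stacked into one tensor)
        hidden_state, _ = context
        weights_y = tf.get_variable("att_weights_Y", [self.lstm_units, self.lstm_units],
                                    initializer=tf.contrib.layers.xavier_initializer())
        weights_c = tf.get_variable("att_weights_c", [self.lstm_units, self.lstm_units],
                                    initializer=tf.contrib.layers.xavier_initializer())
        # Wc c computed once, broadcast over time: [batch_size, 1, lstm_units]
        wcc = tf.expand_dims(tf.matmul(hidden_state, weights_c), 1)
        # Wy y for all timesteps in one op: [batch_size, seq_length, lstm_units]
        wyy = tf.tensordot(inputs, weights_y, axes=[[2], [0]])
        m = tf.tanh(wcc + wyy, name='M_matrix')
        s = tf.nn.softmax(m, name='softmax_att')   # softmax over the last (units) axis, as before
        out = tf.reduce_sum(inputs * s, axis=1)    # [batch_size, lstm_units]
        return out, s                              # s now covers all timesteps: [batch, seq, units]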
My questions are: