...
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(size, return_sequences=True, dropout_W=0.2, dropout_U=0.2))
model.add(GlobalAveragePooling1D())
model.add(Dense(1))
model.add(Activation('sigmoid'))
....
I need to take the mean or max of the vectors across all timesteps in a sample after the LSTM layer, and feed that mean or max vector to a Dense layer in Keras. I thought TimeDistributedMerge could do this, but it has been deprecated. With return_sequences=True I can get the vectors for all timesteps in a sample after the LSTM layer. However, GlobalAveragePooling1D() is not compatible with masking: it considers all timesteps, whereas I only want the non-masked ones. I have seen posts recommending a Lambda layer, but those do not take masking into account either. Any help would be appreciated.
Answer 0 (score: 3)
To make the masked values in x equal to zero, you can do this:
from keras import backend as K
from keras.engine.topology import Layer

class MeanPool(Layer):
    def __init__(self, **kwargs):
        self.supports_masking = True
        super(MeanPool, self).__init__(**kwargs)

    def compute_mask(self, input, input_mask=None):
        # do not pass the mask to the next layers
        return None

    def call(self, x, mask=None):
        if mask is not None:
            # mask (batch, time)
            mask = K.cast(mask, K.floatx())
            # mask (batch, time, 'x') -- add a broadcastable axis (Theano-style)
            mask = mask.dimshuffle(0, 1, 'x')
            # make the masked values in x equal to zero
            x = x * mask
            # average over the non-masked timesteps only
            return K.sum(x, axis=1) / K.sum(mask, axis=1)
        # no mask: plain mean over time
        return K.mean(x, axis=1)

    def get_output_shape_for(self, input_shape):
        # remove temporal dimension
        return input_shape[0], input_shape[2]
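For context, here is a minimal usage sketch (not part of the original answer) of how such a layer could slot into the question's model. It reuses the question's max_features and size placeholders, and assumes the Embedding layer uses mask_zero=True, since that is what produces the mask MeanPool consumes:

# Hypothetical usage sketch -- Keras 1 style, matching the question's code.
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Activation

model = Sequential()
model.add(Embedding(max_features, 128, mask_zero=True))  # mask_zero=True produces the mask
model.add(LSTM(size, return_sequences=True, dropout_W=0.2, dropout_U=0.2))
model.add(MeanPool())  # replaces GlobalAveragePooling1D()
model.add(Dense(1))
model.add(Activation('sigmoid'))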
Answer 1 (score: 3)
jacoxu's answer is correct. But if you are using the TensorFlow backend for Keras, the Tensor type does not support the dimshuffle function; try this instead:
# Drop-in replacement for MeanPool.call above (requires `import tensorflow as tf`).
def call(self, x, mask=None):
    if mask is not None:
        # mask (batch, time)
        mask = K.cast(mask, K.floatx())
        # mask (batch, x_dim, time)
        mask = K.repeat(mask, x.shape[-1])
        # mask (batch, time, x_dim)
        mask = tf.transpose(mask, [0, 2, 1])
        x = x * mask
        return K.sum(x, axis=1) / K.sum(mask, axis=1)
    # no mask: plain mean over time
    return K.mean(x, axis=1)
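As a side note, a backend-agnostic alternative (a sketch along the same lines, not taken from the answers here) is to add the broadcast axis with K.expand_dims, which avoids both dimshuffle and tf.transpose; a trailing singleton axis broadcasts across the feature dimension on either backend:

def call(self, x, mask=None):
    if mask is not None:
        mask = K.cast(mask, K.floatx())  # (batch, time)
        mask = K.expand_dims(mask)       # (batch, time, 1), broadcasts over features
        x = x * mask
        return K.sum(x, axis=1) / K.sum(mask, axis=1)
    return K.mean(x, axis=1)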
Answer 2 (score: 2)
Since average pooling is just taking a mean over one axis, you only need to correct the number of elements in that mean; loss masking is handled at the end, not here. You can do it like this:
class GlobalAveragePooling1DMasked(GlobalAveragePooling1D):
    def call(self, x, mask=None):
        if mask is not None:
            # count only the non-masked timesteps in the denominator
            mask = K.cast(mask, K.floatx())  # (batch, time)
            mask = K.expand_dims(mask)       # (batch, time, 1)
            x = x * mask                     # zero out masked steps
            return K.sum(x, axis=1) / K.sum(mask, axis=1)
        else:
            return super().call(x)
Answer 3 (score: 2)
This is how I did it in Keras 2 (borrowing from all of the answers, and fixing the dimensions):
import tensorflow as tf
from keras import backend as K
from keras.engine.topology import Layer

class MeanPool(Layer):
    def __init__(self, **kwargs):
        self.supports_masking = True
        super(MeanPool, self).__init__(**kwargs)

    def compute_mask(self, input, input_mask=None):
        # do not pass the mask to the next layers
        return None

    def call(self, x, mask=None):
        if mask is not None:
            # mask (batch, time)
            mask = K.cast(mask, K.floatx())
            # mask (batch, x_dim, time)
            mask = K.repeat(mask, x.shape[-1])
            # mask (batch, time, x_dim)
            mask = tf.transpose(mask, [0, 2, 1])
            x = x * mask
            return K.sum(x, axis=1) / K.sum(mask, axis=1)
        # no mask: plain mean over time
        return K.mean(x, axis=1)

    def compute_output_shape(self, input_shape):
        # remove temporal dimension
        return (input_shape[0], input_shape[2])
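To sanity-check a layer like this, one can compare its output on a padded batch with a mean computed by hand over just the real timesteps. The following is a hypothetical check (the padding index 0 and the tiny shapes are made up for illustration):

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding

model = Sequential()
model.add(Embedding(10, 4, mask_zero=True))  # index 0 is treated as padding
model.add(MeanPool())

batch = np.array([[1, 2, 0, 0]])           # two real steps, two padded steps
pooled = model.predict(batch)              # shape (1, 4)

emb = model.layers[0].get_weights()[0]     # embedding matrix, shape (10, 4)
manual = emb[[1, 2]].mean(axis=0)          # mean over the two real steps only
print(np.allclose(pooled[0], manual))      # should print True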