I have been using the default Keras Embedding layer with word embeddings in my architecture. The architecture looks like this:
left_input = Input(shape=(max_seq_length,), dtype='int32')
right_input = Input(shape=(max_seq_length,), dtype='int32')

embedding_layer = Embedding(len(embeddings), embedding_dim, weights=[embeddings],
                            input_length=max_seq_length, trainable=False)

# Embedded version of the inputs
encoded_left = embedding_layer(left_input)
encoded_right = embedding_layer(right_input)

# Since this is a siamese network, both sides share the same LSTM
shared_lstm = LSTM(n_hidden, name="lstm")

left_output = shared_lstm(encoded_left)
right_output = shared_lstm(encoded_right)
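The two outputs then go into a similarity head, which I have left out above. A minimal sketch of one common choice (an exponentiated negative Manhattan distance), purely for illustration:

from keras import backend as K
from keras.layers import Lambda
from keras.models import Model

# Similarity in (0, 1]: exp(-L1 distance) between the two encodings
malstm_distance = Lambda(
    lambda t: K.exp(-K.sum(K.abs(t[0] - t[1]), axis=1, keepdims=True))
)([left_output, right_output])

model = Model([left_input, right_input], malstm_distance)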
I want to replace the Embedding layer with ELMo embeddings, so I am using the custom embedding layer from this repo: https://github.com/strongio/keras-elmo/blob/master/Elmo%20Keras.ipynb. The embedding layer looks like this:
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K
from keras.engine import Layer

class ElmoEmbeddingLayer(Layer):
    def __init__(self, **kwargs):
        self.dimensions = 1024
        self.trainable = True
        super(ElmoEmbeddingLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.elmo = hub.Module('https://tfhub.dev/google/elmo/2', trainable=self.trainable,
                               name="{}_module".format(self.name))
        self.trainable_weights += K.tf.trainable_variables(scope="^{}_module/.*".format(self.name))
        super(ElmoEmbeddingLayer, self).build(input_shape)

    def call(self, x, mask=None):
        # The 'default' lookup returns one fixed-size embedding per input string
        result = self.elmo(K.squeeze(K.cast(x, tf.string), axis=1),
                           as_dict=True,
                           signature='default',
                           )['default']
        return result

    def compute_mask(self, inputs, mask=None):
        return K.not_equal(inputs, '--PAD--')

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.dimensions)
I changed the architecture for the new embedding layer:
# The visible layer
left_input = Input(shape=(1,), dtype="string")
right_input = Input(shape=(1,), dtype="string")

embedding_layer = ElmoEmbeddingLayer()

# Embedded version of the inputs
encoded_left = embedding_layer(left_input)
encoded_right = embedding_layer(right_input)

# Since this is a siamese network, both sides share the same recurrent layer
shared_gru = GRU(n_hidden, name="lstm")

left_output = shared_gru(encoded_left)
right_output = shared_gru(encoded_right)
But I am getting this error:

ValueError: Input 0 is incompatible with layer lstm: expected ndim=3, found ndim=2

What am I doing wrong here?
Answer 0 (score: 2)
The ELMo embedding layer outputs one embedding per input (so the output shape is (batch_size, dim)), whereas your LSTM expects a sequence (i.e. shape (batch_size, seq_length, dim)). I don't think it makes much sense to add an LSTM layer after the ELMo embedding layer, since ELMo already uses an LSTM to embed the sequence of words.
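Following that logic, a hedged sketch of the siamese model without the recurrent layer, comparing the (batch_size, 1024) sentence embeddings directly (the distance head and names here are my own illustration, not from the question):

left_input = Input(shape=(1,), dtype="string")
right_input = Input(shape=(1,), dtype="string")

embedding_layer = ElmoEmbeddingLayer()
encoded_left = embedding_layer(left_input)      # (batch_size, 1024)
encoded_right = embedding_layer(right_input)    # (batch_size, 1024)

# Compare the two sentence embeddings directly, no LSTM/GRU in between
malstm_distance = Lambda(
    lambda t: K.exp(-K.sum(K.abs(t[0] - t[1]), axis=1, keepdims=True))
)([encoded_left, encoded_right])

model = Model([left_input, right_input], malstm_distance)
# Note: Lambda does not support masking, so with this head the compute_mask
# method should be removed from ElmoEmbeddingLayer (a mask is not meaningful
# for a single sentence embedding anyway).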
Answer 1 (score: 1)
I also used that repo as a guide for building a CustomELMo + BiLSTM + CRF model, and I needed to change the dict lookup to 'elmo' instead of 'default'. As Anna Krogager points out, when the dict lookup is 'default' the output is (batch_size, dim), which is not enough for the LSTM. With the 'elmo' lookup, however, the layer returns a tensor of the right dimensions, namely of shape (batch_size, max_length, 1024).
The custom ELMo layer:
class ElmoEmbeddingLayer(Layer):
    def __init__(self, **kwargs):
        self.dimensions = 1024
        self.trainable = True
        super(ElmoEmbeddingLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.elmo = hub.Module('https://tfhub.dev/google/elmo/2', trainable=self.trainable,
                               name="{}_module".format(self.name))
        self.trainable_weights += K.tf.trainable_variables(scope="^{}_module/.*".format(self.name))
        super(ElmoEmbeddingLayer, self).build(input_shape)

    def call(self, x, mask=None):
        # The 'elmo' lookup returns per-token embeddings: (batch_size, max_length, 1024)
        result = self.elmo(K.squeeze(K.cast(x, tf.string), axis=1),
                           as_dict=True,
                           signature='default',
                           )['elmo']
        print(result)
        return result

    # def compute_mask(self, inputs, mask=None):
    #     return K.not_equal(inputs, '__PAD__')

    def compute_output_shape(self, input_shape):
        return input_shape[0], 48, self.dimensions
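As an aside, the 48 in compute_output_shape is my max_length hard-coded. A hedged variant that takes it as a constructor argument instead (the max_length parameter is my own naming, not from the repo):

class ElmoEmbeddingLayer(Layer):
    def __init__(self, max_length=48, **kwargs):  # hypothetical argument replacing the literal 48
        self.dimensions = 1024
        self.max_length = max_length
        self.trainable = True
        super(ElmoEmbeddingLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.elmo = hub.Module('https://tfhub.dev/google/elmo/2', trainable=self.trainable,
                               name="{}_module".format(self.name))
        self.trainable_weights += K.tf.trainable_variables(scope="^{}_module/.*".format(self.name))
        super(ElmoEmbeddingLayer, self).build(input_shape)

    def call(self, x, mask=None):
        return self.elmo(K.squeeze(K.cast(x, tf.string), axis=1),
                         as_dict=True,
                         signature='default',
                         )['elmo']

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.max_length, self.dimensions)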
And the model is built as follows:
from keras import layers
from keras.models import Model
from keras.layers import Bidirectional, LSTM
from keras.metrics import categorical_accuracy, mean_squared_error
from keras_contrib.layers import CRF
from keras_contrib.losses import crf_loss
from keras_contrib.metrics import crf_accuracy

def build_model():  # uses the CRF layer from keras_contrib
    input = layers.Input(shape=(1,), dtype=tf.string)
    model = ElmoEmbeddingLayer(name='ElmoEmbeddingLayer')(input)
    model = Bidirectional(LSTM(units=512, return_sequences=True))(model)
    crf = CRF(num_tags)
    out = crf(model)
    model = Model(input, out)
    model.compile(optimizer="rmsprop",
                  loss=crf_loss,
                  metrics=[crf_accuracy, categorical_accuracy, mean_squared_error])
    model.summary()
    return model
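For completeness, a hedged sketch of how the model can be driven; the toy sentence, the num_tags value, and the '__PAD__' padding convention are illustrative assumptions. hub.Module runs in TF 1.x graph mode, so its variables and string lookup tables must be initialized after the graph is built:

import numpy as np

sess = tf.Session()
K.set_session(sess)

num_tags = 5  # hypothetical tag-set size
model = build_model()

# Initialize the hub module's variables and lookup tables once the graph exists
sess.run(tf.global_variables_initializer())
sess.run(tf.tables_initializer())

# Toy input: one sentence per sample, whitespace-joined and padded to 48 tokens
sentence = ' '.join(['the', 'cat', 'sat'] + ['__PAD__'] * 45)
X = np.array([[sentence]], dtype=object)   # shape (1, 1), dtype string
y = np.zeros((1, 48, num_tags))            # one-hot tag per token
y[0, :, 0] = 1.0

model.fit(X, y, batch_size=1, epochs=1)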
I hope my code works for you, even though it's not exactly the same model. Note that I had to comment out the compute_mask method because it threw

InvalidArgumentError: Incompatible shapes: [32,47] vs. [32,0] [[{{node loss/crf_1_loss/mul_6}}]]

where 32 is the batch size and 47 is one less than my specified max_length (presumably meaning it was masking the padding token itself). I haven't worked out the cause of that error yet, so it might work fine for you and your model. However, I notice you are using a GRU, and there is an unresolved issue on the repo about adding a GRU. So I'm curious whether you hit that as well.