Error when adding an embedding layer to an LSTM autoencoder

Date: 2019-06-03 20:16:43

Tags: tensorflow keras lstm autoencoder word-embedding

I have a seq2seq model that works fine. I want to add an embedding layer to this network, but I ran into an error.

This is my architecture using pre-trained word embeddings, which works fine (the code is almost the same as the code available here, but I want to include the Embedding layer in the model rather than using the pre-trained embedding vectors):



LATENT_SIZE = 20

inputs = Input(shape=(SEQUENCE_LEN, EMBED_SIZE), name="input")

encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(inputs)
encoded = Lambda(rev_ent)(encoded)
decoded = RepeatVector(SEQUENCE_LEN, name="repeater")(encoded)
decoded = Bidirectional(LSTM(EMBED_SIZE, return_sequences=True), merge_mode="sum", name="decoder_lstm")(decoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="sgd", loss='mse')
autoencoder.summary()
NUM_EPOCHS = 1

num_train_steps = len(Xtrain) // BATCH_SIZE
num_test_steps = len(Xtest) // BATCH_SIZE

checkpoint = ModelCheckpoint(filepath=os.path.join('Data/', "simple_ae_to_compare"), save_best_only=True)
history = autoencoder.fit_generator(train_gen, steps_per_epoch=num_train_steps, epochs=NUM_EPOCHS, validation_data=test_gen, validation_steps=num_test_steps, callbacks=[checkpoint])

This is the summary:

Layer (type)                 Output Shape              Param #   
=================================================================
input (InputLayer)           (None, 45, 50)            0         
_________________________________________________________________
encoder_lstm (Bidirectional) (None, 20)                11360     
_________________________________________________________________
lambda_1 (Lambda)            (512, 20)                 0         
_________________________________________________________________
repeater (RepeatVector)      (512, 45, 20)             0         
_________________________________________________________________
decoder_lstm (Bidirectional) (512, 45, 50)             28400  

When I change the code to add the embedding layer, like this:

inputs = Input(shape=(SEQUENCE_LEN,), name="input")

embedding = Embedding(output_dim=EMBED_SIZE, input_dim=VOCAB_SIZE, input_length=SEQUENCE_LEN, trainable=True)(inputs)
encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(embedding)

I receive this error: expected decoder_lstm to have 3 dimensions, but got array with shape (512, 45)

So my question is: what is wrong with my model?

Update

So this error is raised during the training phase. I also checked the dimensions of the data being fed to the model: it is (61598, 45), which clearly does not have a feature dimension (Embed_dim).
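
(A quick way to compare what the model expects with what the generator actually yields, a sketch assuming the Embedding-layer model and the train_gen used in fit_generator above:)

print("model input :", autoencoder.input_shape)    # shape the Input layer expects
print("model output:", autoencoder.output_shape)   # decoder_lstm produces a 3-D output

x_batch, y_batch = next(train_gen)
print("batch X:", x_batch.shape)   # must match the model input shape
print("batch Y:", y_batch.shape)   # must be 3-D to match the decoder output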

But why does this error occur in the decoder part? In the encoder part I have included the embedding layer, so that part is fine; but when the data reaches the decoder part there is no embedding layer, so it cannot be correctly reshaped into three dimensions.

Now the question is: why does this not happen in similar code? This is my view, please correct me if I am wrong: seq2seq code is usually used for translation or summarization, and in those codes the decoder part also has its own input (in the translation case, the other language is fed to the decoder, so having an embedding in the decoder part makes sense). Here there is no separate input to the decoder, which is why I do not need any separate embedding in the decoder part. Still, I do not know how to fix the problem; I only know why it happens :|

Update2

This is the data I am feeding into the model; parsed_sentences is 61598 sentences, which are padded. (The Lambda layer rev_ent shown in the architecture above is also part of the model; I only mention it in case it has any effect.)

sent_wids = np.zeros((len(parsed_sentences),SEQUENCE_LEN),'int32')
sample_seq_weights = np.zeros((len(parsed_sentences),SEQUENCE_LEN),'float')
for index_sentence in range(len(parsed_sentences)):
    temp_sentence = parsed_sentences[index_sentence]
    temp_words = nltk.word_tokenize(temp_sentence)
    for index_word in range(SEQUENCE_LEN):
        if index_word < sent_lens[index_sentence]:
            sent_wids[index_sentence,index_word] = lookup_word2id(temp_words[index_word])
        else:
            sent_wids[index_sentence, index_word] = lookup_word2id('PAD')

def sentence_generator(X,embeddings, batch_size, sample_weights):
    while True:
        # loop once per epoch
        num_recs = X.shape[0]
        indices = np.random.permutation(np.arange(num_recs))
        # print(embeddings.shape)
        num_batches = num_recs // batch_size
        for bid in range(num_batches):
            sids = indices[bid * batch_size : (bid + 1) * batch_size]
            temp_sents = X[sids, :]
            Xbatch = embeddings[temp_sents]
            weights = sample_weights[sids, :]
            yield Xbatch, Xbatch
LATENT_SIZE = 60

train_size = 0.95
split_index = int(math.ceil(len(sent_wids)*train_size))
Xtrain = sent_wids[0:split_index, :]
Xtest = sent_wids[split_index:, :]
train_w = sample_seq_weights[0: split_index, :]
test_w = sample_seq_weights[split_index:, :]
train_gen = sentence_generator(Xtrain, embeddings, BATCH_SIZE,train_w)
test_gen = sentence_generator(Xtest, embeddings , BATCH_SIZE,test_w)

Thanks for your help :)

1 Answer:

Answer 0 (score: 1)

I tried the following example on Google Colab (TensorFlow version 1.13.1):

from tensorflow.python import keras
import numpy as np

SEQUENCE_LEN = 45
LATENT_SIZE = 20
EMBED_SIZE = 50
VOCAB_SIZE = 100

inputs = keras.layers.Input(shape=(SEQUENCE_LEN,), name="input")

embedding = keras.layers.Embedding(output_dim=EMBED_SIZE, input_dim=VOCAB_SIZE, input_length=SEQUENCE_LEN, trainable=True)(inputs)

encoded = keras.layers.Bidirectional(keras.layers.LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(embedding)
decoded = keras.layers.RepeatVector(SEQUENCE_LEN, name="repeater")(encoded)
decoded = keras.layers.Bidirectional(keras.layers.LSTM(EMBED_SIZE, return_sequences=True), merge_mode="sum", name="decoder_lstm")(decoded)
autoencoder = keras.models.Model(inputs, decoded)
autoencoder.compile(optimizer="sgd", loss='mse')
autoencoder.summary()

Then I trained the model with some random data:


NUM_EPOCHS = 1

x = np.random.randint(0, 90, size=(10, 45))   # random word ids (all below VOCAB_SIZE)
y = np.random.normal(size=(10, 45, 50))       # random target vectors of shape (batch, SEQUENCE_LEN, EMBED_SIZE)
history = autoencoder.fit(x, y, epochs=NUM_EPOCHS)
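
As a quick sanity check that the shapes line up (a small sketch using the same random input x from above):

preds = autoencoder.predict(x)
print(preds.shape)   # (10, 45, 50): batch of 10, SEQUENCE_LEN 45, EMBED_SIZE 50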

This solution works fine. I feel the issue might be with the way you are computing the inputs/labels (targets) for the MSE loss.

Update

Context

In the original problem, you are trying to reconstruct word embeddings with a seq2seq model, where the embeddings are fixed and pre-trained. However, you want to use a trainable Embedding layer as part of the model, which makes this problem very difficult to model: you no longer have fixed targets (the targets change after every single optimization step, because the embedding layer itself is being updated), and this in turn makes the optimization very unstable.
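
To make this concrete, here is a small sketch of the "moving target" (it reuses the embedding-layer model built above; the batch below is made up purely for illustration):

import numpy as np

# Snapshot the trainable embedding matrix, build targets from it, then take one optimization step.
embedding_layer = autoencoder.layers[1]              # the Embedding layer
weights_before = embedding_layer.get_weights()[0]    # copy of the current embedding matrix

x_batch = np.random.randint(0, VOCAB_SIZE, size=(32, SEQUENCE_LEN))
y_batch = weights_before[x_batch]                    # targets derived from the current embeddings

autoencoder.train_on_batch(x_batch, y_batch)         # one training step

weights_after = embedding_layer.get_weights()[0]
print(np.abs(weights_after - weights_before).max())  # > 0: the embeddings, and hence the targets, have already moved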

Fixing your code

If you do the following, you should be able to get the code working. Here, embeddings is the pre-trained GloVe vectors as a numpy.ndarray.

def sentence_generator(X, embeddings, batch_size):
    while True:
        # loop once per epoch
        num_recs = X.shape[0]
        embed_size = embeddings.shape[1]
        indices = np.random.permutation(np.arange(num_recs))
        # print(embeddings.shape)
        num_batches = num_recs // batch_size
        for bid in range(num_batches):
            sids = indices[bid * batch_size : (bid + 1) * batch_size]
            # Xbatch is a [batch_size, seq_length] array
            Xbatch = X[sids, :] 

            # Creating the Y targets
            Xembed = embeddings[Xbatch.reshape(-1),:]
            # Ybatch will be [batch_size, seq_length, embed_size] array
            Ybatch = Xembed.reshape(batch_size, -1, embed_size)
            yield Xbatch, Ybatch
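
For completeness, a possible way to plug this generator into training, mirroring the fit_generator call from the question (a sketch: Xtrain, Xtest, embeddings and the embedding-layer autoencoder are assumed to be defined as above; BATCH_SIZE is whatever you used in your own setup):

BATCH_SIZE = 512   # assumed value; replace with your own
NUM_EPOCHS = 1

train_gen = sentence_generator(Xtrain, embeddings, BATCH_SIZE)
test_gen = sentence_generator(Xtest, embeddings, BATCH_SIZE)

num_train_steps = len(Xtrain) // BATCH_SIZE
num_test_steps = len(Xtest) // BATCH_SIZE

history = autoencoder.fit_generator(train_gen,
                                    steps_per_epoch=num_train_steps,
                                    epochs=NUM_EPOCHS,
                                    validation_data=test_gen,
                                    validation_steps=num_test_steps)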