I've been trying to loosely reproduce the sentence autoencoder from an example in the Deep Learning with Keras book. I re-coded the example to use an Embedding layer instead of a sentence generator, and to use fit rather than fit_generator.
My code is as follows:
# Assumed imports (standalone Keras); df is the asker's own DataFrame.
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, Embedding, LSTM, Bidirectional, RepeatVector
from keras.models import Model

df_train_text = df['string']
max_length = 80
embedding_dim = 300
latent_dim = 512
batch_size = 64
num_epochs = 10
# prepare tokenizer
t = Tokenizer(filters='')
t.fit_on_texts(df_train_text)
word_index = t.word_index
vocab_size = len(t.word_index) + 1
# integer encode the documents
encoded_train_text = t.texts_to_matrix(df_train_text)
padded_train_text = pad_sequences(encoded_train_text, maxlen=max_length, padding='post')
padding_train_text = np.asarray(padded_train_text, dtype='int32')
embeddings_index = {}
f = open('/Users/embedding_file.txt')
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
#Found 51328 word vectors.
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        # words not found in embedding index will be all-zeros.
        embedding_matrix[i] = embedding_vector
embedding_layer = Embedding(vocab_size,
                            embedding_dim,
                            weights=[embedding_matrix],
                            input_length=max_length,
                            trainable=False)
inputs = Input(shape=(max_length,), name="input")
embedding_layer = embedding_layer(inputs)
encoder = Bidirectional(LSTM(latent_dim), name="encoder_lstm", merge_mode="sum")(embedding_layer)
decoder = RepeatVector(max_length)(encoder)
decoder = Bidirectional(LSTM(embedding_dim, name='decoder_lstm', return_sequences=True), merge_mode="sum")(decoder)
autoencoder = Model(inputs, decoder)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(padded_train_text, padded_train_text,
                epochs=num_epochs,
                batch_size=batch_size,
                callbacks=[checkpoint])
I verified that my layer shapes match those in the example, but when I try to fit the autoencoder I get the following error:
ValueError: Error when checking target: expected bidirectional_1 to have 3 dimensions, but got array with shape (36320, 80)
Other things I've tried include switching texts_to_matrix to texts_to_sequences and wrapping/not wrapping the padded strings.
I also came across this post, which seems to suggest I'm going about this the wrong way. Is it possible to fit an autoencoder with an Embedding layer the way I've coded it? If not, can someone help explain the fundamental difference between the provided example and my version?
Edit: I removed the return_sequences=True argument from the last layer and received the following error: ValueError: Error when checking target: expected bidirectional_1 to have shape (300,) but got array with shape (80,)
After that update, my layer shapes are:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           (None, 80)                0
_________________________________________________________________
embedding_8 (Embedding)      (None, 80, 300)           2440200
_________________________________________________________________
encoder_lstm (Bidirectional) (None, 512)               3330048
_________________________________________________________________
repeat_vector_8 (RepeatVecto (None, 80, 512)           0
_________________________________________________________________
bidirectional_8 (Bidirection (None, 300)               1951200
=================================================================
Total params: 7,721,448
Trainable params: 5,281,248
Non-trainable params: 2,440,200
_________________________________________________________________
Am I missing something between the RepeatVector layer and the final layer of the model that would let it return a shape of (None, 80, 300) rather than the (None, 300) it currently produces?
Answer 0 (score: 1)
The Embedding layer takes a sequence of integers (i.e. word indices) of shape (num_words,) as input and gives the corresponding embeddings of shape (num_words, embd_dim) as output. Therefore, after fitting the Tokenizer instance on the given texts, you need to use its texts_to_sequences() method to convert each text to a sequence of integers:
encoded_train_text = t.texts_to_sequences(df_train_text)
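For illustration, here is a small self-contained sketch (with made-up toy texts, not your data) of the difference between what the two methods return:
# Toy example only; toy_texts is invented for illustration.
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

toy_texts = ["the cat sat on the mat", "dogs bark"]
tok = Tokenizer(filters='')
tok.fit_on_texts(toy_texts)

as_matrix = tok.texts_to_matrix(toy_texts)                  # one bag-of-words row per text, not word indices
as_seqs = tok.texts_to_sequences(toy_texts)                 # list of integer word-index sequences
padded = pad_sequences(as_seqs, maxlen=80, padding='post')  # exactly what the Embedding layer expects

print(as_matrix.shape)  # (2, 8) -- 7 unique words + 1
print(padded.shape)     # (2, 80)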
Further, since encoded_train_text will have a shape of (num_samples, max_length) after padding, the output shape of the network must be the same (i.e. because we are building an autoencoder), and therefore you need to remove the return_sequences=True argument of the last layer. Otherwise, it would give us a 3D tensor as output, which does not make sense.
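As a quick sanity check (a sketch only, assuming your autoencoder model and padded_train_text are already defined as in the question), you can compare the model's output shape with the target's shape before calling fit():
# Sketch: assumes `autoencoder` and `padded_train_text` from the question are in scope.
# For an autoencoder trained with mse, these two shapes have to line up.
print(autoencoder.output_shape)   # (None, 80, 300) while return_sequences=True is still set
print(padded_train_text.shape)    # (num_samples, 80) -- the reconstruction target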
Note that since padded_train_text is already a numpy array, the following line is redundant (and by the way, you never use padding_train_text at all):
padding_train_text = np.asarray(padded_train_text, dtype='int32')
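As a small standalone check (toy input only, not your data), you can see that pad_sequences already hands back an int32 numpy array:
# Toy example only.
from keras.preprocessing.sequence import pad_sequences

padded = pad_sequences([[1, 2, 3], [4, 5]], maxlen=80, padding='post')
print(type(padded), padded.dtype, padded.shape)  # <class 'numpy.ndarray'> int32 (2, 80)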