How to convert a Keras Sequential API model to the functional API

Date: 2019-01-24 19:41:54

Tags: keras embedding

I am new to NLP and am trying to learn the skip-gram model from this site:

https://towardsdatascience.com/understanding-feature-engineering-part-4-deep-learning-methods-for-text-data-96c44370bbfa

I am trying to implement skip-gram, and the problem I am running into is that the code below uses the Keras Sequential API, which does not support merging (shown later in the full code below):

word_model.add(Embedding(vocab_size, embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1))
word_model.add(Reshape((embed_size, )))

So I am trying to convert it to the functional API:

word_model = Embedding(input_dim=vocab_size, output_dim=embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1)

word_model = Reshape(target_shape= (embed_size,))(word_model)

But I get the following error: Unexpectedly found an instance of type <class 'keras.layers.embeddings.Embedding'>. Expected a symbolic tensor instance.

I have tried the Reshape layer and the backend, but it still does not work.

Please suggest how to convert this, or otherwise make it work.

Thanks in advance.

from keras.layers import Merge
from keras.layers.core import Dense, Reshape
from keras.layers.embeddings import Embedding
from keras.models import Sequential

# build skip-gram architecture
word_model = Sequential()
word_model.add(Embedding(vocab_size, embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1))
word_model.add(Reshape((embed_size, )))

context_model = Sequential()
context_model.add(Embedding(vocab_size, embed_size,
                  embeddings_initializer="glorot_uniform",
                  input_length=1))
context_model.add(Reshape((embed_size,)))

model = Sequential()
model.add(Merge([word_model, context_model], mode="dot"))
model.add(Dense(1, kernel_initializer="glorot_uniform", activation="sigmoid"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")

# view model summary
print(model.summary())

# visualize model structure
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot

SVG(model_to_dot(model, show_shapes=True, show_layer_names=False, 
                 rankdir='TB').create(prog='dot', format='svg'))

1 Answer:

Answer 0 (score: 1)

You first need an Input layer, which is then passed to the Embedding layer. Below is an example with two inputs (one for the target word and one for the context word):

import keras

# Each input is a single word index, matching input_length=1 in the Embedding layers
target_input = keras.layers.Input(shape=(1,))
context_input = keras.layers.Input(shape=(1,))

target_emb = Embedding(input_dim=vocab_size, output_dim=embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1)(target_input)
target_emb = Reshape((embed_size,))(target_emb)

context_emb = Embedding(input_dim=vocab_size, output_dim=embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1)(context_input)
context_emb = Reshape((embed_size,))(context_emb)

# Add the remaining layers here...

model = keras.models.Model(inputs=[target_input, context_input], outputs=output)
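
For completeness, here is a minimal sketch of those remaining layers, reproducing the dot product and sigmoid output from the original Sequential code (it repeats the final Model line so the snippet stands on its own). It uses Keras 2's Dot layer in place of the deprecated Merge(mode="dot"):

from keras.layers import Dense, Dot
from keras.models import Model

# Dot product of the two embedded word vectors (stands in for Merge(mode="dot"))
dot_product = Dot(axes=-1)([target_emb, context_emb])

# Same output layer, loss, and optimizer as the Sequential version
output = Dense(1, kernel_initializer="glorot_uniform", activation="sigmoid")(dot_product)

model = Model(inputs=[target_input, context_input], outputs=output)
model.compile(loss="mean_squared_error", optimizer="rmsprop")

print(model.summary())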