Keras load_model for a custom model with custom layers - Transformer documentation example

Asked: 2020-11-10 13:01:55

Tags: python tensorflow keras serialization transformer

I am running the following example:

https://keras.io/examples/nlp/text_classification_with_transformer/

I created and trained a model as described there, and it works well:

from tensorflow import keras
from tensorflow.keras import layers

# TokenAndPositionEmbedding and TransformerBlock are defined as in the linked example
inputs = layers.Input(shape=(maxlen,))
embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
x = embedding_layer(inputs)
transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
x = transformer_block(x, training=True)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(20, activation="relu")(x)
x = layers.Dropout(0.1)(x)
outputs = layers.Dense(2, activation="softmax")(x)

model = keras.Model(inputs=inputs, outputs=outputs)


"""
## Train and Evaluate
"""

model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(
    x_train, y_train, batch_size=1024, epochs=1, validation_data=(x_val, y_val)
)

model.save('SPAM.h5')

How do I correctly save and load such a custom model in Keras?

I tried:

best_model = tf.keras.models.load_model('SPAM.h5')

ValueError: Unknown layer: TokenAndPositionEmbedding

So the saved model is apparently missing its custom layers at load time. Passing an instance via custom_objects does not work either:

best_model = tf.keras.models.load_model(
    'SPAM.h5',
    custom_objects={"TokenAndPositionEmbedding": TokenAndPositionEmbedding()})

TypeError: __init__() missing 3 required positional arguments:
'maxlen', 'vocab_size', and 'embed_dim'

Passing the class itself instead of an instance does not solve it either:

best_model = tf.keras.models.load_model(
    'SPAM.h5',
    custom_objects={"TokenAndPositionEmbedding": TokenAndPositionEmbedding})

TypeError: __init__() got an unexpected keyword argument 'name'
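
From that last traceback, my understanding is that during deserialization Keras calls each layer's __init__ with the saved config, which includes base-layer arguments such as name, while the custom layers in the example neither accept **kwargs nor implement get_config. A minimal sketch of the change I believe is needed, shown for TokenAndPositionEmbedding (the body is copied from the linked example; the **kwargs handling and get_config are my additions, and TransformerBlock and MultiHeadSelfAttention would need the same treatment):

import tensorflow as tf
from tensorflow.keras import layers

class TokenAndPositionEmbedding(layers.Layer):
    def __init__(self, maxlen, vocab_size, embed_dim, **kwargs):
        # Accept **kwargs so Keras can pass 'name' (etc.) back in when reloading
        super().__init__(**kwargs)
        self.maxlen = maxlen
        self.vocab_size = vocab_size
        self.embed_dim = embed_dim
        self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

    def call(self, x):
        maxlen = tf.shape(x)[-1]
        positions = tf.range(start=0, limit=maxlen, delta=1)
        positions = self.pos_emb(positions)
        x = self.token_emb(x)
        return x + positions

    def get_config(self):
        # Record the constructor arguments so load_model can rebuild the layer
        config = super().get_config()
        config.update({
            "maxlen": self.maxlen,
            "vocab_size": self.vocab_size,
            "embed_dim": self.embed_dim,
        })
        return config

Presumably the model would also have to be saved again after patching the classes, since the existing SPAM.h5 was written without these configs.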



I also tried registering all of the custom layers at once:

best_model = tf.keras.models.load_model(
    'SPAM.h5',
    custom_objects={
        "TokenAndPositionEmbedding": TokenAndPositionEmbedding,
        "TransformerBlock": TransformerBlock,
        "MultiHeadSelfAttention": MultiHeadSelfAttention})
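
A workaround that seems to sidestep layer serialization entirely is to save only the weights and rebuild the architecture in code before loading them. A sketch, assuming the same hyperparameters (maxlen, vocab_size, embed_dim, num_heads, ff_dim) are in scope and using a hypothetical file name SPAM_weights.h5:

# Save only the trained weights, not the architecture
model.save_weights('SPAM_weights.h5')

# In a fresh session: rebuild the identical architecture in code...
inputs = layers.Input(shape=(maxlen,))
x = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)(inputs)
x = TransformerBlock(embed_dim, num_heads, ff_dim)(x)
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(20, activation="relu")(x)
x = layers.Dropout(0.1)(x)
outputs = layers.Dense(2, activation="softmax")(x)
best_model = keras.Model(inputs=inputs, outputs=outputs)

# ...and load the trained weights into it
best_model.load_weights('SPAM_weights.h5')

Another option might be the TensorFlow SavedModel format (model.save('SPAM', save_format='tf')), which as far as I know does not depend on get_config in the same way, though I have not verified that here.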
