Albert_base: weights from ckpt are not loaded correctly when loading with bert-for-tf2

Date: 2019-12-17 09:31:54

Tags: tensorflow checkpoint

I want to fine-tune Albert_base with a further MLM task, but I realized that no pretrained ckpt files are provided for albert-base. So my plan is to convert the saved_model (or the model loaded from tf-hub) into my own checkpoint, and then pretrain albert base with the provided code (https://github.com/google-research/ALBERT/blob/master/run_pretraining.py).

Before doing the further training, to check whether the conversion to ckpt was successful, I converted the ckpt files back to saved_model format and loaded them as a Keras layer using bert-for-tf2 (https://github.com/kpe/bert-for-tf2/tree/master/bert). However, when I load the re-converted albert_base, its embeddings differ from the embeddings loaded from the original albert_base.

Here is how I convert the original saved_model to ckpt and then back to saved_model. (I am using tf version = 1.15.0 on Colab.)

"""
Convert tf-hub module to checkpoint files.
"""
albert_module = hub.Module(
    "https://tfhub.dev/google/albert_base/2",
    trainable=True)
saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver.save(sess, './albert/model_ckpt/albert_base')
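
As a quick sanity check (not part of my original code), the tensor names stored in the new checkpoint can be listed; this is where the 'module/' prefix that turns out to matter later becomes visible:

# Optional sanity check: list the tensor names stored in the checkpoint.
for name, shape in tf.train.list_variables('./albert/model_ckpt/'):
    print(name, shape)    # names come out as 'module/bert/...'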

"""
Save model loaded from ckpt in saved_model format.
"""
from tensorflow.python.saved_model import tag_constants

graph = tf.Graph()
with tf.Session(graph=graph) as sess:
    # Restore from checkpoint
    loader = tf.train.import_meta_graph('./albert/model_ckpt/albert_base.meta')
    loader.restore(sess, tf.train.latest_checkpoint('./albert/model_ckpt/'))

    # Export checkpoint to SavedModel
    builder = tf.saved_model.builder.SavedModelBuilder('./albert/saved_model')
    builder.add_meta_graph_and_variables(sess,
                                         [],
                                         strip_default_attrs=True)
    builder.save()    

Using bert-for-tf2, I load albert_base as a Keras layer and build a simple model:

import bert
from tensorflow.keras.layers import Input, AveragePooling1D, Flatten
from tensorflow.keras.models import Model


def load_pretrained_albert():
    model_name = "albert_base"
    model_dir = bert.fetch_tfhub_albert_model(model_name, ".models")
    model_params = bert.albert_params(model_name)
    l_bert = bert.BertModelLayer.from_params(model_params, name="albert")

    # use in Keras Model here, and call model.build()
    max_seq_len = 128

    l_input_ids = Input(shape=(max_seq_len,), dtype='int32', name="l_input_ids")

    output = l_bert(l_input_ids)                              # output: [batch_size, max_seq_len, hidden_size]
    pooled_output = AveragePooling1D(pool_size=max_seq_len, data_format="channels_last")(output)
    pooled_output = Flatten()(pooled_output)

    model = Model(inputs=[l_input_ids], outputs=[pooled_output])
    model.build(input_shape=(None, max_seq_len))

    bert.load_albert_weights(l_bert, model_dir)

    return model

The code above loads the weights from the saved_model. The problem is that when I overwrite the original saved_model of albert_base with the one re-converted from the checkpoint, the embeddings it produces are different.
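
A minimal sketch of how the two can be compared, assuming two models built with load_pretrained_albert() before and after overwriting the saved_model (the model names here are placeholders, not part of the code above):

# Hypothetical comparison: model_original was built from the original saved_model,
# model_reconverted from the re-converted one.
import numpy as np

dummy_ids = np.zeros((1, 128), dtype=np.int32)        # [batch_size, max_seq_len]
emb_original = model_original.predict(dummy_ids)
emb_reconverted = model_reconverted.predict(dummy_ids)
print(np.allclose(emb_original, emb_reconverted))     # False with the re-converted model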

When I run the code above with the re-converted saved_model, I get the following warnings:

model = load_pretrained_albert()
Fetching ALBERT model: albert_base version: 2
Already  fetched:  albert_base.tar.gz
already unpacked at: .models\albert_base
loader: No value for:[albert_4/embeddings/word_embeddings/embeddings:0], i.e.:[bert/embeddings/word_embeddings] in:[.models\albert_base]
loader: No value for:[albert_4/embeddings/word_embeddings_projector/projector:0], i.e.:[bert/encoder/embedding_hidden_mapping_in/kernel] in:[.models\albert_base]
loader: No value for:[albert_4/embeddings/word_embeddings_projector/bias:0], i.e.:[bert/encoder/embedding_hidden_mapping_in/bias] in:[.models\albert_base]
loader: No value for:[albert_4/embeddings/position_embeddings/embeddings:0], i.e.:[bert/embeddings/position_embeddings] in:[.models\albert_base]
loader: No value for:[albert_4/embeddings/LayerNorm/gamma:0], i.e.:[bert/embeddings/LayerNorm/gamma] in:[.models\albert_base]
loader: No value for:[albert_4/embeddings/LayerNorm/beta:0], i.e.:[bert/embeddings/LayerNorm/beta] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/self/query/kernel:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/self/query/bias:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/self/key/kernel:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/self/key/bias:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/self/value/kernel:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/self/value/bias:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/output/dense/kernel:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/output/dense/bias:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/output/LayerNorm/gamma:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/attention/output/LayerNorm/beta:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/intermediate/kernel:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/intermediate/bias:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/output/dense/kernel:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/output/dense/bias:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/output/LayerNorm/gamma:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma] in:[.models\albert_base]
loader: No value for:[albert_4/encoder/layer_shared/output/LayerNorm/beta:0], i.e.:[bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta] in:[.models\albert_base]
Done loading 0 BERT weights from: .models\albert_base into <bert.model.BertModelLayer object at 0x0000029687449D68> (prefix:albert_4). Count of weights not found in the checkpoint was: [22]. Count of weights with mismatched shape: [0]
Unused weights from saved model:
        module/bert/embeddings/LayerNorm/beta
        module/bert/embeddings/LayerNorm/gamma
        module/bert/embeddings/position_embeddings
        module/bert/embeddings/token_type_embeddings
        module/bert/embeddings/word_embeddings
        module/bert/encoder/embedding_hidden_mapping_in/bias
        module/bert/encoder/embedding_hidden_mapping_in/kernel
        module/bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
        module/bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
        module/bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
        module/bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
        module/bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
        module/bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
        module/bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
        module/bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
        module/bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
        module/bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
        module/bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
        module/bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
        module/bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
        module/bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
        module/bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
        module/bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
        module/bert/pooler/dense/bias
        module/bert/pooler/dense/kernel
        module/cls/predictions/output_bias
        module/cls/predictions/transform/LayerNorm/beta
        module/cls/predictions/transform/LayerNorm/gamma
        module/cls/predictions/transform/dense/bias
        module/cls/predictions/transform/dense/kernel

When running with the original albert_base, the warnings are:

model = load_pretrained_albert()
Fetching ALBERT model: albert_base version: 2
Already  fetched:  albert_base.tar.gz
already unpacked at: .models\albert_base
Done loading 22 BERT weights from: .models\albert_base into <bert.model.BertModelLayer object at 0x0000029680196320> (prefix:albert_5). Count of weights not found in the checkpoint was: [0]. Count of weights with mismatched shape: [0]
Unused weights from saved model:
        bert/embeddings/token_type_embeddings
        bert/pooler/dense/bias
        bert/pooler/dense/kernel
        cls/predictions/output_bias
        cls/predictions/transform/LayerNorm/beta
        cls/predictions/transform/LayerNorm/gamma
        cls/predictions/transform/dense/bias
        cls/predictions/transform/dense/kernel

As I understand it, the weights are not loaded correctly because the names differ. Is there a way to specify the names under which the tensors are saved when saving in ckpt format? I feel that if, for example, the weight 'module/bert/embeddings/LayerNorm/beta' were saved as 'bert/embeddings/LayerNorm/beta' when saving to ckpt format, the problem would be solved. How can I get rid of the 'module/' part?

I feel like I may have made the problem sound more complicated than it is, but I tried to explain my situation as specifically as I could, just in case :)

1 answer:

Answer 0: (score: 0)

Problem solved! The issue was indeed the difference in tensor names. I changed the tensor names in the checkpoint using the following code (https://gist.github.com/batzner/7c24802dd9c5e15870b4b56e22135c96).

All I had to do was change 'module/bert/....' to 'bert/....' and it works.
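
For reference, a minimal TF 1.x sketch of the same idea (re-creating every tensor in the checkpoint under the stripped name and overwriting the checkpoint; the function name and paths are placeholders, see the linked gist for the full script):

import tensorflow as tf

def strip_prefix_from_checkpoint(checkpoint_dir, replace_from='module/', replace_to=''):
    # Rename every tensor in the checkpoint by re-creating it under the new name.
    checkpoint = tf.train.get_checkpoint_state(checkpoint_dir)
    with tf.Session() as sess:
        for var_name, _ in tf.train.list_variables(checkpoint_dir):
            value = tf.train.load_variable(checkpoint_dir, var_name)
            # e.g. 'module/bert/embeddings/LayerNorm/beta' -> 'bert/embeddings/LayerNorm/beta'
            new_name = var_name.replace(replace_from, replace_to)
            tf.Variable(value, name=new_name)
        saver = tf.train.Saver()
        sess.run(tf.global_variables_initializer())
        # Overwrite the old checkpoint with the renamed variables.
        saver.save(sess, checkpoint.model_checkpoint_path)

strip_prefix_from_checkpoint('./albert/model_ckpt/')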