Fine-tuning a locally saved Universal Sentence Encoder - 'InvalidArgumentError: Unsuccessful TensorSliceReader constructor'

Date: 2019-11-01 14:05:23

Tags: python tensorflow transfer-learning tensorflow-hub fine-tuning

I am trying to fine-tune a locally saved Universal Sentence Encoder Large v3.
The plan is to take the tf-hub module, wrap it with a simple classifier, train the model, and save the USE module with its updated weights to use for embeddings.

However, after loading the module from the local folder for retraining, I get an error saying that it cannot find the files in the temp folder "C:\Users\xxx\AppData\Local\Temp\1\tfhub_modules....":
InvalidArgumentError: Unsuccessful TensorSliceReader constructor: Failed to get matching files on C:\Users\xxx\AppData\Local\Temp\1\tfhub_modules\96e8f1d3d4d90ce86b2db128249eb8143a91db73\variables\variables: Not found: FindFirstFile failed for: C:/Users/xxx/AppData/Local/Temp/1/tfhub_modules/96e8f1d3d4d90ce86b2db128249eb8143a91db73/variables : The system cannot find the path specified. ; No such process [[{{node checkpoint_initializer_25}}]]
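The path in the error is tensorflow_hub's download cache, which by default lives under the temp directory and may be cleaned by the OS between sessions, while the exported SavedModel's variable-initializer nodes still point there. A minimal sketch of one workaround, assuming the standard TFHUB_CACHE_DIR mechanism and a hypothetical persistent folder, is to pin the cache somewhere durable before the module is first resolved:

```python
import os

# Hypothetical persistent folder; any non-temp directory works.
CACHE_DIR = r"C:\tfhub_cache"

# Must be set before tensorflow_hub resolves any module URL; hub then
# downloads and keeps the module files here instead of under %TEMP%.
os.environ["TFHUB_CACHE_DIR"] = CACHE_DIR

# import tensorflow_hub as hub
# module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
```

With the cache pinned, re-exporting the SavedModel bakes a path into the graph that is less likely to disappear between sessions.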

Saving the model:

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.saved_model import simple_save

with tf.Session(graph=tf.Graph()) as sess:
    module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3", trainable=True)
    text_input = tf.placeholder(dtype=tf.string, shape=[None])
    embeddings = module(text_input)
    # build the full graph first, then initialize variables and lookup tables
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    simple_save(sess,
        export_dir,
        inputs={'text': text_input},
        outputs={'embeddings': embeddings},
        legacy_init_op=tf.tables_initializer())
Fine-tuning:

text = ["cat", "kitten", "dog", "puppy"]
label = [0, 0, 1, 1]

graph=tf.Graph()
with tf.Session(graph=graph) as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])

    model = tf.saved_model.loader.load(export_dir=export_dir, sess=sess,
                                       tags=[tag_constants.SERVING])

    # universal sentence encoder input/output
    input_tensor_name = model.signature_def[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY].inputs['text'].name
    in_tensor = tf.get_default_graph().get_tensor_by_name(input_tensor_name)

    embedd_tensor_name = model.signature_def[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY].outputs[
        'embeddings'].name
    out_tensor = tf.get_default_graph().get_tensor_by_name(embedd_tensor_name)

    # simple classification on top of use
    input_y = tf.placeholder(tf.int32, shape=[None])
    labels = tf.one_hot(input_y, 4)
    logits = tf.layers.dense(out_tensor, 4)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

    sess.run(tf.global_variables_initializer())

    for epoch in range(2):
        feed_dict = {
            in_tensor: text,
            input_y: label
        }
        sess.run(optimizer, feed_dict)

    message_embeddings = sess.run(out_tensor, feed_dict={in_tensor: text})
    print("Embeddings after tuning:", message_embeddings)

So at the moment I suspect that something is wrong with the variable initialization, but I am not sure of the correct approach.

Any help with this would be much appreciated. Thank you.

1 Answer:

Answer 0 (score: 0)

I solved this by initializing only the variables that were still uninitialized (the optimizer's), instead of calling sess.run(tf.global_variables_initializer()):

    uninitialized_vars = []
    for var in tf.global_variables():  # tf.all_variables() is a deprecated alias
        try:
            sess.run(var)  # raises if the variable has no value yet
        except tf.errors.FailedPreconditionError:
            uninitialized_vars.append(var)
    sess.run(tf.variables_initializer(uninitialized_vars))
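The underlying "initialize only what is missing" pattern is a set difference: everything in the graph minus what the SavedModel restore already gave a value (TF1 also exposes this directly as tf.report_uninitialized_variables). A pure-Python sketch of that logic, with hypothetical variable names:

```python
def vars_to_initialize(all_vars, initialized):
    """Return the variables that still need initialization, preserving order."""
    done = set(initialized)
    return [v for v in all_vars if v not in done]

# Hypothetical names: the USE weights are restored from the SavedModel;
# the classifier layer and Adam slots are created afterwards, uninitialized.
all_vars = ["use/embedding", "dense/kernel", "dense/kernel/Adam", "beta1_power"]
restored = ["use/embedding"]
print(vars_to_initialize(all_vars, restored))
# → ['dense/kernel', 'dense/kernel/Adam', 'beta1_power']
```

Running a targeted initializer over only this remainder avoids re-triggering the restored variables' init ops, which is what reaches into the deleted tfhub temp cache.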

But now I am facing another problem: after training, I saved the fine-tuned embedding model, but its size grew to 2.3 GB instead of the original 800 MB. As I understand it, this is due to the optimizer's variables.
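That growth is expected: Adam keeps two slot variables per trainable variable (plus beta-power accumulators), roughly tripling the stored weights. One way to shrink the export is to save only the model variables, e.g. by passing a filtered list to tf.train.Saver(var_list=...). A hedged pure-Python sketch of the name filter (the variable names and slot-name patterns here are hypothetical illustrations of TF1 Adam naming):

```python
import re

def model_variables(names):
    """Drop optimizer slot variables, keeping only model weights.

    Assumes Adam's usual naming: per-variable '/Adam' and '/Adam_1' slots,
    plus global 'beta1_power' / 'beta2_power' accumulators.
    """
    slot_pat = re.compile(r"(/Adam(_\d+)?$)|(^beta\d+_power$)")
    return [n for n in names if not slot_pat.search(n)]

all_vars = ["dense/kernel", "dense/kernel/Adam", "dense/kernel/Adam_1",
            "dense/bias", "beta1_power", "beta2_power"]
print(model_variables(all_vars))  # → ['dense/kernel', 'dense/bias']
```

In a TF1 session the same filter would be applied to tf.global_variables() by name, and the resulting list handed to a Saver (or used to build a fresh export graph without the optimizer) so the checkpoint returns to roughly its original size.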