TensorFlow tf.constant_initializer is very slow

Date: 2017-06-04 11:20:57

Tags: python tensorflow lstm word2vec

I am trying to train an LSTM using pre-trained 100-dimensional word2vec embeddings:

import codecs
import time

import tensorflow as tf

@staticmethod
def load_embeddings(pre_trained_embeddings_path, word_embed_size):
    embd = []
    start_time = time.time()
    cnt = 4
    with codecs.open(pre_trained_embeddings_path, mode="r", encoding='utf-8') as f:
        for line in f:  # stream the file instead of materialising it with f.readlines()
            values = line.strip().split(' ')
            embd.append(values[1:])  # drop the word, keep the vector components
            cnt += 1
            if cnt % 100000 == 0:
                print("word-vectors loaded: %d" % cnt)

    embedding, vocab_size, embed_dim = embd, len(embd), len(embd[0])

    load_end_time = time.time()
    print("word vectors loaded from and start initialising, cnt: %d, time taken: %d secs " % (vocab_size, load_end_time - start_time))

    # bake the full embedding matrix into the graph via a constant initializer
    embedding_init = tf.constant_initializer(embedding, dtype=tf.float16)
    src_word_embedding = tf.get_variable(shape=[vocab_size, embed_dim], initializer=embedding_init,
                                         trainable=False, name='word_embedding', dtype=tf.float16)

    print("word-vectors loaded and initialised, cnt: %d, time taken: %d secs" % (vocab_size, time.time() - load_end_time))

    return src_word_embedding

Running this method produces the following output:

word vectors loaded from and start initialising, cnt: 2419080, time taken: 74 secs
word-vectors loaded and initialised, cnt: 2419080, time taken: 1647 secs

System info: tensorflow 1.1.0, tcmalloc, python 3.6, ubuntu 14.04

Taking almost half an hour just to initialise the variable seems very slow. Is this normal behaviour, or is something wrong here? Any idea what might be causing it?
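As a side note, the file-loading half of the method above can be kept light on memory by streaming the file rather than calling f.readlines(), and by converting the values to floats up front. A minimal stdlib-only sketch of that parsing step (the parse_embeddings helper and the two-word sample are illustrative, not from the question):

```python
import io

def parse_embeddings(f):
    """Stream 'word v1 v2 ...' lines into (vocab, vectors) lists."""
    vocab, vectors = [], []
    for line in f:  # iterate lazily instead of f.readlines()
        parts = line.rstrip('\n').split(' ')
        vocab.append(parts[0])
        vectors.append([float(x) for x in parts[1:]])  # convert once, up front
    return vocab, vectors

# tiny in-memory stand-in for the embeddings file
sample = io.StringIO("the 0.1 0.2 0.3\ncat 0.4 0.5 0.6\n")
vocab, vectors = parse_embeddings(sample)
print(vocab)  # ['the', 'cat']
```

This only changes the 74-second loading phase, though; the 1647 seconds are spent in the initialisation phase, which is what the answers below address.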

Update: with @sirfz's approach of feeding the embeddings through a placeholder, loading plus initialization was done in 85 secs.

Much faster.

2 answers:

Answer 0 (score: 0)

Loading large constants into the graph is not only slow, it also leaks a lot of memory. I had a similar issue, which I reported not long ago, and my best workaround was:

# placeholder for loading your saved embeddings
# (vocab_size, embed_dim, embedding and session come from the question's code)
embedding_init = tf.placeholder(tf.float16, shape=[vocab_size, embed_dim])
src_word_embedding = tf.get_variable(initializer=embedding_init, trainable=False,
                                     name='word_embedding', dtype=tf.float16)

# run initialization with the value of the embeddings placeholder;
# the data is fed at run time instead of being baked into the graph
session.run(tf.global_variables_initializer(), feed_dict={embedding_init: embedding})

Answer 1 (score: 0)

I don't know whether this is expected behaviour, but I can explain why it can be slow with a small example:

import tensorflow as tf

x = [[0, 1], [2, 3]]
a = tf.constant(x, name='a')  # the value is stored inside the graph definition
b = tf.Variable(x, name='b')
c = a + b

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs', sess.graph)
    writer.close()

When a constant is created, its value is added to the graph definition itself. If you open the graph in TensorBoard, you can see it by clicking on the a node and inspecting its value.

[screenshot: TensorBoard graph view showing the value stored inside the a node]

In my case it is a 2x2 matrix, but in your case it looks like roughly a 2.4M x 100 matrix, which is huge. So in my opinion that is the reason for the slow execution.
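For a rough sense of scale, here is a back-of-the-envelope estimate (an approximation only, ignoring protobuf overhead) of the raw tensor data that would be baked into the graph definition, using the question's reported vocabulary size and 100-dimensional float16 vectors:

```python
# rough size of the tensor data embedded in the graph as a constant
rows, cols = 2419080, 100  # vocab size and embedding dim from the question
bytes_per_value = 2        # float16
tensor_bytes = rows * cols * bytes_per_value
print(tensor_bytes)        # 483816000, i.e. close to half a gigabyte
```

Serialising and copying a blob of that size as part of graph construction is plausibly where the time goes, which is why feeding the data at run time is so much faster.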

Try initialising it as a variable instead and feeding the embeddings into it there.