Error converting a fastText model to TensorFlow Hub

Date: 2019-01-16 08:08:30

Tags: tensorflow tensorflow-hub

I am trying to convert Facebook's fastText model to the tensorflow-hub format. The two main files written for this purpose are attached below.

def _compute_ngrams(word, min_n=1, max_n=3):
    BOW, EOW = ('<', '>')  # Used by FastText to attach to all words as prefix and suffix
    ngrams = [] # batch_size, n_words, maxlen
    shape = word.shape # batch_size, n_words
    maxlen = 0
    for b in range(shape[0]): # batch
        ngram_b = []
        for w in word[b]: 
            ngram = []
            extended_word = BOW + "".join( chr(x) for x in bytearray(w)) + EOW
            if w.decode("utf-8") not in global_vocab:  # global_vocab: set of full fastText vocabulary words, defined elsewhere
                for ngram_length in range(min_n, min(len(extended_word), max_n) + 1):
                    for i in range(0, len(extended_word) - ngram_length + 1):
                        ngram.append(extended_word[i:i + ngram_length])
            else:
                ngram.append(w.decode("utf-8") )
            ngram_b.append(ngram)
            maxlen = max(maxlen, len(ngram))
        ngrams.append(ngram_b)
    for batches in ngrams:
        for words in batches:
            temp = maxlen
            r = []
            while temp > len(words):
                r.append("UNK")
                temp = temp - 1
            words.extend(r)
    return ngrams
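
To make the behaviour of this function concrete, here is a small usage sketch (it assumes global_vocab is a set of full fastText vocabulary words defined in the same module; the contents below are hypothetical):

import numpy as np

# Hypothetical vocabulary containing only the full word "Hello".
global_vocab = {"Hello"}

batch = np.array([[b"Hello", b"Hi"]])   # shape (batch_size=1, n_words=2)
print(_compute_ngrams(batch))
# "Hello" is in the vocabulary, so it stays a single token; "Hi" is expanded into
# the 1- to 3-character n-grams of "<Hi>", and the shorter list is padded with
# "UNK" up to the length of the longest n-gram list in the batch.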

import tensorflow as tf
import tensorflow_hub as hub

def make_module_spec(vocabulary_file, vocab_size, embeddings_dim=300,
                     num_oov_buckets=1):

    def module_fn():
        """Spec function for a token embedding module."""
        words = tf.placeholder(shape=[None, None], dtype=tf.string, name="tokens")
        tokens = tf.py_func(_compute_ngrams, [words], tf.string)
        embeddings_var = tf.get_variable(
            initializer=tf.zeros([vocab_size + num_oov_buckets, embeddings_dim]),
            name=EMBEDDINGS_VAR_NAME,  # name constant for the embedding variable, defined elsewhere
            dtype=tf.float32
        )

        lookup_table = tf.contrib.lookup.index_table_from_file(
            vocabulary_file=vocabulary_file,
            num_oov_buckets=num_oov_buckets,
        )
        ids = lookup_table.lookup(tokens)
        #combined_embedding = tf.reduce_mean(tf.nn.embedding_lookup(params=embeddings_var, ids=ids), axis=2)
        combined_embedding = tf.nn.embedding_lookup(params=embeddings_var, ids=ids)
        hub.add_signature("default", {"tokens": words},
                          {"default": combined_embedding})
    return hub.create_module_spec(module_fn)

The model is created in the tf-hub format as expected.
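
For context, exporting a module from this spec looks roughly like the sketch below (the vocabulary file path and size are placeholders, not the exact values I used):

import tensorflow as tf
import tensorflow_hub as hub

# "/path/to/vocab.txt" and vocab_size are placeholders for the real values.
spec = make_module_spec("/path/to/vocab.txt", vocab_size=vocab_size)
with tf.Graph().as_default():
    module = hub.Module(spec)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(tf.tables_initializer())
        module.export("/home/sahil_wadhwa/tf-hub/tf_sent", sess)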

But when I try to use the model created above, I get the error below.

Sample test code that uses the tf-hub model created above is attached here.

import tensorflow as tf
import tensorflow_hub as hub

with tf.Graph().as_default():
  module_url = "/home/sahil_wadhwa/tf-hub/tf_sent"
  embed = hub.Module(module_url)
  embeddings = embed([["Indian", "American"], ["Hello", "World"]])

  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    result = sess.run(embeddings)
    print(result)
    print(result.shape)

Here is the error I get:

Traceback (most recent call last):

  File "/home/sahil_wadhwa/.local/lib/python3.6/site-packages/tensorflow/python/ops/script_ops.py", line 195, in __call__
    raise ValueError("callback %s is not found" % token)

ValueError: callback pyfunc_0 is not found


         [[{{node module_apply_default/PyFunc}} = PyFunc[Tin=[DT_STRING], Tout=[DT_STRING], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/device:CPU:0"](Const)]]

I have been stuck on this for a long time; any help here would be appreciated.

Thanks.

1 Answer:

Answer 0 (score: 2)

Answered at https://github.com/tensorflow/hub/issues/222:

Sahil,

The problem here is that tf.py_func cannot be serialized. Serializing arbitrary Python functions is not supported (for a variety of reasons).

I see you are creating n-grams from tokens when they are not present in the vocabulary (by the way, are the n-grams to be looked up actually in the fastText vocabulary, or does it only contain full words?).

One way around this could be to rewrite the _compute_ngrams function in TensorFlow (perhaps you could use this directly, or at least get some inspiration: https://www.tensorflow.org/tfx/transform/api_docs/python/tft/ngrams).
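
For illustration, here is a rough sketch of producing character n-grams with graph ops instead of tf.py_func (it uses tf.string_split and tft.ngrams, assumes tensorflow_transform is installed, and omits the vocabulary check and "UNK" padding that _compute_ngrams performs):

import tensorflow as tf
import tensorflow_transform as tft

# Words with the '<' / '>' markers already attached.
words = tf.constant(["<Indian>", "<Hello>"])
# Split each word into single characters: a SparseTensor with one row per word.
chars = tf.string_split(words, delimiter="")
# Join runs of 1 to 3 consecutive characters back into character n-grams.
char_ngrams = tft.ngrams(chars, ngram_range=(1, 3), separator="")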