Multi-hot encoding of features in TensorFlow (Google Cloud ML, tf Estimator API)

Asked: 2017-10-10 20:07:34

Tags: machine-learning tensorflow deep-learning google-cloud-platform google-cloud-ml-engine

I have a feature that is like post tags, so for each observation the post_tag feature might be a series of tags such as "oscars,brad-pitt,awards". I'd like to be able to pass this as a feature to a TensorFlow model built with the Estimator API and running on Google Cloud ML (following this example, but adapted to my own problem).

I'm just not sure how to turn this into a multi-hot encoded feature in TensorFlow. I'm after something similar to sklearn's MultiLabelBinarizer, ideally.
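
For example, this is roughly the sort of thing I can already do in sklearn, just as a reference for what I'm after (a minimal sketch):

from sklearn.preprocessing import MultiLabelBinarizer

tags = [["oscars", "brad-pitt", "awards"],
        ["oscars", "film", "reviews"],
        ["matt-damon", "bourne"]]

mlb = MultiLabelBinarizer()
print(mlb.fit_transform(tags))  # one row per post, one column per tag
print(mlb.classes_)             # column order, e.g. ['awards', 'bourne', ...]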

I think this is somewhat related, but it's not quite what I need.

So say I have the following data:

id,post_tag
1,[oscars,brad-pitt,awards]
2,[oscars,film,reviews]
3,[matt-damon,bourne]

I'd like to featurize it, as part of the preprocessing within TensorFlow, like this:

id,post_tag_oscars,post_tag_brad_pitt,post_tag_awards,post_tag_film,post_tag_reviews,post_tag_matt_damon,post_tag_bourne
1,1,1,1,0,0,0,0
2,1,0,0,1,1,0,0
3,0,0,0,0,0,1,1

UPDATE

My post_tag_list comes in as a string like "oscars,brad-pitt,awards" in the input CSV. If I then try:

INPUT_COLUMNS = [
...
tf.contrib.lookup.HashTable(tf.contrib.lookup.KeyValueTensorInitializer('post_tag_list',
                                            tf.range(0, 10, dtype=tf.int64),
                                            tf.string, tf.int64),
                           default_value=10, name='post_tag_list'),
...]

I get this error:

Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/andrew_maguire/localDev/codeBase/pmc-analytical-data-mart/clickmodel/trainer/task.py", line 4, in <module>
    import model
  File "trainer/model.py", line 49, in <module>
    default_value=10, name='post_tag_list'),
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 276, in __init__
    super(HashTable, self).__init__(table_ref, default_value, initializer)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 162, in __init__
    self._init = initializer.initialize(self)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 348, in initialize
    table.table_ref, self._keys, self._values, name=scope)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_lookup_ops.py", line 205, in _initialize_table_v2
    values=values, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2632, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1911, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1861, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 595, in call_cpp_shape_fn
    require_shape_fn)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 659, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Shape must be rank 1 but is rank 0 for 'key_value_init' (op: 'InitializeTableV2') with input shapes: [], [], [10].

If I were to pad each post_tag_list to be something like "oscars,brad-pitt,awards,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER" so it's always 10 tags long, that might be a potential solution.
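
E.g. something like this quick sketch in plain Python, run outside the graph before the CSV is written (the pad token "OTHER" and the length of 10 are just placeholders):

def pad_tag_list(tag_str, length=10, pad_token="OTHER"):
    # Split the comma-separated tags, then pad (or truncate) to a fixed length.
    tags = tag_str.split(",")
    return ",".join((tags + [pad_token] * length)[:length])

print(pad_tag_list("oscars,brad-pitt,awards"))
# oscars,brad-pitt,awards,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER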

Or do I need to somehow know the size of the full set of post tags I might be passing in here (it's sort of undefined, as new tags get created all the time)?

2 Answers:

Answer 0 (score: 1)

Have you tried tf.contrib.lookup.HashTable?

Here is an example usage from something I've worked on: https://github.com/TensorLab/tensorfx/blob/master/src/data/_transforms.py#L160 and a sample snippet based on it:

import tensorflow as tf
session = tf.InteractiveSession()

entries = ['red', 'blue', 'green']
table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(entries,
                                                tf.range(0, len(entries), dtype=tf.int64),
                                                tf.string, tf.int64),
    default_value=len(entries), name='entries')
tf.tables_initializer().run()

value = tf.constant([['blue', 'red'], ['green', 'red']])
print(table.lookup(value).eval())

I believe lookups work for both regular tensors and SparseTensors (you might end up with the latter, given your variable-length list of values).
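
Continuing the snippet above, here is a rough sketch of the SparseTensor case (the comma-separated input and the tf.string_split call are my own assumptions, not part of the original example):

# Split variable-length, comma-separated strings into a SparseTensor of
# strings, then map the string values to ids with the same table.
raw = tf.constant(['blue,red', 'green'])
split = tf.string_split(raw, ',')
sparse_ids = tf.SparseTensor(
    indices=split.indices,
    values=table.lookup(split.values),
    dense_shape=split.dense_shape)
print(sparse_ids.eval())  # uses the InteractiveSession created above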

Answer 1 (score: 1)

There are a couple of issues to tackle here. First, there is the question of a tag set that keeps growing. You'd also like to know how to parse variable-length data from the CSV.

To handle the growing tag set, you'll need to use either OOV (out-of-vocabulary) buckets or feature hashing. Nikhil showed the latter, so I'll show the former.
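
(For reference, a minimal sketch of the feature-hashing route, not part of either answer as written, using tf.string_to_hash_bucket_fast with an arbitrarily chosen bucket count:)

import tensorflow as tf

NUM_BUCKETS = 100  # arbitrary; collisions become rarer as this grows
tags = tf.constant(["oscars", "brad-pitt", "some-brand-new-tag"])
# Hash each tag string into one of NUM_BUCKETS ids, so new tags never
# require a vocabulary update.
hashed_ids = tf.string_to_hash_bucket_fast(tags, NUM_BUCKETS)

with tf.Session() as sess:
    print(sess.run(hashed_ids))  # ids in [0, NUM_BUCKETS)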

How to parse the variable-length data from the CSV

Let's assume the column with the variable-length data uses | as a separator, e.g.

csv = [
  "1,oscars|brad-pitt|awards",
  "2,oscars|film|reviews",
  "3,matt-damon|bourne",
]

You can use code like this to convert those to a SparseTensor:

import tensorflow as tf

# Purposefully omitting "bourne" to demonstrate OOV mappings.
TAG_SET = ["oscars", "brad-pitt", "awards", "film", "reviews", "matt-damon"]
NUM_OOV = 1

def sparse_from_csv(csv):
  ids, post_tags_str = tf.decode_csv(csv, [[-1], [""]])
  table = tf.contrib.lookup.index_table_from_tensor(
      mapping=TAG_SET, num_oov_buckets=NUM_OOV, default_value=-1)
  split_tags = tf.string_split(post_tags_str, "|")
  return ids, tf.SparseTensor(
      indices=split_tags.indices,
      values=table.lookup(split_tags.values),
      dense_shape=split_tags.dense_shape)

# Optionally create an embedding for this.
TAG_EMBEDDING_DIM = 3

ids, tags = sparse_from_csv(csv)

embedding_params = tf.Variable(tf.truncated_normal([len(TAG_SET) + NUM_OOV, TAG_EMBEDDING_DIM]))
embedded_tags = tf.nn.embedding_lookup_sparse(embedding_params, sp_ids=tags, sp_weights=None)

# Test it out
with tf.Session() as s:
  s.run([tf.global_variables_initializer(), tf.tables_initializer()])
  print(s.run([ids, embedded_tags]))

You'll see output like this (since the embedding is random, the exact numbers will change):

[array([1, 2, 3], dtype=int32), array([[ 0.16852427,  0.26074541, -0.4237918 ],
       [-0.38550434,  0.32314634,  0.858069  ],
       [ 0.19339906, -0.24429649, -0.08393878]], dtype=float32)]

You can see that each column of the CSV is represented as an ndarray, with the tags now being 3-dimensional embeddings.
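
If you want the dense multi-hot matrix from the original question rather than embeddings, one possible follow-on (a sketch, reusing tags, TAG_SET and NUM_OOV from the snippet above) would be:

# Densify the sparse ids (padding with -1), one-hot them, and max-reduce over
# the tag dimension to get a multi-hot indicator: one row per post, one
# column per tag id (the last column is the OOV bucket).
dense_ids = tf.sparse_tensor_to_dense(tags, default_value=-1)
multi_hot = tf.reduce_max(
    tf.one_hot(dense_ids, depth=len(TAG_SET) + NUM_OOV), axis=1)

with tf.Session() as s:
  s.run(tf.tables_initializer())
  print(s.run(multi_hot))
  # row 0 -> [1., 1., 1., 0., 0., 0., 0.] (oscars, brad-pitt, awards)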