Using scatter_nd on top_k output

Date: 2018-10-25 19:11:20

Tags: python tensorflow

I have been trying to do something that seems simple, without success. I have a (?, 4) tensor in which every row is 4 floats between 0 and 1. I want to replace it with a new tensor in which each row keeps only its top-2 entries and has zeros everywhere else.

An example with shape (2, 4):

source = [ [0.1, 0.2, 0.5, 0.6],
           [0.8, 0.7, 0.2, 0.1] ]

result = [ [0.0, 0.0, 0.5, 0.6],
           [0.8, 0.7, 0.0, 0.0] ]

I tried using top_k on source and then feeding the indices it returns into scatter_nd, but four hours in all I actually had were shape mismatches and rank errors from scatter_nd.

I am about ready to give up, but I thought I would ask for help here first. I found two closely related questions on this site, but I could not generalize the information in them to my case.

Another approach I just tried is:

tensor = tf.constant([[0.1, 0.2, 0.8], [0.1, 0.2, 0.7]])
values, indices = tf.nn.top_k(tensor, 1)
elems = (tensor, values)
masked_a = tf.map_fn(
    lambda a: tf.where(tf.greater_equal(a[0], a[1]), a[0], tf.zeros_like(a[0])),
    elems)

But this gives me the following error:

ValueError: The two structures don't have the same number of elements.
First structure (2 elements): (tf.float32, tf.float32)
Second structure (1 elements): Tensor("map/while/Select:0", shape=(3,), dtype=float32)
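
The mismatch arises because elems is a 2-tuple while the lambda returns a single tensor; tf.map_fn assumes the output structure matches elems unless an explicit dtype is given. A minimal sketch of that fix, assuming TF 1.x and the code above:

masked_a = tf.map_fn(
    lambda a: tf.where(tf.greater_equal(a[0], a[1]), a[0], tf.zeros_like(a[0])),
    elems,
    # the fn returns a single float tensor rather than a (tensor, values)
    # pair, so map_fn must be told the output dtype explicitly
    dtype=tf.float32)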

I am new to TensorFlow, so apologies if I am missing something simple or being unclear.

Thanks!

2 answers:

Answer 0 (score: 0)

You can use tf.scatter_nd once you attach the row indices to the column indices that top_k returns.

import tensorflow as tf

source = tf.constant([
    [0.1, 0.2, 0.5, 0.6],
    [0.8, 0.7, 0.2, 0.1]])

# get indices of top k
k = 2
top_k, top_k_inds = tf.nn.top_k(source, k)

# top_k returns column indices only, so we build a
# matching tensor of row numbers to stack with them, i.e.
# [[0, 0],
#  [1, 1],
#  ...
num_rows = tf.shape(source)[0]
row_range = tf.range(num_rows)
row_tensor = tf.tile(row_range[:,None], (1, k))

# stack along the final dimension, as this is what
# scatter_nd uses as the indices
top_k_row_col_indices = tf.stack([row_tensor, top_k_inds], axis=2)

# to build a 0/1 mask, the update at every
# top-k position is simply 1
updates = tf.ones([num_rows, k], dtype=tf.float32)

# build the mask
zero_mask = tf.scatter_nd(top_k_row_col_indices, updates, [num_rows, 4])

with tf.Session() as sess:
    zeroed = source*zero_mask
    print(zeroed.eval())

This should print:

[[0.  0.  0.5 0.6]
 [0.8 0.7 0.  0. ]]
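
If the mask itself is not needed, a variant of the same idea (a sketch, reusing the tensors defined above) scatters the top-k values into place directly:

# scatter the top-k values themselves instead of building a 0/1 mask
zeroed_direct = tf.scatter_nd(top_k_row_col_indices, top_k, [num_rows, 4])

with tf.Session() as sess:
    print(zeroed_direct.eval())  # same output as above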

Answer 1 (score: -1)

Just pasting a few lines of code :)

import tensorflow as tf

def attach_indice(tensor, top_k=None):
    flatty = tf.reshape(tensor, [-1])
    orig_shape = tf.shape(tensor)
    length = tf.shape(flatty)[0]
    if top_k is not None:
        orig_shape = orig_shape[:-1]  # drop the top_k dimension
        length //= top_k
    # coordinates of every position in (the truncated) orig_shape
    indice = tf.unravel_index(tf.range(length), orig_shape)
    indice = tf.transpose(indice)
    if indice.dtype != tensor.dtype:
        indice = tf.cast(indice, tensor.dtype)
    if top_k is not None:
        _dims = len(tensor.shape) - 1  # number of coordinate columns
        # repeat each coordinate row top_k times so the rows line up with
        # the flattened entries of tensor; indice always has rank 2 here,
        # so the tile multiples are [1, top_k]
        indice = tf.reshape(tf.tile(indice, [1, top_k]), [-1, _dims])
    return tf.concat([indice, flatty[:, None]], -1)

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
# tf.enable_eager_execution()

from time import time

top_k = 3
shape = [50, 40, 100]

q = tf.random_uniform(shape)

# fast: 4.376221179962158 (GPU) / 2.483684778213501 (CPU)
v, k = tf.nn.top_k(q, top_k)
k = attach_indice(k, top_k)
s = tf.scatter_nd(k, tf.reshape(v, [-1]), shape)

# very slow: 281.82796931266785 (GPU) / 35.163344860076904 (CPU)
# s = tf.map_fn(lambda v__k__: tf.map_fn(lambda v_k_: tf.scatter_nd(v_k_[1][:, None], v_k_[0], [shape[-1]]), v__k__, q.dtype), tf.nn.top_k(q, top_k), q.dtype)

start = time()
with tf.Session() as sess:
    for _ in range(1000):
        sess.run(s)
print('time', time() - start)
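
As a quick sanity check, here is the fast path applied to the question's (2, 4) example (a sketch, assuming the definitions above):

source = tf.constant([[0.1, 0.2, 0.5, 0.6],
                      [0.8, 0.7, 0.2, 0.1]])
v, k = tf.nn.top_k(source, 2)
s = tf.scatter_nd(attach_indice(k, 2), tf.reshape(v, [-1]), tf.shape(source))

with tf.Session() as sess:
    print(sess.run(s))  # [[0.  0.  0.5 0.6]
                        #  [0.8 0.7 0.  0. ]]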