tf.while_loop with a flexible number of rows at each iteration

Date: 2018-04-29 14:52:44

Tags: tensorflow vector tensor

I am trying to fill a 2-D array inside a tf.while_loop. The problem is that the computation I run at each iteration returns a variable number of rows, and TensorFlow does not seem to allow this.

See this minimal example that reproduces the issue:

import tensorflow as tf

indices = tf.constant([2, 5, 7, 9])

num_elems = tf.shape(indices)[0]
init_array = tf.TensorArray(tf.float64, size=num_elems)
initial_i = tf.constant(0, dtype='int32')

def loop_body(i, ta):
    # Here if I choose a random rows number, it fails.
    n_rows = tf.random_uniform((), minval=0, maxval=10, dtype=tf.int64)

    # It works with a fixed row number.
    # n_rows = 2

    anchor = tf.random_normal((n_rows, 4))
    ta = ta.write(i, tf.cast(anchor, tf.float64))
    return i+1, ta

_, anchors = tf.while_loop(lambda i, ta: i < num_elems, loop_body, [initial_i, init_array])
anchors = anchors.stack()
anchors = tf.reshape(anchors, shape=(-1, 4))
anchors = tf.identity(anchors, name="anchors")

with tf.Session() as sess:
    result = sess.run(anchors)
    print(result)

It returns:

[[ 0.07496446 -0.32444516 -0.47164568  1.10953283]
 [-0.78791034  1.87736523  0.99817699  0.45336106]
 [-0.65860498 -1.1703862  -0.05761402 -0.17642537]
 [ 0.49713874  1.01805222  0.60902107  0.85543454]
 [-1.38755643 -0.70669901  0.34549037 -0.85984546]
 [-1.32419562  0.71003789  0.34984082 -1.39001906]
 [ 2.26691341 -0.63561141  0.38636214  0.02521387]
 [-1.55348766  1.0176425   0.4889268  -0.12093868]]

I am also open to alternative solutions that fill the Tensor with a variable number of rows at each iteration.

1 Answer:

Answer 0 (score: 1)

Here is a nested while_loop solution that writes into a single TensorArray:

import tensorflow as tf

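# Builds the inner-loop body: it copies row j of `anchor` into the
# TensorArray at the global offset total_size + j.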
def make_inner_loop_body(total_size, anchor):

  def _inner_loop_body(j, ta):
    return j + 1, ta.write(total_size + j, anchor[j])

  return _inner_loop_body

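# Outer-loop body: draw a random row count, generate that many 4-vectors,
# then copy them one row at a time into the shared TensorArray.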
def loop_body(i, total_size, ta):
    n_rows = tf.random_uniform((), minval=0, maxval=10, dtype=tf.int32)
    n_rows = tf.Print(n_rows, [n_rows])
    anchor = tf.random_normal((n_rows, 4), dtype=tf.float64)
    _, ta = tf.while_loop(lambda j, ta: j < n_rows,
                          make_inner_loop_body(total_size, anchor),
                          (tf.zeros([], dtype=tf.int32), ta))
    return i+1, total_size + n_rows, ta

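# Outer loop: four iterations, carrying (iteration index, total number of
# rows written so far, dynamically sized TensorArray).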
_, _, anchors = tf.while_loop(lambda i, total_size, ta: i < 4,
                              loop_body,
                              (tf.zeros([], dtype=tf.int32),
                               tf.zeros([], dtype=tf.int32),
                               tf.TensorArray(tf.float64, size=0,
                                              dynamic_size=True)))
anchors = anchors.stack()
anchors = tf.reshape(anchors, shape=(-1, 4))
anchors = tf.identity(anchors, name="anchors")

with tf.Session() as sess:
    result = sess.run(anchors)
    print("Final shape", result.shape)
    print(result)

This prints the following (the four printed values are the per-iteration row counts, and 5 + 5 + 7 + 7 = 24):

[5]
[5]
[7]
[7]
Final shape (24, 4)

I am assuming there is some reason why the random_normal values need to be generated inside the while_loop. Otherwise it would be much easier to just write:

import tensorflow as tf

n_rows = tf.random_uniform((4,), minval=0, maxval=10, dtype=tf.int32)
anchors = tf.random_normal((tf.reduce_sum(n_rows), 4), dtype=tf.float64)

with tf.Session() as sess:
    result = sess.run(anchors)
    print("Final shape", result.shape)
    print(result)
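
A shorter variant of the same idea is to let every element of the TensorArray keep its own shape and join the pieces at the end, which avoids the nested inner loop. This is only a sketch; it assumes that a TF 1.x TensorArray built with infer_shape=False accepts writes of differing shapes and that its concat() method joins the elements along the first axis:

import tensorflow as tf

indices = tf.constant([2, 5, 7, 9])
num_elems = tf.shape(indices)[0]

# infer_shape=False lets every element of the TensorArray have its own shape.
init_array = tf.TensorArray(tf.float64, size=num_elems, infer_shape=False)

def loop_body(i, ta):
    # A different (random) number of rows on each iteration.
    n_rows = tf.random_uniform((), minval=0, maxval=10, dtype=tf.int32)
    anchor = tf.random_normal((n_rows, 4), dtype=tf.float64)
    return i + 1, ta.write(i, anchor)

_, anchors = tf.while_loop(lambda i, ta: i < num_elems, loop_body,
                           [tf.constant(0, dtype=tf.int32), init_array])

# concat() only requires the elements to agree on the trailing dimensions,
# so the variable-sized blocks can be joined without a per-row inner loop.
anchors = anchors.concat()

with tf.Session() as sess:
    result = sess.run(anchors)
    print("Final shape", result.shape)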
