I was playing with the map_fn function and noticed that it outputs a TensorArray, which implies it should be able to output "jagged" tensors (where the inner tensors have different first dimensions).
I tried to see this in action with the following code:
import tensorflow as tf
import numpy as np

NUM_ARRAYS = 1000
MAX_LENGTH = 1000

lengths = tf.placeholder(tf.int32)
tArray = tf.map_fn(lambda x: tf.random_normal((x,), 0, 1),
                   lengths,
                   dtype=tf.float32)  # Should return a TensorArray.
# startTensor = tf.random_normal((tf.reduce_sum(lengths),), 0, 1)
# tArray = tf.TensorArray(tf.float32, NUM_ARRAYS)
# tArray = tArray.split(startTensor, lengths)
# outArray = tArray.concat()

with tf.Session() as sess:
    outputArray, l = sess.run(
        [tArray, lengths],
        feed_dict={lengths: np.random.randint(MAX_LENGTH, size=NUM_ARRAYS)})
    print(outputArray.shape, l)
However, I got the error:

"TensorArray has inconsistent shapes. Index 0 has shape: [259] but index 1 has shape: [773]"

This of course surprised me, since I thought TensorArrays were supposed to be able to handle this. Am I wrong?
Answer 0 (score: 7)
While tf.map_fn() does use tf.TensorArray objects internally, and a tf.TensorArray can hold objects of different sizes, this program won't work as-is because tf.map_fn() converts its tf.TensorArray result back into a tf.Tensor by stacking the elements together, and that stacking operation is what fails.

You can, however, implement the same computation yourself using the lower-level tf.while_loop() op together with a tf.TensorArray:
lengths = tf.placeholder(tf.int32)
num_elems = tf.shape(lengths)[0]
init_array = tf.TensorArray(tf.float32, size=num_elems)

def loop_body(i, ta):
    return i + 1, ta.write(i, tf.random_normal((lengths[i],), 0, 1))

_, result_array = tf.while_loop(
    lambda i, ta: i < num_elems, loop_body, [0, init_array])
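To actually fetch values out of `result_array`, one option is `concat()`, which joins the variable-length rows into a single flat tensor that can be run like any other. A sketch of that, written against the `tf.compat.v1` API so it also runs under TF 2.x (the example lengths `[3, 5, 2]` are made up for illustration):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # the answer's code is TF1 graph-mode

lengths = tf.placeholder(tf.int32)
num_elems = tf.shape(lengths)[0]
init_array = tf.TensorArray(tf.float32, size=num_elems)

def loop_body(i, ta):
    return i + 1, ta.write(i, tf.random_normal((lengths[i],), 0, 1))

_, result_array = tf.while_loop(
    lambda i, ta: i < num_elems, loop_body, [0, init_array])

# concat() joins the variable-length rows into one flat 1-D tensor, which
# can be fetched -- unlike the ragged stacking that map_fn attempts.
flat = result_array.concat()

with tf.Session() as sess:
    out = sess.run(flat, feed_dict={lengths: [3, 5, 2]})
    print(out.shape)  # (10,)
```

Concatenating loses the row boundaries, but since `lengths` is fed in anyway, the rows can be recovered by slicing.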
Answer 1 (score: 1)
Building on mrry's answer, here are a couple more examples that run under TF 2.x.
import tensorflow as tf

# ================= example 1 ==================
num_elems = 5
init_array = tf.TensorArray(tf.float32, size=num_elems, infer_shape=False)
lengths = tf.range(0, 5)

def loop_body(i, ta):
    return i + 1, ta.write(i, tf.random.normal((lengths[i],), 0, 1))

_, result_array = tf.while_loop(
    lambda i, ta: i < num_elems, loop_body, [0, init_array])

for i in range(num_elems):
    print(result_array.read(i))
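As a quick check (not part of the original answer), the rows written by example 1 really do end up with different lengths; a small sketch under TF 2.x eager execution that collects the shapes instead of printing the values:

```python
import tensorflow as tf

num_elems = 5
lengths = tf.range(0, 5)
init_array = tf.TensorArray(tf.float32, size=num_elems, infer_shape=False)

def loop_body(i, ta):
    return i + 1, ta.write(i, tf.random.normal((lengths[i],), 0, 1))

_, result_array = tf.while_loop(
    lambda i, ta: i < num_elems, loop_body, [0, init_array])

# Entry i was written with length i, so the shapes climb from (0,) to (4,).
shapes = [tuple(result_array.read(i).shape) for i in range(num_elems)]
print(shapes)  # [(0,), (1,), (2,), (3,), (4,)]
```

Note that `infer_shape=False` is what makes this legal: with the default `infer_shape=True`, the TensorArray would reject writes whose shapes disagree with the first one.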
# ================= example 2 ==================
# A TensorArray whose size is known only at run time and whose elements
# do not necessarily share a shape.
ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, infer_shape=False)

# Initialize ta with some mock data.
ta = ta.write(0, 0.0)
ta = ta.write(1, 1.0)
ta = ta.write(2, tf.constant([2.0, 2.0]))

# Loop body: double each element in place.
def loop_body(i, t):
    val = t.read(i)
    t = t.write(i, tf.multiply(2.0, val))
    return i + 1, t

# Stop condition for the while loop.
cond = lambda i, t: tf.less(i, t.size())

# Run the loop and print the results.
i = tf.constant(0)
_, result_array = tf.while_loop(cond, loop_body, [i, ta])
for i in range(result_array.size()):
    print(result_array.read(i))
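Worth noting: in newer TF 2.x releases (2.3 or later, if I recall correctly), the map_fn approach from the original question can be made to work directly, by telling map_fn via `fn_output_signature` to collect the variable-length results into a tf.RaggedTensor instead of trying to stack them into a dense tensor. A sketch, with made-up lengths:

```python
import tensorflow as tf

lengths = tf.constant([2, 5, 3])

# Declare that fn returns a variable-length 1-D float tensor per element, so
# map_fn assembles the results into a tf.RaggedTensor rather than failing
# on the dense stack.
ragged = tf.map_fn(
    lambda x: tf.random.normal((x,), 0, 1),
    lengths,
    fn_output_signature=tf.RaggedTensorSpec(shape=[None], dtype=tf.float32))

print(ragged.row_lengths())  # [2, 5, 3]
```

This gives back exactly the "jagged" result the question was after, with the row boundaries preserved on the RaggedTensor itself.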