I am trying to implement a custom Keras Layer in TensorFlow 2.0RC, and need to concatenate a [None, Q]-shaped tensor onto a [None, H, W, D]-shaped tensor to produce a [None, H, W, D + Q]-shaped tensor. The two input tensors are assumed to have the same batch size, even though it is not known in advance. Likewise, none of H, W, D, and Q are known at write time, but they are evaluated in the layer's build method when the layer is first called. The problem I am running into is broadcasting the [None, Q]-shaped tensor up to a [None, H, W, Q]-shaped tensor so it can be concatenated.
Here is an example that tries to create a Keras Model with the functional API, performing variable-batch broadcasting from shape [None, 3] to shape [None, 5, 5, 3]:
import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np
x = tf.keras.Input([3]) # Shape [None, 3]
y = kl.Reshape([1, 1, 3])(x) # Need to add empty dims before broadcasting
y = tf.broadcast_to(y, [-1, 5, 5, 3]) # Broadcast to shape [None, 5, 5, 3]
model = tf.keras.Model(inputs=x, outputs=y)
print(model(np.random.random(size=(8, 3))).shape)
TensorFlow produces the error:

InvalidArgumentError: Dimension -1 must be >= 0
Then, when I change the -1 to None, it gives me:
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 5, 5, 3]. Consider casting elements to a supported type.
How can I perform the specified broadcast?
Answer 0 (score: 0)
You need to use the dynamic shape of y to determine the batch size. The dynamic shape of a tensor y is given by tf.shape(y), a tensor op that represents the shape of y evaluated at runtime. The modified example illustrates this by using tf.where to select, dimension by dimension, between the old shape [None, 1, 1, 3] and the new shape.
import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np
x = tf.keras.Input([3]) # Shape [None, 3]
y = kl.Reshape([1, 1, 3])(x) # Need to add empty dims before broadcasting
# Retain the batch and depth dimensions, but broadcast along H and W
broadcast_shape = tf.where([True, False, False, True],
                           tf.shape(y), [0, 5, 5, 0])
y = tf.broadcast_to(y, broadcast_shape) # Broadcast to shape [None, 5, 5, 3]
model = tf.keras.Model(inputs=x, outputs=y)
print(model(np.random.random(size=(8, 3))).shape)
# prints: "(8, 5, 5, 3)"
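Applying the same dynamic-shape idea to the concatenation described in the question, a minimal sketch of a custom layer might look like the following. The layer name BroadcastConcat and the concrete dimensions are illustrative assumptions, not taken from the original post; here the target shape is assembled with tf.concat from tf.shape rather than tf.where, which amounts to the same thing:

```python
import tensorflow as tf
import tensorflow.keras.layers as kl
import numpy as np

class BroadcastConcat(kl.Layer):
    """Concatenate a [None, Q] vector onto a [None, H, W, D] feature map
    along the channel axis, producing [None, H, W, D + Q]."""
    def call(self, inputs):
        feat, vec = inputs                       # feat: [None, H, W, D], vec: [None, Q]
        vec = vec[:, tf.newaxis, tf.newaxis, :]  # -> [None, 1, 1, Q]
        # Build the target shape from dynamic shapes: keep feat's batch,
        # H, and W dimensions, and keep vec's channel dimension Q.
        target_shape = tf.concat([tf.shape(feat)[:3], tf.shape(vec)[3:]], axis=0)
        vec = tf.broadcast_to(vec, target_shape)  # -> [None, H, W, Q]
        return tf.concat([feat, vec], axis=-1)    # -> [None, H, W, D + Q]

feat = tf.keras.Input([5, 5, 4])  # Shape [None, 5, 5, 4]
vec = tf.keras.Input([3])         # Shape [None, 3]
out = BroadcastConcat()([feat, vec])
model = tf.keras.Model(inputs=[feat, vec], outputs=out)
print(model([np.random.random(size=(8, 5, 5, 4)),
             np.random.random(size=(8, 3))]).shape)
# prints: "(8, 5, 5, 7)"
```

Because the layer reads every dimension from tf.shape at call time, it should work for any batch size and any H, W, D, and Q without knowing them when the model is built.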