I want to implement a Generative Adversarial Network (GAN) with a non-fixed input size, e.g. a 4-D tensor (batch_size, None, None, 3).
But when I use tf.nn.conv2d_transpose, there is a parameter output_shape, and it must be given the true size of the result of the deconvolution op.
For example, if the size of batch_img is (64, 32, 32, 128) and w is a weight of shape (3, 3, 64, 128), then

deconv = tf.nn.conv2d_transpose(batch_img, w, output_shape=[64, 64, 64, 64], strides=[1, 2, 2, 1], padding='SAME')

gives deconv a size of (64, 64, 64, 64). So as long as I pass the true size as output_shape, there is no problem.
But I want to use a non-fixed input size (64, None, None, 128) and have deconv produce (64, None, None, 64). Instead, it raises an error like this:

TypeError: Failed to convert object of type <type 'list'> to Tensor...
So, what can I do to avoid this parameter in deconv? Or is there another way to implement a GAN with non-fixed input sizes?
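For context, the "true size" that output_shape has to carry follows the usual transposed-convolution size arithmetic. A plain-Python sketch of that arithmetic (no TensorFlow needed; the formulas match TensorFlow's SAME/VALID conventions):

```python
def deconv_out_size(in_size, kernel, stride, padding):
    """Spatial output size of tf.nn.conv2d_transpose for one dimension."""
    if padding == 'SAME':
        # SAME padding: the stride alone determines the output size.
        return in_size * stride
    if padding == 'VALID':
        # VALID padding: the kernel overhang is added back.
        return (in_size - 1) * stride + kernel
    raise ValueError(padding)

# The example above: 32x32 input, 3x3 kernel, stride 2, SAME padding -> 64x64.
print(deconv_out_size(32, kernel=3, stride=2, padding='SAME'))  # 64
```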
Answer 0 (score: 2)
With a tf.placeholder, try (64, None, None, 128)... I am not sure whether this works. For me it was the first parameter, a batch_size of non-fixed size, that was the issue, so I used -1: [64, -1, -1, 128].

tf.layers.conv2d_transpose() should work for you, since it takes tensors of different input sizes; just specify the output-shape and the output_channel and you can use it.
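A minimal sketch of that suggestion, assuming a TF 2.x-style setup (tf.keras.layers.Conv2DTranspose is the Keras counterpart of tf.layers.conv2d_transpose). The layer infers the output shape itself, so only the channel count has to be fixed:

```python
import tensorflow as tf

# Only the number of output channels is fixed up front; the spatial
# size of the input may differ from call to call.
deconv = tf.keras.layers.Conv2DTranspose(
    filters=64,      # output channels
    kernel_size=3,
    strides=2,
    padding='same')

y = deconv(tf.zeros([8, 16, 16, 128]))
print(y.shape)   # (8, 32, 32, 64): SAME padding doubles the spatial size
```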
Answer 1 (score: 0)
I ran into this problem too. Using -1, as suggested in the other answer here, does not work. Instead, you have to grab the shape of the incoming tensor and construct the output_shape argument from it. Here is an excerpt from a test I wrote. In this case it is the first dimension that is unknown, but it should work for any combination of known and unknown parameters.
filter_shape = [2, 2, 4, 2]  # assumed for this excerpt: height, width, channels-out, channels-in
output_shape = [8, 8, 4]  # height, width, channels-out. Handle batch size later
xin = tf.placeholder(dtype=tf.float32, shape=(None, 4, 4, 2), name='input')
filt = tf.placeholder(dtype=tf.float32, shape=filter_shape, name='filter')

## Find the batch size of the input tensor and add it to the front
## of output_shape
dimxin = tf.shape(xin)
ncase = dimxin[0:1]
oshp = tf.concat([ncase, output_shape], axis=0)

z1 = tf.nn.conv2d_transpose(xin, filt, oshp, strides=[1, 2, 2, 1], name='xpose_conv')
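The shape-building trick above can be checked on its own in eager mode (a minimal sketch; the batch size 5 and the tail [8, 8, 4] are arbitrary stand-ins):

```python
import tensorflow as tf

x = tf.zeros([5, 4, 4, 2])                # stand-in batch; size unknown in advance
batch = tf.shape(x)[0:1]                  # dynamic batch dimension as a 1-element tensor
tail = tf.constant([8, 8, 4], tf.int32)   # height, width, channels-out
oshp = tf.concat([batch, tail], axis=0)   # full output_shape for conv2d_transpose
print(oshp.numpy())                       # [5 8 8 4]
```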
Answer 2 (score: 0)
I found a solution: use tf.shape for the unspecified dimensions and get_shape() for the specified ones.
def get_deconv_lens(H, k, d):
    # Output length for VALID padding: conv2d_transpose accepts any length
    # whose forward VALID convolution maps back to H, and H*d + k - 1 does.
    return tf.multiply(H, d) + k - 1

def deconv2d(x, k_h=2, k_w=2, d_h=2, d_w=2, stddev=0.02, name='deconv2d'):
    # Static dimensions (batch size N, channels C) come from get_shape();
    # the unknown spatial dimensions come from tf.shape().
    shape = tf.shape(x)
    H, W = shape[1], shape[2]
    N, _, _, C = x.get_shape().as_list()
    H1 = get_deconv_lens(H, k_h, d_h)
    W1 = get_deconv_lens(W, k_w, d_w)

    with tf.variable_scope(name):
        # Filter layout is [height, width, out_channels, in_channels];
        # here the output keeps the same channel count as the input.
        w = tf.get_variable('weights', [k_h, k_w, C, C], initializer=tf.random_normal_initializer(stddev=stddev))
        biases = tf.get_variable('biases', shape=[C], initializer=tf.zeros_initializer())
        deconv = tf.nn.conv2d_transpose(x, w, output_shape=[N, H1, W1, C], strides=[1, d_h, d_w, 1], padding='VALID')
        deconv = tf.nn.bias_add(deconv, biases)
    return deconv
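One way to sanity-check the H1/W1 arithmetic above: with VALID padding, conv2d_transpose accepts any output size whose forward convolution maps back to the input size, and H*d + k - 1 satisfies that. A plain-Python sketch:

```python
def conv_out_size(size, k, d):
    # Forward VALID-padding convolution output size, as TensorFlow computes it.
    return (size - k) // d + 1

def deconv_len(H, k, d):
    # Same arithmetic as get_deconv_lens above, without tensors.
    return H * d + k - 1

# Every candidate output length maps back to the original input length.
for H in range(1, 33):
    assert conv_out_size(deconv_len(H, 2, 2), k=2, d=2) == H
```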