tensorflow conv2d_transpose gradient

Date: 2017-03-10 09:53:33

Tags: tensorflow deconvolution

I am trying to build a deconvolution network with TensorFlow.

Here is my code.

def decoder(self, activations):
    with tf.variable_scope("Decoder") as scope:

        h0 = conv2d(activations, 128, name = "d_h0_conv_1")
        h0 = lrelu(h0)
        shape = activations.get_shape().as_list()
        h0 = deconv2d(h0, [shape[0], 2 * shape[1], 2 * shape[2], 128], name = "d_h0_deconv_1") 
        h0 = lrelu(h0)

        h1 = conv2d(h0, 128, name = "d_h1_conv_1")
        h1 = lrelu(h1)
        h1 = conv2d(h1, 64, name = "d_h1_conv_2")
        h1 = lrelu(h1)
        shape = h1.get_shape().as_list()
        h1 = deconv2d(h1, [shape[0], 2 * shape[1], 2 * shape[2], 64], name = "d_h1_deconv_1") 
        h1 = lrelu(h1)

        h2 = conv2d(h1, 64, name = "d_h2_conv_1")
        h2 = lrelu(h2)
        h2 = conv2d(h2, 3, name = "d_h2_conv_2")

        output = h2
        print shape


    return output

The argument activations is basically the set of activations coming from a VGG19 network.

Here is the deconv2d() function:

def deconv2d(input_, output_shape,
             k_h=3, k_w=3, d_h=1, d_w=1, stddev=0.02,
             name="deconv2d", with_w=False):
    with tf.variable_scope(name):
        # filter : [height, width, output_channels, in_channels]
        w = tf.get_variable('w', [k_h, k_w, output_shape[-1], input_.get_shape()[-1]],
                            initializer=tf.contrib.layers.variance_scaling_initializer())

        deconv = tf.nn.conv2d_transpose(input_, w, output_shape=output_shape,
                                        strides=[1, d_h, d_w, 1], padding='SAME')

        biases = tf.get_variable('biases', [output_shape[-1]], initializer=tf.constant_initializer(0.0))
        deconv = tf.reshape(tf.nn.bias_add(deconv, biases), deconv.get_shape())

        return deconv
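For reference, tf.nn.conv2d_transpose expects output_shape to be consistent with the strides: with padding='SAME', the output's spatial dimensions should be the input's multiplied by the stride. A minimal sketch with toy shapes (not from the original post) illustrating the relationship:

    import tensorflow as tf

    # With padding='SAME', conv2d_transpose expects
    # output spatial size = input spatial size * stride.
    x = tf.zeros([30, 128, 128, 64])          # [batch, h, w, in_channels]
    w = tf.zeros([3, 3, 64, 64])              # [k_h, k_w, out_channels, in_channels]

    # Doubling 128 -> 256 needs stride 2 in the strides argument:
    y = tf.nn.conv2d_transpose(x, w, output_shape=[30, 256, 256, 64],
                               strides=[1, 2, 2, 1], padding='SAME')
    # With strides=[1, 1, 1, 1] the op can only produce a 128x128 output,
    # so an output_shape of [30, 256, 256, 64] would be inconsistent.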

And this is the loss:

with tf.name_scope("total_loss"):
    self.loss = tf.nn.l2_loss(self.output - self.images)
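For reference, tf.nn.l2_loss computes half the sum of squared entries, so this loss is equivalent to the following sketch:

    # tf.nn.l2_loss(t) == sum(t ** 2) / 2, summed over all elements of t.
    diff = self.output - self.images
    loss = tf.reduce_sum(tf.square(diff)) / 2.0   # same value as tf.nn.l2_loss(diff)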

This does not produce any output-shape compatibility error. However, with the optimization step,

with tf.variable_scope("Optimizer"):
    optimizer = tf.train.AdamOptimizer(config.learning_rate)
    grad_and_vars = optimizer.compute_gradients(self.loss, var_list = self.d_vars)
    self.d_optim = optimizer.apply_gradients(grad_and_vars)
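As an aside, the compute_gradients/apply_gradients pair above is equivalent to a single minimize call; a minimal sketch assuming the same self.loss and self.d_vars:

    # Equivalent one-liner; gradients are still computed internally,
    # so the same shape error would surface here as well.
    self.d_optim = tf.train.AdamOptimizer(config.learning_rate) \
                           .minimize(self.loss, var_list=self.d_vars)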

TensorFlow produces this error:

Traceback (most recent call last):
File "main.py", line 74, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-   packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "main.py", line 59, in main
dcgan.train(FLAGS)
File "/home/junyonglee/workspace/bi_sim/sumGAN/model.py", line 121, in train
grad_and_vars = optimizer.compute_gradients(self.loss, var_list = self.d_vars)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 354, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 500, in gradients
in_grad.set_shape(t_in.get_shape())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 425, in set_shape
self._shape = self._shape.merge_with(shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 585, in merge_with
(self, other))
ValueError: Shapes (30, 256, 256, 64) and (30, 128, 128, 64) are not compatible

The output size of the decoder is (30, 256, 256, 3), where 30 is the batch size.

It looks like at layer "d_h1_deconv_1" the incoming gradient (the gradient flowing into the op) has shape (30, 256, 256, 64), while the local gradient (the gradient with respect to the op's input) has shape (30, 128, 128, 64), which makes sense given that the op is a transposed convolution.
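One thing worth checking (an observation, not part of the original post): deconv2d defaults to d_h=1, d_w=1, yet the decoder asks it for an output_shape with doubled spatial dimensions. With stride 1 and 'SAME' padding, the transposed convolution can only map a 128x128 input to a 128x128 output, so the gradient of the requested 256x256 output cannot be reconciled with the 128x128 input, which matches the shapes in the traceback. A hedged sketch of the likely fix, assuming the intent is to upsample by 2x:

    # Sketch: pass an explicit stride of 2 so strides and output_shape agree.
    shape = h1.get_shape().as_list()
    h1 = deconv2d(h1, [shape[0], 2 * shape[1], 2 * shape[2], 64],
                  d_h=2, d_w=2, name="d_h1_deconv_1")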

Does anyone know how to backpropagate correctly through conv2d_transpose()? Thanks!

1 Answer:

Answer 0: (score: 2)

Can you show us your deconv2d function? Without it, I can't offer you much advice.

Here are the two ways I implement this kind of deconvolution function:

def transpose_deconvolution_layer(input_tensor, used_weights, new_shape, stride, scope_name):
    with tf.variable_scope(scope_name):
        output = tf.nn.conv2d_transpose(input_tensor, used_weights, output_shape=new_shape,
                                        strides=[1, stride, stride, 1], padding='SAME')
        output = tf.nn.relu(output)
        return output


def resize_deconvolution_layer(input_tensor, used_weights, new_shape, stride, scope_name):
    with tf.variable_scope(scope_name):
        # Resize-then-convolve alternative to tf.nn.conv2d_transpose
        output = tf.image.resize_images(input_tensor, (new_shape[1], new_shape[2]))
        # conv_layer is the answerer's own helper (not defined in this post)
        output, unused_weights = conv_layer(output, 3, new_shape[3] * 2, new_shape[3], 1,
                                            scope_name + "_awesome_deconv")
        return output
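A hedged usage sketch for the first helper; the weight variable and shapes below are illustrative, not from the original answer:

    # Hypothetical usage: upsample a [30, 128, 128, 64] tensor to [30, 256, 256, 64].
    w = tf.get_variable('deconv_w', [3, 3, 64, 64],   # [k_h, k_w, out_channels, in_channels]
                        initializer=tf.contrib.layers.variance_scaling_initializer())
    up = transpose_deconvolution_layer(h1, w, [30, 256, 256, 64], 2, "d_h1_deconv_1")
    # h1 is assumed to have shape [30, 128, 128, 64]; note stride 2 matches the doubled output_shape.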

Please test whether this works. If you want to read more about why I wrote both, check out this post: http://www.pinchofintelligence.com/photorealistic-neural-network-gameboy/ and this article: http://distill.pub/2016/deconv-checkerboard/

Please let me know whether this helps!

Kind regards