I removed one sub-pixel convolution and now get an error

Asked: 2018-07-12 05:36:26

Tags: python tensorflow conv-neural-network tensorlayer

I am using the source code of SRGAN, which upscales photos by a factor of 4. https://github.com/tensorlayer/srgan

However, I want to upscale photos by a factor of 2. The developer suggested removing one sub-pixel block as a way to do this. https://github.com/tensorlayer/srgan/issues/20

So I followed the approach he described.

def SRGAN_g(t_image, is_train=False, reuse=False):
    """ Generator in Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
    feature maps (n) and stride (s) feature maps (n) and stride (s)
    """
    w_init = tf.random_normal_initializer(stddev=0.02)
    b_init = None  # tf.constant_initializer(value=0.0)
    g_init = tf.random_normal_initializer(1., 0.02)
    with tf.variable_scope("SRGAN_g", reuse=reuse) as vs:
        # tl.layers.set_name_reuse(reuse) # remove for TL 1.8.0+
        n = InputLayer(t_image, name='in')
        n = Conv2d(n, 64, (3, 3), (1, 1), act=tf.nn.relu, padding='SAME', W_init=w_init, name='n64s1/c')
        temp = n

        # B residual blocks
        for i in range(16):
            nn = Conv2d(n, 64, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, b_init=b_init, name='n64s1/c1/%s' % i)
            nn = BatchNormLayer(nn, act=tf.nn.relu, is_train=is_train, gamma_init=g_init, name='n64s1/b1/%s' % i)
            nn = Conv2d(nn, 64, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, b_init=b_init, name='n64s1/c2/%s' % i)
            nn = BatchNormLayer(nn, is_train=is_train, gamma_init=g_init, name='n64s1/b2/%s' % i)
            nn = ElementwiseLayer([n, nn], tf.add, name='b_residual_add/%s' % i)
            n = nn

        n = Conv2d(n, 64, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, b_init=b_init, name='n64s1/c/m')
        n = BatchNormLayer(n, is_train=is_train, gamma_init=g_init, name='n64s1/b/m')
        n = ElementwiseLayer([n, temp], tf.add, name='add3')
        # B residual blocks end

        n = Conv2d(n, 256, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, name='n256s1/1')
        n = SubpixelConv2d(n, scale=2, n_out_channel=None, act=tf.nn.relu, name='pixelshufflerx2/1')

        n = Conv2d(n, 256, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, name='n256s1/2')
        n = SubpixelConv2d(n, scale=2, n_out_channel=None, act=tf.nn.relu, name='pixelshufflerx2/2')

        n = Conv2d(n, 3, (1, 1), (1, 1), act=tf.nn.tanh, padding='SAME', W_init=w_init, name='out')
        return n

That is the original source code. And this is my modified version:

def SRGAN_g(t_image, is_train=False, reuse=False):
    """ Generator in Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
    feature maps (n) and stride (s) feature maps (n) and stride (s)
    """
    w_init = tf.random_normal_initializer(stddev=0.02)
    b_init = None  # tf.constant_initializer(value=0.0)
    g_init = tf.random_normal_initializer(1., 0.02)
    with tf.variable_scope("SRGAN_g", reuse=reuse) as vs:
        # tl.layers.set_name_reuse(reuse) # remove for TL 1.8.0+
        n = InputLayer(t_image, name='in')
        n = Conv2d(n, 64, (3, 3), (1, 1), act=tf.nn.relu, padding='SAME', W_init=w_init, name='n64s1/c')
        temp = n

        # B residual blocks
        for i in range(16):
            nn = Conv2d(n, 64, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, b_init=b_init, name='n64s1/c1/%s' % i)
            nn = BatchNormLayer(nn, act=tf.nn.relu, is_train=is_train, gamma_init=g_init, name='n64s1/b1/%s' % i)
            nn = Conv2d(nn, 64, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, b_init=b_init, name='n64s1/c2/%s' % i)
            nn = BatchNormLayer(nn, is_train=is_train, gamma_init=g_init, name='n64s1/b2/%s' % i)
            nn = ElementwiseLayer([n, nn], tf.add, name='b_residual_add/%s' % i)
            n = nn

        n = Conv2d(n, 64, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, b_init=b_init, name='n64s1/c/m')
        n = BatchNormLayer(n, is_train=is_train, gamma_init=g_init, name='n64s1/b/m')
        n = ElementwiseLayer([n, temp], tf.add, name='add3')
        # B residual blocks end

        n = Conv2d(n, 256, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, name='n256s1/1')
        n = SubpixelConv2d(n, scale=2, n_out_channel=None, act=tf.nn.relu, name='pixelshufflerx2/1')

        n = Conv2d(n, 256, (3, 3), (1, 1), act=None, padding='SAME', W_init=w_init, name='n256s1/2')

        n = Conv2d(n, 3, (1, 1), (1, 1), act=tf.nn.tanh, padding='SAME', W_init=w_init, name='out')
        return n

As he suggested, I removed one sub-pixel block. Then the following error occurred:

ValueError: Dimension 2 in both shapes must be equal, but are 256 and 64. Shapes are [1,1,256,3] and [1,1,64,3]. for 'Assign_171' (op: 'Assign') with input shapes: [1,1,256,3], [1,1,64,3].
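The two kernel shapes in the error make sense once you look at what the sub-pixel layer does to the channel dimension. A SubpixelConv2d with scale=2 rearranges channels into space (a depth-to-space / pixel-shuffle operation), dividing the channel count by scale² = 4. In the original graph the 'out' conv therefore sees 256 / 4 = 64 input channels, so its saved kernel has shape [1,1,64,3]; with the second sub-pixel layer removed, 'out' now sees 256 channels and its variable has shape [1,1,256,3], which the checkpoint value cannot fill. A minimal NumPy sketch of the shape arithmetic (illustrative only, not the TensorLayer implementation):

```python
import numpy as np

def depth_to_space(x, scale):
    """Rearrange channel blocks into spatial blocks (the sub-pixel /
    pixel-shuffle operation). Input x has shape (N, H, W, C); output
    has shape (N, H*scale, W*scale, C // scale**2)."""
    n, h, w, c = x.shape
    assert c % (scale ** 2) == 0, "channels must be divisible by scale**2"
    c_out = c // (scale ** 2)
    x = x.reshape(n, h, w, scale, scale, c_out)
    x = x.transpose(0, 1, 3, 2, 4, 5)      # interleave the scale factors
    return x.reshape(n, h * scale, w * scale, c_out)

# The feature map entering the removed sub-pixel layer has 256 channels.
feat = np.zeros((1, 24, 24, 256))
out = depth_to_space(feat, scale=2)
print(out.shape)  # (1, 48, 48, 64) -> the 'out' conv used to see 64 channels
```

With the layer deleted, the tensor reaching 'out' keeps all 256 channels, hence the [1,1,256,3] variable in the new graph.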

How can I fix this error?

1 Answer:

Answer 0 (score: 1)

The problem is that you are (probably) trying to initialize the new network (x2) with parameters from the old network (x4). This cannot work because, as the error states, the dimensions are incompatible.

The line causing the error should look similar to this:

tl.files.load_and_assign_npz(sess=sess, name='model.npz', network=network)

To fix this, you have to retrain the network with the new scale factor and evaluate with the new model.
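If you want to warm-start that retraining, one common trick is to reuse only the saved arrays whose shapes still match the new graph and leave the mismatched ones (here, the [1,1,64,3] 'out' kernel) at their fresh initialization. The sketch below is a hypothetical, framework-agnostic helper, not a TensorLayer API; you would have to load the .npz yourself and assign the surviving arrays manually:

```python
import numpy as np

def filter_compatible_params(saved, current_shapes):
    """Return {index: array} for saved arrays whose shape matches the
    corresponding variable in the new graph. `saved` is a list of
    np.ndarrays in graph order; `current_shapes` lists the shapes of
    the new network's variables in the same order."""
    kept = {}
    for i, (arr, shape) in enumerate(zip(saved, current_shapes)):
        if arr.shape == tuple(shape):
            kept[i] = arr        # safe to restore
        # mismatched variables keep their fresh initialization
    return kept

# Toy illustration: the final kernel changed from (1,1,64,3) to (1,1,256,3).
saved = [np.zeros((3, 3, 3, 64)), np.zeros((1, 1, 64, 3))]
new_shapes = [(3, 3, 3, 64), (1, 1, 256, 3)]
kept = filter_compatible_params(saved, new_shapes)
print(sorted(kept))  # [0] -> only the first layer can be reused
```

Even with this partial restore, the layers after the removed sub-pixel block see different inputs, so some fine-tuning is still required; full retraining, as the answer says, is the clean solution.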