TensorFlow error in a GAN discriminator when building dense layers

Date: 2018-08-08 18:48:00

Tags: python python-3.x tensorflow deep-learning

I have defined the following GAN discriminator with TensorFlow in Python 3:

def discriminator(image):
    h_1 = tf.nn.conv2d(image, tf.Variable(tf.truncated_normal([5, 5, 3, 8])), [1, 2, 2, 1], padding='VALID', name='d_conv1')
    a_1 = tf.nn.relu(h_1)

    h_2 = tf.nn.conv2d(a_1, tf.Variable(tf.truncated_normal([5, 5, 8, 16])), [1, 2, 2, 1], 'VALID', name='d_conv2')
    a_2 = tf.nn.relu(h_2)

    h_3 = tf.nn.conv2d(a_2, tf.Variable(tf.truncated_normal([5, 5, 16, 32])), [1, 2, 2, 1], 'VALID', name='d_conv3')
    a_3 = tf.nn.relu(h_3)

    h_4 = tf.nn.conv2d(a_3, tf.Variable(tf.truncated_normal([5, 5, 32, 64])), [1, 2, 2, 1], 'VALID', name='d_conv4')
    a_4 = tf.nn.relu(h_4)

    h_5 = tf.layers.flatten(a_4)
    h_6 = tf.layers.dense(h_5, 100, tf.nn.relu)
    h_7 = tf.layers.dense(h_6, 10, tf.nn.relu)
    linear_out = tf.layers.dense(h_5, 2)

    return tf.nn.sigmoid(linear_out), linear_out

However, when I try to call it via:

tf.reset_default_graph()

with tf.variable_scope("G") as scope:
    z = tf.placeholder(tf.float32, [None, Z_DIM], name='z')
    G = generator(z, is_training)

with tf.variable_scope('D') as scope:
    images = tf.placeholder(tf.float32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, N_CHANNELS])

    D_real, D_real_logits = discriminator(images)
    scope.reuse_variables()
    D_fake, D_fake_logits = discriminator(G)

I get the following error:

ValueError                                Traceback (most recent call last)
<ipython-input-42-136b870612c0> in <module>()
 13     D_real, D_real_logits = discriminator(images)
 14     scope.reuse_variables()
---> 15     D_fake, D_fake_logits = discriminator(G)

<ipython-input-31-c2f5b722b345> in discriminator(image, is_training, batch_norms)
 16 
 17     h_5 = tf.layers.flatten(a_4)
---> 18     h_6 = tf.layers.dense(h_5, 100, tf.nn.relu)
 19     h_7 = tf.layers.dense(h_6, 10, tf.nn.relu)
 20     linear_out = tf.layers.dense(h_5, 2)

*[...] MANY MORE LINES*

ValueError: Variable D/dense_3/kernel does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?

If only one dense layer exists and I set tf.AUTO_REUSE as the message suggests, it "works". With multiple layers it does not, complaining about some mismatch in tensor sizes. For reference, generator is a function containing my generator layers.

Why do these dense layers not exist the second time the function is called?
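From poking around, my current hypothesis is that tf.layers.dense without an explicit name= picks the next unused "dense_N" name per graph, not per scope entry, so the second discriminator call asks for fresh variable names (D/dense_3/kernel, ...) that were never created. A rough pure-Python sketch of that hypothesis follows; the Graph/dense helpers here are stand-ins for illustration, not real TF API:

```python
# Sketch (no TensorFlow needed) of how TF 1.x auto-names dense layers.
# Assumption being tested: the "dense_N" counter is per graph, so a second
# call under reuse=True looks up names that were never created.

class Graph:
    def __init__(self):
        self.layer_counter = 0   # hypothesised per-graph counter
        self.variables = set()

    def dense(self, reuse):
        # Mimic auto-naming: dense, dense_1, dense_2, ...
        n = self.layer_counter
        name = "dense" if n == 0 else "dense_%d" % n
        self.layer_counter += 1
        var = "D/%s/kernel" % name
        if reuse:
            # Under reuse, the variable must already exist.
            if var not in self.variables:
                raise ValueError("Variable %s does not exist" % var)
        else:
            self.variables.add(var)
        return var

def discriminator(g, reuse):
    # Stand-in for the three tf.layers.dense calls in my discriminator.
    return [g.dense(reuse) for _ in range(3)]

g = Graph()
print(discriminator(g, reuse=False))
# → ['D/dense/kernel', 'D/dense_1/kernel', 'D/dense_2/kernel']
try:
    discriminator(g, reuse=True)
except ValueError as e:
    print(e)  # → Variable D/dense_3/kernel does not exist
```

This reproduces the exact name from my traceback (D/dense_3/kernel), which is why I suspect the auto-numbering rather than the reuse mechanism itself.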

0 Answers:

No answers yet