Different outputs from tf.layers.conv2d and tf.nn.conv2d with the same architecture

Asked: 2018-04-06 20:45:02

Tags: python tensorflow neural-network convolution

Note: I have read a similar post here, but it does not cover my use case.

I am building a GAN and converting my discriminator design from tf.nn.conv2d (following some example code) to tf.layers.conv2d. Both designs use the same inputs, kernel sizes, and strides, yet I get different results between the two.

Both versions should be: 28x28x1 input -> conv2d with a 5x5 kernel, stride 2, 16 filters, leaky relu -> conv2d with a 3x3 kernel, stride 2, 32 filters, leaky relu -> flatten to 7 * 7 * 32 -> dense layer with 256 neurons and leaky relu -> 1 output value.
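For reference, the spatial sizes in that chain can be sanity-checked with the output-size formula for "same" padding, ceil(n / stride) (a quick sketch, independent of TensorFlow):

```python
import math

def same_out(n, stride):
    # Output spatial size of a "same"-padded convolution: ceil(n / stride)
    return math.ceil(n / stride)

h1 = same_out(28, 2)       # after conv1: 14
h2 = same_out(h1, 2)       # after conv2: 7
flat_dim = h2 * h2 * 32    # flattened size: 7 * 7 * 32 = 1568
```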

I have checked the weight initialization. tf.layers.conv2d defaults to Xavier init, as shown here.
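Both initializers should draw from the same Glorot/Xavier uniform distribution U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). A small sketch of the limits for the two conv kernels (`xavier_uniform_limit` is my own helper, not a TensorFlow API):

```python
import numpy as np

def xavier_uniform_limit(shape):
    # For an HWIO conv kernel [h, w, in_ch, out_ch]:
    # fan_in = h * w * in_ch, fan_out = h * w * out_ch
    h, w, fin, fout = shape
    return np.sqrt(6.0 / (h * w * fin + h * w * fout))

lim1 = xavier_uniform_limit([5, 5, 1, 16])   # conv1 kernel, ~0.119
lim2 = xavier_uniform_limit([3, 3, 16, 32])  # conv2 kernel, ~0.118
```

So the distributions match in both versions; only the random draws differ.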

Layers version:

def discriminator(x):
    # Reshape to a 28x28 image with one channel of depth (greyscale)
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE) as scope:

        # Defaults to Xavier init for weights and zeros for bias
        disc_conv1 = tf.layers.conv2d(
            inputs=x,
            filters=16,
            kernel_size=5,
            strides=2,
            padding="same",
            activation=tf.nn.leaky_relu
        )
        disc_conv2 = tf.layers.conv2d(
            inputs=disc_conv1,
            filters=32,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=tf.nn.leaky_relu
        )
        disc_conv2 = tf.reshape(disc_conv2, shape=[-1, 7 * 7 * 32])
        disc_h1 = tf.layers.dense(disc_conv2, units=hidden1_dim, activation=tf.nn.leaky_relu)
        disc_logits = tf.layers.dense(disc_h1, units=1)
        disc_out = tf.nn.sigmoid(disc_logits)

    return disc_logits, disc_out

nn version:

DC_D_W1 = tf.get_variable('DC_D_W1', shape=[5, 5, 1, 16], initializer=tf.contrib.layers.xavier_initializer())
DC_D_b1 = tf.get_variable('2', initializer=tf.zeros(shape=[16]))

DC_D_W2 = tf.get_variable('3', shape=[3, 3, 16, 32], initializer=tf.contrib.layers.xavier_initializer())
DC_D_b2 = tf.get_variable('4', initializer=tf.zeros(shape=[32]))

DC_D_W3 = tf.get_variable('5', shape=[7 * 7 * 32, 256], initializer=tf.contrib.layers.xavier_initializer())
DC_D_b3 = tf.get_variable('6', initializer=tf.zeros(shape=[256]))

DC_D_W4 = tf.get_variable('7', shape=[256, 1], initializer=tf.contrib.layers.xavier_initializer())
DC_D_b4 = tf.get_variable('8', initializer=tf.zeros(shape=[1]))

theta_DC_D = [DC_D_W1, DC_D_b1, DC_D_W2, DC_D_b2, DC_D_W3, DC_D_b3, DC_D_W4, DC_D_b4]

def discriminator(x):
    x = tf.reshape(x, shape=[-1, 28, 28, 1])
    conv1 = tf.nn.leaky_relu(tf.nn.conv2d(x, DC_D_W1, strides=[1, 2, 2, 1], padding='SAME') + DC_D_b1)
    conv2 = tf.nn.leaky_relu(tf.nn.conv2d(conv1, DC_D_W2, strides=[1, 2, 2, 1], padding='SAME') + DC_D_b2)
    conv2 = tf.reshape(conv2, shape=[-1, 7 * 7 * 32])
    h = tf.nn.leaky_relu(tf.matmul(conv2, DC_D_W3) + DC_D_b3)
    logit = tf.matmul(h, DC_D_W4) + DC_D_b4
    prob = tf.nn.sigmoid(logit)

    return logit, prob
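As far as I can tell, the math in both versions is identical once the weights match: conv (or matmul) plus bias, then leaky relu with TensorFlow's default alpha of 0.2. A NumPy sketch of that shared per-layer computation (illustrative only, with made-up shapes):

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    # tf.nn.leaky_relu defaults to alpha=0.2 in both versions
    return np.where(x > 0, x, alpha * x)

# Dense layer as used in both versions: matmul + bias, then activation
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))
W = rng.standard_normal((4, 3))
b = np.zeros(3)
h = leaky_relu(x @ W + b)
```

So any divergence should come from the random initial weights, or possibly from where the layers version creates its variables (under its own scope, which an optimizer's var_list might not pick up).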

0 answers