GAN with batch normalization behaves very strangely: both discriminator and generator reach zero loss

Asked: 2019-10-10 03:26:31

Tags: tensorflow machine-learning keras deep-learning generative-adversarial-network

I am training a DCGAN model with tensorflow.keras and have added BatchNormalization layers to both the generator and the discriminator. I train the GAN in two steps:
1. Train the discriminator on real images and on images produced by the generator (obtained with generator.predict).
2. Train the adversarial network (compiled with discriminator.trainable = False).

After a few rounds, the training losses returned by train_on_batch() drop to zero for both the generator and the discriminator. But when I call test_on_batch(), the generator's loss is still very large, and the generated images are a mess.

At first I thought this was because in step 2, when training the adversarial network, the discriminator only receives fake images, so the batch statistics seen by its batch normalization layers differ from those in step 1, where both real and fake images are fed in. But the same problem persists even after I remove all batch normalization layers from the discriminator; it only disappears when I remove every batch normalization layer. I also found that the presence of Dropout layers makes no difference. I wonder why batch normalization causes this, given that the generator is always fed noise drawn from the same distribution.
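For context on the train/test loss gap: a BatchNormalization layer normalizes with the current batch's statistics in training mode, but with its moving averages in inference mode, so train_on_batch and test_on_batch can report very different losses for the exact same weights. A minimal standalone demo (not the model above; the shift of 5.0 is chosen just to make the two modes easy to tell apart):

```python
import numpy as np
import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()
# A batch whose mean is far from 0, so the two modes behave very differently.
x = np.random.normal(loc=5.0, scale=1.0, size=(32, 8)).astype(np.float32)

train_out = bn(x, training=True)   # normalizes with this batch's mean/variance
infer_out = bn(x, training=False)  # normalizes with the (freshly initialized) moving mean=0, var=1

print(float(tf.reduce_mean(train_out)))  # roughly 0
print(float(tf.reduce_mean(infer_out)))  # roughly 5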

# Model definition
# (aliases as used below)
import numpy as np
import tensorflow as tf
from tensorflow.keras import models as M, layers as L, activations as A
from tensorflow.keras import optimizers as Opt, losses as Lo
from PIL import Image
class DCGAN_128:
    def __init__(self, hidden_dim):
        generator = M.Sequential()
        generator.add(L.Dense(128 * 8 * 8, input_shape=[hidden_dim]))
        generator.add(L.Reshape([8, 8, 128]))
        generator.add(L.UpSampling2D())  # [16, 16, 128]
        generator.add(L.Conv2D(128, kernel_size=3, padding="same"))  # [16, 16, 128]
        generator.add(L.LayerNormalization())  # 4
        generator.add(L.ReLU())
        generator.add(L.UpSampling2D())  # [32, 32, 128]
        generator.add(L.Conv2D(64, kernel_size=5, padding="same"))   # [32, 32, 64]
        generator.add(L.LayerNormalization())  # 8
        generator.add(L.ReLU())
        generator.add(L.UpSampling2D())  # [64, 64, 64]
        generator.add(L.Conv2D(32, kernel_size=7, padding="same"))   # [64, 64, 32]
        generator.add(L.LayerNormalization())  # 12
        generator.add(L.ReLU())
        generator.add(L.UpSampling2D())  # [128, 128, 32]
        generator.add(L.Conv2D(3, kernel_size=3, padding="same", activation=A.sigmoid))   # [128, 128, 3]

        discriminator = M.Sequential()
        discriminator.add(L.Conv2D(32, kernel_size=5, strides=2, padding="same", input_shape=[128, 128, 3]))
        discriminator.add(L.LeakyReLU())
        # discriminator.add(L.Dropout(0.25))  # [64, 64, 32]
        discriminator.add(L.Conv2D(64, kernel_size=3, strides=2, padding="same"))
        # discriminator.add(L.BatchNormalization(epsilon=1e-5))  # 4
        discriminator.add(L.LeakyReLU())
        # discriminator.add(L.Dropout(0.25))  # [32, 32, 64]
        discriminator.add(L.Conv2D(128, kernel_size=3, strides=2, padding="same"))
        discriminator.add(L.LayerNormalization())   # 8
        discriminator.add(L.LeakyReLU())    # [16, 16, 128]
        discriminator.add(L.Dropout(0.25))
        discriminator.add(L.Conv2D(256, kernel_size=3, strides=2, padding="same"))
        discriminator.add(L.LayerNormalization())   # 12
        discriminator.add(L.LeakyReLU())    # [8, 8, 256]
        discriminator.add(L.Dropout(0.25))
        discriminator.add(L.Conv2D(512, kernel_size=3, strides=2, padding="same"))
        discriminator.add(L.LeakyReLU())    # [4, 4, 512]
        discriminator.add(L.Flatten())
        discriminator.add(L.Dense(1, activation=A.sigmoid))
        self.model_gen = generator
        self.model_dis = discriminator

        self.adv_input = L.Input([hidden_dim])
        self.adv_output = discriminator(generator(self.adv_input))
        self.model_adversarial = M.Model(self.adv_input, self.adv_output)




# Training
hidden_dim = 100
dcgan = DCGAN_128(hidden_dim)
data_loader = AnimeFacesLoader([128, 128])
batch_size = 32
n_rounds = 40000
dis_model = dcgan.model_dis
gen_model = dcgan.model_gen
adv_model = dcgan.model_adversarial
gen_model.summary()
adv_model.summary()


dis_model.compile(Opt.Adam(0.0002), Lo.binary_crossentropy)
dis_model.trainable = False
adv_model.compile(Opt.Adam(0.0002), Lo.binary_crossentropy)

layer_outputs = [layer.output for layer in dis_model.layers]
visual_model = tf.keras.Model(dis_model.input, layer_outputs)



for rounds in range(n_rounds):
    # Get output images
    if rounds % 100 == 0 and rounds > 0:
        noise = np.random.uniform(-1, 1, [16, hidden_dim])
        tiled_images = np.zeros([4*128, 4*128, 3]).astype(np.uint8)
        generated_imgs = gen_model.predict(noise)
        generated_imgs *= 255  # scale sigmoid output to [0, 255]; 256 would wrap around in uint8
        generated_imgs = generated_imgs.astype(np.uint8)
        for i in range(16):
            tiled_images[int(i / 4)*128: int(i / 4)*128 + 128,
                         int(i % 4)*128: int(i % 4)*128 + 128, :] = generated_imgs[i, :, :, :]
        Image.fromarray(tiled_images).save("Output/DCGAN/" + "rounds_{0}.jpg".format(rounds))


    '''
        layer_visualization = visual_model.predict(generated_imgs[:1])
        for i in range(len(layer_visualization)):
            plt.imshow(layer_visualization[i][0, :, :, 0])
            plt.show()
    '''

    # train discriminator on real & fake images
    real_imgs = data_loader.get_batch(batch_size)
    real_ys = np.ones([batch_size, 1])
    noise = np.random.uniform(-1, 1, [batch_size, hidden_dim])
    fake_ys = np.zeros([batch_size, 1])
    fake_imgs = gen_model.predict(noise)
    imgs = np.concatenate([real_imgs, fake_imgs], axis=0)
    ys = np.concatenate([real_ys, fake_ys], axis=0)


    loss_dis = dis_model.train_on_batch(imgs, ys)
    print("Round {}, Loss dis:{:.4f}".format(rounds, loss_dis))
    loss_dis_test = dis_model.test_on_batch(imgs, ys)
    print(loss_dis_test)

    noise = np.random.uniform(-1, 1, [batch_size, hidden_dim])
    fake_ys = np.ones([batch_size, 1])

    loss_gen = adv_model.train_on_batch(noise, fake_ys)
    print("Round {}, Loss gen:{:.4f}".format(rounds, loss_gen))
    loss_gen_test = adv_model.test_on_batch(noise, fake_ys)
    print(loss_gen_test)
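A variant I have seen suggested for this kind of batch-statistics mismatch is to update the discriminator on real and fake images in two separate batches, so that no single batch mixes the two distributions. A self-contained sketch of that scheme (the small Dense classifier here is a stand-in, not the DCGAN above):

```python
import numpy as np
import tensorflow as tf

# Stand-in binary classifier with a BatchNormalization layer.
dis = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
dis.compile(tf.keras.optimizers.Adam(2e-4), tf.keras.losses.binary_crossentropy)

real = np.random.normal(1.0, 1.0, (32, 4)).astype(np.float32)
fake = np.random.normal(-1.0, 1.0, (32, 4)).astype(np.float32)

# Two separate train_on_batch calls instead of one concatenated batch,
# so BN statistics are computed per distribution.
loss_real = dis.train_on_batch(real, np.ones((32, 1)))
loss_fake = dis.train_on_batch(fake, np.zeros((32, 1)))
loss_dis = 0.5 * (loss_real + loss_fake)
print(loss_dis)
```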

0 Answers:

No answers yet.