Discriminator and generator models both show 0 loss from the start of training

Date: 2020-05-21 04:33:54

Tags: python tensorflow keras generative-adversarial-network

 I have been learning GANs for a while and decided to build a face generator. The GAN model has the following flow:
 1) Take an image as input.
 2) Extract features from the input image and convert them into a 1x128 latent vector.
 3) Feed that vector to the generator to obtain a 128x128x3 image.
 4) Feed the generated image to the discriminator for classification.
 But from the very beginning of training, both the generator and the discriminator show a loss of 0.
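To make the four steps concrete, here is a minimal shape trace of the described pipeline in plain Python (illustrative only; the function name is mine and the sizes are taken from the post, not from the training code):

```python
def pipeline_shapes(image=(128, 128, 3), latent_dim=128):
    """Tensor shape after each stage of the described GAN flow."""
    return [
        ("input image", image),              # 1) image input
        ("latent vector", (latent_dim,)),    # 2) feature extractor output
        ("generated image", (128, 128, 3)),  # 3) generator output
        ("real/fake score", (1,)),           # 4) discriminator output
    ]
```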
    # Discriminator model
    from tensorflow.keras.applications import VGG19
    from tensorflow.keras.initializers import RandomNormal
    from tensorflow.keras.layers import (BatchNormalization, Conv2D, Dense, Dropout,
                                         Flatten, Input, LeakyReLU, Reshape, UpSampling2D)
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import Adam

    def define_dis():
        init = RandomNormal(stddev=0.02)

        input_shape = (128, 128, 3)
        input_layer = Input(shape=input_shape)

        dis = Conv2D(32, kernel_size=5, padding='same', kernel_initializer=init)(input_layer)
        dis = LeakyReLU(0.2)(dis)

        dis = Conv2D(64, kernel_size=5, padding='same', kernel_initializer=init)(dis)
        dis = BatchNormalization()(dis)
        dis = LeakyReLU(0.2)(dis)

        dis = Conv2D(128, kernel_size=5, padding='same', kernel_initializer=init)(dis)
        dis = BatchNormalization()(dis)
        dis = LeakyReLU(0.2)(dis)

        dis = Flatten()(dis)
        output = Dense(1, activation='sigmoid')(dis)

        model = Model(input_layer, output)
        opt = Adam(lr=0.0002, beta_1=0.5, beta_2=0.99, epsilon=10e-8)
        model.compile(loss='binary_crossentropy', optimizer=opt)

        return model
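One thing worth noting about the discriminator above: it uses only stride-1 `'same'` convolutions with no pooling, so the feature maps stay 128x128 all the way to `Flatten`, which then feeds a very large vector into the final `Dense(1)` layer. A quick back-of-the-envelope check (the helper name is mine, for illustration):

```python
def discriminator_flatten_units(height=128, width=128, last_filters=128):
    # Stride-1 'same' convolutions preserve spatial size, so Flatten sees
    # height * width * filters activations from the last conv block.
    return height * width * last_filters

print(discriminator_flatten_units())  # 2097152 units going into Dense(1)
```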

    # Generator model
    def define_gen(latent_dim=128):
        init = RandomNormal(stddev=0.02)

        input_layer = Input(shape=(latent_dim,))
        n_nodes = 8 * 8 * 128

        g = Dense(n_nodes)(input_layer)
        g = LeakyReLU(0.2)(g)
        g = Dropout(0.2)(g)

        g = Reshape((8, 8, 128))(g)

        g = UpSampling2D(size=(2, 2))(g)
        g = Conv2D(128, kernel_size=5, padding='same', kernel_initializer=init)(g)
        g = BatchNormalization()(g)
        g = LeakyReLU(0.2)(g)

        g = UpSampling2D(size=(2, 2))(g)
        g = Conv2D(64, kernel_size=5, padding='same', kernel_initializer=init)(g)
        g = BatchNormalization()(g)
        g = LeakyReLU(0.2)(g)

        g = UpSampling2D(size=(2, 2))(g)
        g = Conv2D(32, kernel_size=5, padding='same', kernel_initializer=init)(g)
        g = BatchNormalization()(g)
        g = LeakyReLU(0.2)(g)

        g = UpSampling2D(size=(2, 2))(g)
        output = Conv2D(3, kernel_size=5, activation='tanh', padding='same', kernel_initializer=init)(g)

        model = Model(input_layer, output)
        return model
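The generator's spatial dimensions are driven entirely by the `Reshape` to 8x8 and the four `UpSampling2D(size=(2, 2))` layers, since the stride-1 `'same'` convolutions between them keep the size unchanged. A small sketch of that progression (the function is mine, purely illustrative):

```python
def generator_spatial_sizes(start=8, n_upsamples=4, factor=2):
    """Spatial size after each UpSampling2D; 'same' stride-1 convs keep size."""
    sizes = [start]
    for _ in range(n_upsamples):
        sizes.append(sizes[-1] * factor)
    return sizes

print(generator_spatial_sizes())  # [8, 16, 32, 64, 128]
```

This confirms the generator ends at the 128x128x3 output described in step 3.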

    # Feature extractor model
    def define_fr_model(input_shape):
        vgg = VGG19(include_top=False, weights='imagenet', input_shape=input_shape, pooling='avg')

        x_input = vgg.input
        x_output = vgg.layers[-1].output
        out = Dense(128)(x_output)

        model = Model(x_input, out)
        return model

0 Answers:

There are no answers yet.