How to implement the infoGAN loss function in Keras's functional API

Asked: 2018-12-19 22:13:44

Tags: python tensorflow machine-learning keras deep-learning

I have been trying to build an infoGAN based on this, but that code is pure TensorFlow, and I really don't know how to implement Q_loss in Keras (preferably with the functional API).

Here is what I have so far:

import tensorflow as tf

# noise_dim, c_dim, get_optimizer() and clipping() are defined elsewhere

def G_loss(y_true, y_pred):
  return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y_pred,
                                                                labels=y_true))

def D_loss(y_true, y_pred):
  return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y_pred,
                                                                labels=y_true))

def Q_loss(y_true, y_pred):
  return ???  # this is the part I don't know how to write

def get_generator():
  inputs = tf.keras.Input(shape=(noise_dim+c_dim,))
  x = tf.keras.layers.Dense(256, kernel_initializer=tf.keras.initializers.RandomNormal(stddev=0.02))(inputs)
  x = tf.keras.layers.LeakyReLU(0.2)(x)
  x = tf.keras.layers.Dense(512)(x)
  x = tf.keras.layers.LeakyReLU(0.2)(x)
  out = tf.keras.layers.Dense(784, activation=tf.nn.tanh)(x)
  generator = tf.keras.Model(inputs=inputs, outputs=out)
  generator.compile(optimizer=get_optimizer(), loss=G_loss)
  return generator

def get_discriminator():
  inputs = tf.keras.Input(shape=(784,))
  x = tf.keras.layers.Dense(1024, kernel_initializer=tf.keras.initializers.RandomNormal(stddev=0.02),
                            kernel_constraint=clipping(0.01), bias_constraint=clipping(0.01))(inputs)
  x = tf.keras.layers.LeakyReLU(0.2)(x)
  x = tf.keras.layers.Dense(512, kernel_constraint=clipping(0.01),
                            bias_constraint=clipping(0.01))(x)
  x = tf.keras.layers.LeakyReLU(0.2)(x)
  x = tf.keras.layers.Dense(256, kernel_constraint=clipping(0.01),
                            bias_constraint=clipping(0.01))(x)
  x = tf.keras.layers.LeakyReLU(0.2)(x)
  critic_output = tf.keras.layers.Dense(1, kernel_constraint=clipping(0.01),
                                        bias_constraint=clipping(0.01), name="critic_output")(x)
  x = tf.keras.layers.Dense(256, kernel_constraint=clipping(0.01),
                            bias_constraint=clipping(0.01))(critic_output)
  x = tf.keras.layers.LeakyReLU(0.2)(x)
  outputs = tf.keras.layers.Dense(c_dim, kernel_constraint=clipping(0.01),
                                  bias_constraint=clipping(0.01))(x)
  discriminator = tf.keras.Model(inputs=inputs, outputs=[outputs, critic_output])
  discriminator.compile(optimizer=get_optimizer(), loss=D_loss)
  return discriminator

def get_gan(discriminator, generator):
  discriminator.trainable = False
  gan_input = tf.keras.Input(shape=(noise_dim+c_dim, ))
  gan_output = discriminator(generator(gan_input))
  gan = tf.keras.Model(inputs=gan_input, outputs=gan_output)
  gan.compile(optimizer=get_optimizer(), loss=Q_loss)
  return gan

generator = get_generator()
discriminator = get_discriminator()
full_gan = get_gan(discriminator, generator)

In an infoGAN, the discriminator has one loss function (here, the discriminator's actual output is named "critic_output"), and the Q network attached after it has another loss function (here, the two layers following the critic_output layer).

The value of G_loss is needed, as for example here (from the code linked at the start):

# Entropy of Q: lambda*L(G,Q)
q_H = tf.reduce_mean(lambd*tf.nn.sigmoid_cross_entropy_with_logits(logits = tf.nn.softmax(Qcx), 
                                                               labels = c_sim))
# infoGAN loss function: Loss = V(D,G) - lambda*L(G,Q)
q_loss = tf.abs((g_loss - q_H))

This is the part I don't know how to implement in Keras. I have already found out how to give the discriminator network multiple outputs: it has the critic output (the value needed to compute G_loss) and the Q-net output. But how would I implement this q_loss?
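For reference, the usual functional-API pattern for a multi-output model is to pass one loss per output to compile, by position or keyed by output layer name; Keras then sums the weighted per-output losses. A sketch under the assumption that each head has its own target (the layer sizes and names here are made up, not the exact ones from the code above):

```python
import tensorflow as tf

# Hypothetical two-headed model mirroring the discriminator above:
# a critic head and a Q head sharing a common trunk.
inp = tf.keras.Input(shape=(784,))
h = tf.keras.layers.Dense(256, activation="relu")(inp)
critic_out = tf.keras.layers.Dense(1, name="critic_output")(h)
q_out = tf.keras.layers.Dense(10, name="q_output")(h)
model = tf.keras.Model(inputs=inp, outputs=[critic_out, q_out])

# One loss per output, keyed by layer name
model.compile(
    optimizer="adam",
    loss={"critic_output": tf.keras.losses.BinaryCrossentropy(from_logits=True),
          "q_output": tf.keras.losses.CategoricalCrossentropy(from_logits=True)},
    loss_weights={"critic_output": 1.0, "q_output": 1.0},
)
```

Note that this gives each head an independent loss; the combined |g_loss - q_H| term couples the two heads, which the loss dict alone cannot express, so a mechanism like `model.add_loss` on an arbitrary tensor may be closer to what the infoGAN objective needs.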

I haven't tried it yet, but I know that a wrapper function can give a Keras loss function extra parameters, for example:

def G_loss(lambd):
    def loss(y_true, y_pred):
        return lambd * tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y_pred,
                                                                              labels=y_true))
    return loss

However, I have read that this only works with constants (hyperparameters). How can I feed it the value of the G_loss function?
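From what I can tell, the closure is not strictly limited to Python constants: it can also capture a tensor, as long as that tensor belongs to the same graph as y_true and y_pred (in TF 2.x eager mode any concrete tensor works). A hedged sketch combining this with the |g_loss - q_H| objective from the linked code; the wrapper name and the way g_loss is obtained are my own assumptions:

```python
import tensorflow as tf

# Sketch only: Q_loss closes over a g_loss tensor rather than a constant.
def make_Q_loss(g_loss_tensor, lambd=1.0):
    def loss(y_true, y_pred):
        # lambda * L(G, Q), as in the linked code
        q_H = lambd * tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(logits=y_pred,
                                                    labels=y_true))
        # infoGAN objective: |V(D, G) - lambda * L(G, Q)|
        return tf.abs(g_loss_tensor - q_H)
    return loss

# Usage with a dummy g_loss value standing in for the real generator loss:
q_loss_fn = make_Q_loss(tf.constant(1.0), lambd=0.5)
```

Whether Keras lets the captured tensor be the *live* generator loss at compile time is exactly the open question here; in TF 1.x-style graph code this pattern is known to work when both tensors come from the same graph.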

0 Answers

No answers yet.