I have the following code snippet:
interpolates = alpha*real_data + ((1-alpha)*fake_data)
disc_interpolates = Discriminator(interpolates)
# first derivative of the critic output w.r.t. the interpolated samples
gradients = tf.gradients(disc_interpolates, [interpolates])[0]
# attempt at a second derivative: differentiate the first gradient again
second_grad = tf.gradients(gradients[0], [interpolates])[0]
def Discriminator(inputs):
    # reshape the flat input into NCHW images
    output = tf.reshape(inputs, [-1, 3, 32, 32])
    output = lib.ops.conv2d.Conv2D('Discriminator.1', 3, DIM, 5, output, stride=2)
    output = LeakyReLU(output)
    output = lib.ops.conv2d.Conv2D('Discriminator.2', DIM, 2*DIM, 5, output, stride=2)
    output = LeakyReLU(output)
    output = lib.ops.conv2d.Conv2D('Discriminator.3', 2*DIM, 4*DIM, 5, output, stride=2)
    output = LeakyReLU(output)
    # flatten and map each sample to a single scalar critic score
    output = tf.reshape(output, [-1, 4*4*4*DIM])
    output = lib.ops.linear.Linear('Discriminator.Output', 4*4*4*DIM, 1, output)
    return tf.reshape(output, [-1])
Here Discriminator is the neural network defined above. The first call to tf.gradients works fine: I get non-zero gradient values back in the gradients variable. However, whenever I try to compute the second derivative by applying tf.gradients to the gradients variable, the result is always a zero vector. I don't expect that, since the network shouldn't be completely linear. Am I using tf.gradients incorrectly?
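
For reference, this is the nested tf.gradients pattern I would expect to work, shown as a minimal standalone TF 1.x sketch on a smooth nonlinear toy function (this toy graph is only my own illustration and does not use the lib.ops code above):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])
y = tf.reduce_sum(x ** 3)                 # smooth, nonlinear scalar function of x

dy_dx = tf.gradients(y, [x])[0]           # analytically 3*x^2
d2y_dx2 = tf.gradients(dy_dx, [x])[0]     # analytically 6*x, so it should be non-zero

with tf.Session() as sess:
    print(sess.run([dy_dx, d2y_dx2], feed_dict={x: [1.0, 2.0]}))
    # I would expect roughly [3., 12.] for the first derivative and [6., 12.] for the second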