Tensorflow gradients do not exist for bias in custom layer

Date: 2019-11-30 21:05:37

Tags: python tensorflow keras

I have built an input convex neural network in Tensorflow following this ArXiv paper; it is a scalar-output feed-forward model. The first hidden layer is dense, and the subsequent layers are custom layers that take two inputs: the output of the previous layer (the kernel input) and the model input (the passthrough). Separate weights are applied to each, which makes it possible to apply a positive-weight regularizer to the kernel weights but not to the passthrough weights. I compute the regularizer and add it with self.add_loss inside the custom layer's call method. I am also using custom activation functions: leaky ReLU and squared leaky ReLU.

When I train this network, I can compute a gradient for the bias in the first dense layer, but I get a warning that no gradient exists for the bias in the custom layers. When I add @tf.function to my activation functions the warning goes away, but the gradient is 0. Furthermore, loss.numpy() throws an error when I use @tf.function and run in a local Jupyter notebook (but not in Colab).
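
Roughly, a training step looks like this (a sketch rather than my exact code; `model`, `loss_fn`, `x_batch`, and `y_batch` are placeholders):

optimizer = tf.keras.optimizers.Adam()

with tf.GradientTape() as tape:
    y_pred = model(x_batch)                                     # scalar-output ICNN
    loss = loss_fn(y_batch, y_pred) + tf.add_n(model.losses)    # model.losses collects the add_loss() regularizers
grads = tape.gradient(loss, model.trainable_variables)
# The "gradients do not exist for variables [... bias ...]" warning is emitted
# for the custom-layer biases when the gradients are applied:
optimizer.apply_gradients(zip(grads, model.trainable_variables))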

Any ideas why a gradient exists for the bias in the dense layer but not in my custom layers, and how I can compute the bias gradient for all layers? A minimal working example is provided in the Colab notebook. Much appreciated!

Below is my custom layer. It is very similar to the standard dense layer.

# Imports the layer relies on (inferred from the code; it borrows internals from tf.keras.layers.Dense)
import tensorflow as tf
from tensorflow.keras import activations, constraints, initializers, regularizers
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer, InputSpec
from tensorflow.python.framework import dtypes, tensor_shape


class DensePartiallyConstrained(Layer):
    '''
    A custom layer inheriting from the `tf.keras.layers.Layer` class.
    This class is a fully-connected layer with two inputs. This allows
    for different constraints on the weights of each input. This enables
    a passthrough of the inputs to each hidden layer to have no
    weight constraints while the input from the previous layer can have
    a positive constraint. It also allows for different initializations
    of the weight values for each input.

    Most of this code and documentation was borrowed from the
    `tf.keras.layers.Dense` documentation on Github (thanks!).
    '''
    def __init__(self,
                 units,
                 activation = None,
                 use_bias = True,
                 kernel_initializer = 'glorot_uniform',
                 passthrough_initializer = 'glorot_uniform',
                 bias_initializer = 'zeros',
                 kernel_constraint = None,
                 passthrough_constraint = None,
                 bias_constraint = None,
                 activity_regularizer = None,
                 regularizer_constant = 1.0,
                 **kwargs):

        if 'input_shape' not in kwargs and 'input_dim' in kwargs:
            kwargs['input_shape'] = (kwargs.pop('input_dim'),)

        super(DensePartiallyConstrained, self).__init__(
                activity_regularizer = regularizers.get(activity_regularizer), **kwargs)

        self.units = int(units)
        self.activation = activations.get(activation)
        self.use_bias = use_bias
        self.kernel_initializer = initializers.get(kernel_initializer)
        self.passthrough_initializer = initializers.get(passthrough_initializer)
        self.bias_initializer = initializers.get(bias_initializer)
        self.kernel_constraint = constraints.get(kernel_constraint)
        self.passthrough_constraint = constraints.get(passthrough_constraint)
        self.bias_constraint = constraints.get(bias_constraint)

        # This is for add_loss in call() method
        self.regularizer_constant = regularizer_constant

        # Copied from the Dense implementation: lets Keras propagate masks through this layer
        self.supports_masking = True

        self.kernel_input_spec = InputSpec(min_ndim=2)
        self.passthrough_input_spec = InputSpec(min_ndim=2)


    def build(self, input_shape):
        # Input shapes provided as list [kernel, passthrough]
        kernel_input_shape, passthrough_input_shape = input_shape

        # Check for proper datatype
        dtype = dtypes.as_dtype(self.dtype or K.floatx())
        if not (dtype.is_floating or dtype.is_complex):
          raise TypeError('Unable to build `DensePartiallyConstrained` layer with non-floating point '
                          'dtype %s' % (dtype,))

        # Check kernel input dimensions
        kernel_input_shape = tensor_shape.TensorShape(kernel_input_shape)
        if tensor_shape.dimension_value(kernel_input_shape[-1]) is None:
          raise ValueError('The last dimension of the inputs to `DensePartiallyConstrained` '
                           'should be defined. Found `None`.')
        kernel_last_dim = tensor_shape.dimension_value(kernel_input_shape[-1])
        self.kernel_input_spec = InputSpec(min_ndim=2,
                                    axes={-1: kernel_last_dim})

        # Check passthrough input dimensions
        passthrough_input_shape = tensor_shape.TensorShape(passthrough_input_shape)
        if tensor_shape.dimension_value(passthrough_input_shape[-1]) is None:
          raise ValueError('The last dimension of the inputs to `DensePartiallyConstrained` '
                           'should be defined. Found `None`.')
        passthrough_last_dim = tensor_shape.dimension_value(passthrough_input_shape[-1])
        self.passthrough_input_spec = InputSpec(min_ndim=2,
                                    axes={-1: passthrough_last_dim})

        # Add weights to kernel (between layer connections)
        self.kernel = self.add_weight(name = 'kernel',
                                      shape = [kernel_last_dim, self.units],
                                      initializer = self.kernel_initializer,
                                      constraint = self.kernel_constraint,
                                      dtype = self.dtype,
                                      trainable = True)
        # Add weight to input passthrough
        self.passthrough = self.add_weight(name = 'passthrough',
                                      shape = [passthrough_last_dim, self.units],
                                      initializer = self.passthrough_initializer,
                                      constraint = self.passthrough_constraint,
                                      dtype = self.dtype,
                                      trainable = True)
        # Add weights to bias
        if self.use_bias:
            self.bias = self.add_weight(name = 'bias',
                                        shape = [self.units,],
                                        initializer = self.bias_initializer,
                                        constraint = self.bias_constraint,
                                        dtype = self.dtype,
                                        trainable = True)
        else:
            self.bias = None

        self.built = True

        super(DensePartiallyConstrained, self).build(input_shape)


    def call(self, inputs):
        # Inputs provided as list [kernel, passthrough]
        kernel_input, passthrough_input = inputs

        # Calculate weights regularizer
        self.add_loss(self.regularizer_constant * tf.reduce_sum(tf.square(tf.math.maximum(tf.negative(self.kernel), 0.0))))

        # Calculate layer output
        outputs = tf.add(tf.matmul(kernel_input, self.kernel), tf.matmul(passthrough_input, self.passthrough))

        if self.use_bias:
            outputs = tf.add(outputs, self.bias)

        if self.activation is not None:
            return self.activation(outputs)
        return outputs

My activation functions:

#@tf.function
def squared_leaky_ReLU(x, alpha = 0.2):
    return tf.square(tf.maximum(x, alpha * x))
#@tf.function
def leaky_ReLU(x, alpha = 0.2):
    return tf.maximum(x, alpha * x)
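
For completeness, the layers and activations above are wired together roughly like this (a sketch; the layer widths, depth, and choice of activation per layer are illustrative, not my exact model):

def build_icnn(input_dim, width = 64, depth = 3):
    # Model input y is passed through to every hidden layer
    y = tf.keras.Input(shape = (input_dim,))

    # First hidden layer is an ordinary dense layer
    z = tf.keras.layers.Dense(width, activation = leaky_ReLU)(y)

    # Subsequent layers take [previous output (kernel), model input (passthrough)]
    for _ in range(depth - 1):
        z = DensePartiallyConstrained(width, activation = squared_leaky_ReLU)([z, y])

    # Scalar output
    out = DensePartiallyConstrained(1, activation = None)([z, y])
    return tf.keras.Model(inputs = y, outputs = out)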

EDIT: After updating tensorflow, I can now access loss.numpy() when using @tf.function with my activation functions. This returns gradients of 0 for the bias in all of my custom layers.
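
To be concrete, this is how I am inspecting the gradients (a sketch; `model` and `grads` come from a training step like the one sketched above):

for var, grad in zip(model.trainable_variables, grads):
    print(var.name, None if grad is None else float(tf.reduce_max(tf.abs(grad))))
# Without @tf.function on the activations, the custom-layer bias gradients are None (hence the warning);
# with @tf.function they come back as all-zero tensors.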

I am starting to think that the missing gradient for the bias term in my custom layers might be related to my loss function: minimax loss, where the regularizer is the regularization of only the weights in the custom-layer kernels. The loss for g(x) is based on the gradient with respect to the inputs, so it contains no information about the bias (the bias in f(x) updates normally). Still, if that is the case, I don't understand why the bias in the first hidden dense layer of g(y) does get updated? The networks are identical, apart from f(x) having a positive constraint on its kernel weights.
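
Roughly, the gradient-dependent part of that loss is computed like this (a sketch only; `f_model`, `g_model`, and `y` are placeholders, and the exact minimax objective is omitted):

with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as inner_tape:
        inner_tape.watch(y)
        g_out = g_model(y)                       # scalar-output ICNN g(y)
    grad_g = inner_tape.gradient(g_out, y)       # g only enters the loss through dg/dy
    # Illustrative gradient-based term; the real objective also involves f(x) and the
    # positive-weight regularizer collected in g_model.losses
    loss = tf.reduce_mean(f_model(grad_g)) + tf.add_n(g_model.losses)
grads = outer_tape.gradient(loss, g_model.trainable_variables)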

0 Answers:

There are no answers yet.