Do tf.nn.l2_loss and tf.contrib.layers.l2_regularizer serve the same purpose of adding L2 regularization in TensorFlow?

Asked: 2017-06-10 13:58:41

Tags: python tensorflow deep-learning

It seems that L2 regularization in TensorFlow can be implemented in two ways:

(i) using tf.nn.l2_loss, or (ii) using tf.contrib.layers.l2_regularizer

Do both of these approaches serve the same purpose? If not, how do they differ?

1 answer:

Answer 0 (score: 7)

They do the same thing (at least for now). The only difference is that tf.contrib.layers.l2_regularizer multiplies the result of tf.nn.l2_loss by scale.
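For example, here is a minimal sketch checking that the two expressions give the same value (this assumes TensorFlow 1.x, where tf.contrib is still available; the weight values and scale below are arbitrary, chosen only for illustration):

import tensorflow as tf

# Arbitrary example weights and regularization strength (illustrative values only).
weights = tf.constant([[1.0, -2.0], [3.0, 0.5]])
scale = 0.01

# (i) scale the raw L2 loss yourself: scale * sum(weights ** 2) / 2
manual = scale * tf.nn.l2_loss(weights)

# (ii) let the regularizer do the same multiplication internally
regularized = tf.contrib.layers.l2_regularizer(scale)(weights)

with tf.Session() as sess:
    print(sess.run([manual, regularized]))  # both values are identical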

Take a look at the implementation of tf.contrib.layers.l2_regularizer [https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/contrib/layers/python/layers/regularizers.py]:

def l2_regularizer(scale, scope=None):
  """Returns a function that can be used to apply L2 regularization to weights.
  Small values of L2 can help prevent overfitting the training data.
  Args:
    scale: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
    scope: An optional scope name.
  Returns:
    A function with signature `l2(weights)` that applies L2 regularization.
  Raises:
    ValueError: If scale is negative or if scale is not a float.
  """
  if isinstance(scale, numbers.Integral):
    raise ValueError('scale cannot be an integer: %s' % (scale,))
  if isinstance(scale, numbers.Real):
    if scale < 0.:
      raise ValueError('Setting a scale less than 0 on a regularizer: %g.' %
                       scale)
    if scale == 0.:
      logging.info('Scale of 0 disables regularizer.')
      return lambda _: None

  def l2(weights):
    """Applies l2 regularization to weights."""
    with ops.name_scope(scope, 'l2_regularizer', [weights]) as name:
      my_scale = ops.convert_to_tensor(scale,
                                       dtype=weights.dtype.base_dtype,
                                       name='scale')
      return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

  return l2

The line you are interested in is:

  return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)

So in practice, tf.contrib.layers.l2_regularizer calls tf.nn.l2_loss internally and simply multiplies the result by the scale argument.
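The practical difference is mostly in how each one is typically wired into the training loss. Below is a rough sketch, again assuming TensorFlow 1.x; the variable name, shape, and scale are made up for illustration:

import tensorflow as tf

# Hypothetical weight variable; passing a regularizer here makes the
# variable_scope machinery add the scaled penalty to
# tf.GraphKeys.REGULARIZATION_LOSSES automatically.
w = tf.get_variable(
    'w', shape=[10, 1],
    regularizer=tf.contrib.layers.l2_regularizer(0.01))

data_loss = tf.constant(1.0)  # stand-in for the real model loss

# Option (ii): collect the penalties that the regularizer registered.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_loss_a = data_loss + tf.add_n(reg_losses)

# Option (i): call tf.nn.l2_loss yourself and apply the scale by hand.
total_loss_b = data_loss + 0.01 * tf.nn.l2_loss(w)

Both total losses end up with the same regularization term; the regularizer form just lets TensorFlow track the penalties in a collection instead of you adding them manually.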