Why does tf.layers.batch_normalization only work in inference mode when is_training is True?

Asked: 2018-08-22 01:44:42

Tags: tensorflow

def _batch_norm(self, name, x, is_training=True):
  """Batch normalization.

  Considering the performance, we use batch_normalization in
  contrib/layers/python/layers/layers.py instead of
  tf.nn.batch_normalization, and set fused=True.

  Args:
    x: input tensor
    is_training: whether to return the output in training mode or in
                 inference mode; use this argument when fine-tuning.
  """
  with tf.variable_scope(name):
    return tf.layers.batch_normalization(
        inputs=x,
        axis=1 if self._data_format == 'NCHW' else 3,
        momentum=0.997,
        epsilon=1e-5,
        center=True,
        scale=True,
        training=is_training,
        fused=True)

This is my batch-norm code. When training the model, I set is_training=True. But when I run inference, if I set is_training=False the model outputs wrong values; if I instead keep is_training=True, it works in inference mode. Does anyone know why?

1 Answer:

Answer 0 (score: 0)

See the big blue "Note" box at https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization

You need to run the update ops (e.g., pass them to Session.run, or make your train op depend on them); otherwise the moving mean/variance are never updated, so inference mode cannot normalize correctly.
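To illustrate, here is a minimal sketch of the fix the note describes: attach the `tf.GraphKeys.UPDATE_OPS` collection as a control dependency of the train op, so the BN moving statistics are refreshed on every training step and `training=False` then works at inference time. The layer arguments (momentum, data shape, optimizer) are illustrative, not taken from the question; the code is written against the `tf.compat.v1` API so it also runs under TF 2.x, while in native TF 1.x the `compat.v1` prefix can be dropped.

```python
import numpy as np
import tensorflow as tf

# Graph-mode setup (tf.compat.v1 so this also runs under TF 2.x).
tf.compat.v1.disable_eager_execution()
tf.compat.v1.reset_default_graph()

x = tf.compat.v1.placeholder(tf.float32, [None, 4])
is_training = tf.compat.v1.placeholder(tf.bool)

bn = tf.compat.v1.layers.batch_normalization(x, training=is_training,
                                             momentum=0.9)
loss = tf.reduce_mean(tf.square(bn))

# The crucial part: make the train step depend on the BN update ops,
# so moving_mean / moving_variance are refreshed every step. Without
# this, they stay at their initial values (0 and 1) and inference mode
# normalizes with the wrong statistics.
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.compat.v1.train.GradientDescentOptimizer(0.01).minimize(loss)

# Toy data whose mean (~5) is far from the initial moving_mean (0).
data = np.random.randn(64, 4).astype(np.float32) + 5.0

sess = tf.compat.v1.Session()
sess.run(tf.compat.v1.global_variables_initializer())
for _ in range(200):
    sess.run(train_op, {x: data, is_training: True})

# After training, the moving statistics should be close to the data
# statistics, so inference mode (training=False) normalizes correctly.
moving_mean = sess.run([v for v in tf.compat.v1.global_variables()
                        if 'moving_mean' in v.name][0])
out = sess.run(bn, {x: data, is_training: False})
sess.close()
```

If you cannot (or do not want to) wrap the optimizer, the equivalent alternative is to fetch the update ops explicitly, e.g. `sess.run([train_op] + update_ops, ...)`, on every training step.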