Batch normalization leads to a huge difference between training and inference loss

Date: 2018-09-11 15:56:33

Tags: python tensorflow batch-normalization

I followed the instructions on the TensorFlow documentation page for tf.layers.batch_normalization: training is set to True during training and to False during inference (valid and test).

However, batch normalization always gives me a huge difference between the training and validation loss, for example:

2018-09-11 09:19:39: step 591, loss 0.967038, acc 0.630745
2018-09-11 09:19:40: step 592, loss 1.26836, acc 0.406095
2018-09-11 09:19:40: step 593, loss 1.33029, acc 0.536824
2018-09-11 09:19:41: step 594, loss 0.809579, acc 0.651354
2018-09-11 09:19:41: step 595, loss 1.41018, acc 0.491683
2018-09-11 09:19:42: step 596, loss 1.37515, acc 0.462998
2018-09-11 09:19:42: step 597, loss 0.972473, acc 0.663277
2018-09-11 09:19:43: step 598, loss 1.01062, acc 0.624355
2018-09-11 09:19:43: step 599, loss 1.13029, acc 0.53893
2018-09-11 09:19:44: step 600, loss 1.41601, acc 0.502889
Round 2: valid
Loading from valid, 1383 samples available
2018-09-11 09:19:44: step 600, loss 23242.2, acc 0.204348
2018-09-11 09:19:44: step 600, loss 22038, acc 0.196325
2018-09-11 09:19:44: step 600, loss 22223, acc 0.0991791
2018-09-11 09:19:44: step 600, loss 22039.2, acc 0.220871
2018-09-11 09:19:45: step 600, loss 25587.3, acc 0.155427
2018-09-11 09:19:45: step 600, loss 12617.7, acc 0.481486
2018-09-11 09:19:45: step 600, loss 17226.6, acc 0.234989
2018-09-11 09:19:45: step 600, loss 18530.3, acc 0.321573
2018-09-11 09:19:45: step 600, loss 21043.5, acc 0.157935
2018-09-11 09:19:46: step 600, loss 17232.6, acc 0.412151
2018-09-11 09:19:46: step 600, loss 28958.8, acc 0.297459
2018-09-11 09:19:46: step 600, loss 22603.7, acc 0.146518
2018-09-11 09:19:46: step 600, loss 29485.6, acc 0.266186
2018-09-11 09:19:46: step 600, loss 26039.7, acc 0.215589

Sometimes it can be even worse (for another model):

2018-09-11 09:22:34: step 993, loss 1.23001, acc 0.488638
2018-09-11 09:22:35: step 994, loss 0.969551, acc 0.567364
2018-09-11 09:22:35: step 995, loss 1.31113, acc 0.5291
2018-09-11 09:22:35: step 996, loss 1.03135, acc 0.607861
2018-09-11 09:22:35: step 997, loss 1.16031, acc 0.549255
2018-09-11 09:22:36: step 998, loss 1.42303, acc 0.454694
2018-09-11 09:22:36: step 999, loss 1.33105, acc 0.496234
2018-09-11 09:22:36: step 1000, loss 1.14326, acc 0.527387
Round 4: valid
Loading from valid, 1383 samples available
2018-09-11 09:22:36: step 1000, loss 44.3765, acc 0.000743037
2018-09-11 09:22:36: step 1000, loss 36.9143, acc 0.0100708
2018-09-11 09:22:37: step 1000, loss 35.2007, acc 0.0304909
2018-09-11 09:22:37: step 1000, loss 39.9036, acc 0.00510307
2018-09-11 09:22:37: step 1000, loss 42.2612, acc 0.000225067
2018-09-11 09:22:37: step 1000, loss 29.9964, acc 0.0230831
2018-09-11 09:22:37: step 1000, loss 28.1444, acc 0.00278473

The batch normalization code I used:

def bn(inp, train_flag, name=None):
    return tf.layers.batch_normalization(inp, training=train_flag, name=name)

def gn(inp, groups=32):
    return tf.contrib.layers.group_norm(inp, groups=groups)

def conv(*args, padding='same', with_relu=True, with_bn=False,
         train_flag=None, with_gn=False, name=None, **kwargs):
    # positional args: inp, filters, kernel_size, strides
    use_bias = False if with_bn else True  # drop the conv bias when BN provides its own offset
    x = tf.layers.conv2d(*args, **kwargs, padding=padding,
                         kernel_initializer=xavier_initializer(),
                         use_bias=use_bias, name=name)
    if with_bn:
        bn_name = name + '/batchnorm' if name is not None else None
        x = bn(x, train_flag, name=bn_name)
    if with_gn: x = gn(x)
    if with_relu: x = relu(x)
    return x

The update ops for batch normalization are collected and used as control dependencies:

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
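For reference, the full pattern from the TensorFlow documentation looks roughly like the sketch below; the optimizer and the loss are illustrative names, not the code from the question.

# Sketch of the documented update_ops pattern (illustrative names only).
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    # the batch-norm moving mean/variance updates run before each weight update
    train_op = optimizer.minimize(loss)  # `loss` is a placeholder name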

After removing the batch normalization layers, the huge difference between training and validation loss disappears.

The momentum optimizer is used for optimization.
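A hypothetical sketch of such an optimizer setup; the hyper-parameter values are assumptions for illustration, not taken from the question.

# Assumed momentum optimizer; learning rate and momentum value are illustrative.
optim = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)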

The model is trained from scratch, without any transfer learning.

I followed the question Batch Normalization layer gives significant difference between train and validation loss on the exact same data and tried its suggestion of decreasing the batch normalization momentum, but that did not help either.

I am wondering why this happens. Any suggestions would be greatly appreciated.

Added: train_flag is a placeholder used throughout the whole model.

2 Answers:

Answer 0 (score: 2)

Since you did not provide the full code or a link to it, I have to ask the following:

How do you feed train_flag?

The correct way is to make train_flag a tf.placeholder. There are other ways, but this is the simplest. You can then feed it with a plain Python bool.
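A minimal sketch of this approach; the surrounding names (net, inputs, labels, train_op, sess) are assumptions for illustration.

# Boolean placeholder controlling batch-norm behaviour.
train_flag = tf.placeholder(tf.bool, shape=[], name='train_flag')
net = tf.layers.batch_normalization(net, training=train_flag)

# Training step: feed a plain Python bool.
sess.run(train_op, feed_dict={inputs: x_batch, labels: y_batch, train_flag: True})
# Validation step: same graph and variables, only the flag changes.
sess.run([loss, acc], feed_dict={inputs: x_val, labels: y_val, train_flag: False})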

If you manually set train_flag=True during training and train_flag=False during validation, that could be the source of the problem. I do not see reuse=tf.AUTO_REUSE anywhere in your code, which means that during validation, when you set train_flag=False, a separate layer is created that does not share weights with the layer used during training.
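For illustration, one way to make both passes share the same variables is to build them inside a scope with reuse=tf.AUTO_REUSE; the function and tensor names below are assumptions, not the asker's code.

def build_model(images, is_training):
    # All variables (conv kernels, batch-norm beta/gamma, moving statistics)
    # live in the single 'model' scope and are reused on the second call.
    with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
        x = tf.layers.conv2d(images, 64, 3, padding='same', use_bias=False, name='conv1')
        x = tf.layers.batch_normalization(x, training=is_training, name='conv1/batchnorm')
        return tf.nn.relu(x)

train_out = build_model(train_images, is_training=True)   # creates the variables
valid_out = build_model(valid_images, is_training=False)  # reuses the same variables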

The reason the problem disappears when you do not use batch normalization is that, in that case, the convolutional layers do not need train_flag at all, so the same layers are used in both cases and everything works.

This is speculation based on the observations above.

Answer 1 (score: 1)

In my case, the mistake was that I called update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) only once, too early in the graph construction.

With multiple GPUs, tf.get_collection(tf.GraphKeys.UPDATE_OPS) needs to be called for each GPU, after that GPU's subnetwork has been defined and before compute_gradients is called. In addition, after all the subnetwork towers have been merged, it needs to be called once more before apply_gradients.
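A hedged sketch of that per-GPU variant, using the scope argument of tf.get_collection to grab only the update ops belonging to each tower; build_tower, average_gradients, and the other names are assumptions for illustration.

all_update_ops, tower_grads = [], []
for i in range(num_gpus):
    with tf.device('/gpu:%d' % i), \
         tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE), \
         tf.name_scope('tower_%d' % i) as tower_scope:
        loss_i = build_tower(tower_inputs[i], tower_labels[i])  # assumed helper
        # update ops created by this tower's batch-norm layers only
        tower_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope=tower_scope)
        all_update_ops += tower_update_ops
        with tf.control_dependencies(tower_update_ops):
            tower_grads.append(optim.compute_gradients(loss_i))

# After all towers are merged, run every update op before applying the averaged gradients.
with tf.control_dependencies(all_update_ops):
    train_op = optim.apply_gradients(average_gradients(tower_grads))  # assumed helper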

An alternative is to call update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) only after the whole network (including all subnetworks) has been defined, so that it returns all of the current update_ops. In that case two for loops are needed: one to define the subnetworks and one to compute the gradients.

An example:

# Multiple GPUs
tmp, l = [], 0
for i in range(opt.gpu_num):
    r = min(l + opt.batch_split, opt.batchsize)
    with tf.device('/gpu:%d' % i), \
         tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):

        print("Setting up networks on GPU", i)
        inp_ = tf.identity(inps[l:r])
        label_ = tf.identity(labels[l:r])
        for j, val in enumerate(setup_network(inp_, label_)): # loss, pred, accuracy
            if i == 0: tmp += [[]] # [[], [], []]
            tmp[j] += [val]
    l = r

tmp += [[]]  # extra slot that will hold the per-GPU gradients
# Collect update_ops only after the whole network has been defined
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)  # e.g. batch-norm moving-average updates
for i in range(opt.gpu_num):
    with tf.device('/gpu:%d' % i), \
         tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):

        print("Setting up gradients on GPU", i)
        tmp[-1] += [setup_grad(optim, tmp[0][i])]

Added:

I have also added the setup_grad function:

def setup_grad(optim, loss):
    # `compute_gradients` will only run after update_ops have executed
    with tf.control_dependencies(update_ops):
        update_vars = None
        if opt.to_train is not None:
            update_vars = [tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=s)
                           for s in opt.to_train]
        total_loss = loss[0] + opt.seg_weight * loss[1]
        return optim.compute_gradients(total_loss, var_list=update_vars)

and, for reference, the later apply_gradients call:

# `apply_gradients` will only run after update_ops have executed
with tf.control_dependencies(update_ops):
    if opt.clip_grad:
        grads = [(tf.clip_by_value(grad[0], -opt.clip_grad, opt.clip_grad), grad[1])
                 if grad[0] is not None else grad
                 for grad in grads]
    train_op = optim.apply_gradients(grads, global_step=global_step)

If the batch size on each GPU is small, batch normalization may not help much, because TensorFlow currently does not support synchronizing the batch normalization statistics across GPUs.