TensorFlow: Batch Norm breaks the network when is_training = False

Posted: 2017-05-26 23:25:05

Tags: tensorflow

I am trying to use TensorFlow-Slim's batch norm layer like this:

net = ...
net = slim.batch_norm(net, scale = True, is_training = self.isTraining,
    updates_collections = None, decay = 0.9)
net = tf.nn.relu(net)
net = ...

I train with:

self.optimizer = slim.learning.create_train_op(self.model.loss,
    tf.train.MomentumOptimizer(learning_rate = self.learningRate,
    momentum = 0.9, use_nesterov = True))

optimizer = self.sess.run([self.optimizer],
    feed_dict={self.model.isTraining:True})

I load the saved weights with:

net = model.Model(sess,width,height,channels,weightDecay)

savedWeightsDir = './savedWeights/'
saver = tf.train.Saver(max_to_keep = 5)
checkpointStr = tf.train.latest_checkpoint(savedWeightsDir)
sess.run(tf.global_variables_initializer())
saver.restore(sess, checkpointStr)
global_step = tf.contrib.framework.get_or_create_global_step()

I run inference with:

inf = self.sess.run([self.softmax],
    feed_dict = {self.imageBatch:imageBatch,self.isTraining:False})

Of course I've left a lot out and paraphrased some of the code, but I believe that is everything batch norm touches. The strange part is that I get better results if I set isTraining:True at inference time. It could be something with loading the weights; maybe the batch norm statistics aren't being saved? Is there anything obviously wrong in the code? Thanks.
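One way to check the "not saved" theory is to list what the checkpoint actually contains. A minimal sketch (TF 1.x), reusing checkpointStr from the loading code above; the moving statistics should appear as moving_mean / moving_variance variables:

import tensorflow as tf

# List every variable stored in the checkpoint and flag the
# batch norm moving statistics.
reader = tf.train.NewCheckpointReader(checkpointStr)
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    if 'moving_mean' in name or 'moving_variance' in name:
        print('BN stat:', name, shape)
    else:
        print(name, shape)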

1 Answer:

Answer 0 (score: 0)

I just ran into the same problem and found the solution here. The problem comes from the tf.layers.batch_normalization layer, which needs to update its moving_mean and moving_variance during training.
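Concretely, each training step those update ops apply an exponential moving average to the stored statistics. A plain-Python sketch of the update rule (the 0.9 decay matches the layer in the question):

def ema_update(moving_stat, batch_stat, decay = 0.9):
    # One EMA step, as applied to moving_mean and moving_variance.
    return decay * moving_stat + (1.0 - decay) * batch_stat

If these update ops never run, the moving statistics stay at their initial values (mean 0, variance 1), so the layer normalizes with the wrong statistics when training is False, which is exactly the symptom above.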

To do this correctly, you need to modify your training procedure to:

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    self.optimizer = slim.learning.create_train_op(self.model.loss,
      tf.train.MomentumOptimizer(learning_rate = self.learningRate,
      momentum = 0.9, use_nesterov = True))

Or, more generally, from the documentation:

  x_norm = tf.layers.batch_normalization(x, training=training)

  # ...

  update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
  with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)
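
For completeness, here is a self-contained sketch of the whole pattern (TF 1.x; the shapes, data, and variable names are made up for illustration). Train with the update ops wired in, then flip the same placeholder to False for inference:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
labels = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool, [])

h = tf.layers.dense(x, 8)
h = tf.layers.batch_normalization(h, training = is_training, momentum = 0.9)
h = tf.nn.relu(h)
logits = tf.layers.dense(h, 2)
probs = tf.nn.softmax(logits)
loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)

# Wire the moving_mean / moving_variance updates into the train op.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.MomentumOptimizer(0.01, 0.9,
        use_nesterov = True).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    data = np.random.randn(32, 4).astype(np.float32)
    lbls = np.random.randint(0, 2, size = 32).astype(np.int64)
    for _ in range(100):
        sess.run(train_op, {x: data, labels: lbls, is_training: True})
    # The moving statistics are now populated, so inference with
    # is_training = False behaves as expected.
    out = sess.run(probs, {x: data, is_training: False})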