TensorFlow slim batch_norm layer creates no update ops

Time: 2018-04-04 09:31:26

Tags: tensorflow

I am trying to use the slim.batch_norm layer, and I am sure I set is_training to True, but after the batch_norm layer is constructed, tf.get_collection(tf.GraphKeys.UPDATE_OPS) returns an empty list. I am really confused, because the documentation of slim.batch_norm says:

  Note: when training, the moving_mean and moving_variance need to be updated.
  By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`

Can anyone help?
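For context, a minimal sketch of the behavior the documentation describes (assuming TF 1.x with tf.contrib.slim; the placeholder shape is arbitrary and only for illustration):

import tensorflow as tf

slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
net = slim.batch_norm(inputs, is_training=True)

# After the layer has been constructed, the moving_mean / moving_variance
# update ops should show up in the default collection:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
print(update_ops)  # expected to be non-empty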

2 answers:

Answer 0 (score: 1)

When you create the optimizer, you should do it as follows:

optimizer = tf.train.AdagradOptimizer(SOME_LEARNING_RATE)
gradients = optimizer.compute_gradients(loss=loss, var_list=variables)

# Collect the moving-average update ops created by slim.batch_norm.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

# Make the train step depend on them so they run on every optimization step.
with tf.control_dependencies(update_ops):
    train_step = optimizer.apply_gradients(grads_and_vars=gradients)
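The tf.control_dependencies block is the important part: the update ops for moving_mean and moving_variance are not reachable from the loss, so without this dependency they are created but never executed, and the moving averages stay at their initial values during training.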

Answer 1 (score: -1)

I guess you are calling tf.get_collection(tf.GraphKeys.UPDATE_OPS) before constructing the batch_norm layer, so in the end the collection contains no update ops.
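In other words, the order matters (a short sketch, again assuming TF 1.x with tf.contrib.slim and an arbitrary input shape):

import tensorflow as tf

slim = tf.contrib.slim
inputs = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])

# Queried before any batch_norm layer exists: nothing has been added yet.
print(tf.get_collection(tf.GraphKeys.UPDATE_OPS))  # []

net = slim.batch_norm(inputs, is_training=True)

# Queried after construction: the moving-average update ops are present.
print(tf.get_collection(tf.GraphKeys.UPDATE_OPS))  # non-empty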