Batch normalization with a pretrained VGG in TensorFlow

Date: 2019-03-06 17:39:24

Tags: tensorflow batch-normalization

I have a naive question about how to apply batch normalization in TensorFlow. Explanations, sample code, and links would be appreciated.

To use dropout, we can specify the keep probability as an argument when calling the model, like this:

    with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=args.weight_decay)):
        model_outputs, _ = vgg.vgg_16(x_inputs, num_classes=TOT_CLASSES,
                                      is_training=True,
                                      dropout_keep_prob=args.DROPOUT_PROB)

1 - Do we have something similar for batch normalization?
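
For context, this is what I imagine the batch-norm analogue might look like. It is only a sketch of my guess: vgg.vgg_arg_scope has no batch-norm switch, so I wrap the call in an extra arg_scope that attaches slim.batch_norm to the conv/fc layers that vgg_16 builds, and the values in batch_norm_params are my own assumptions, not something taken from the model code:

    import tensorflow as tf
    import tensorflow.contrib.slim as slim
    from nets import vgg  # from tensorflow/models/research/slim

    # Assumed batch-norm settings; is_training would be switched off at eval time.
    batch_norm_params = {
        'is_training': True,
        'decay': 0.997,
        'epsilon': 1e-5,
        'updates_collections': tf.GraphKeys.UPDATE_OPS,
    }

    with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=args.weight_decay)):
        # Attach slim.batch_norm to every conv/fc layer created inside vgg_16.
        with slim.arg_scope([slim.conv2d, slim.fully_connected],
                            normalizer_fn=slim.batch_norm,
                            normalizer_params=batch_norm_params):
            model_outputs, _ = vgg.vgg_16(x_inputs, num_classes=TOT_CLASSES,
                                          is_training=True,
                                          dropout_keep_prob=args.DROPOUT_PROB)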

2 - If I want to follow the instructions at this link, https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization, do I need to change the network code itself, i.e. https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py,

OR

can I first apply my_inputs_norm = tf.layers.batch_normalization(x, training=training) to the inputs before calling the model, like this:

    my_inputs_norm = tf.layers.batch_normalization(x, training=training)
    with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=args.weight_decay)):
        model_outputs, _ = vgg.vgg_16(my_inputs_norm, num_classes=TOT_CLASSES,
                                      is_training=True,
                                      dropout_keep_prob=args.DROPOUT_PROB)
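
If this second approach is viable, my reading of the linked batch_normalization documentation is that the moving-mean/variance update ops also have to be attached to the training step explicitly. A rough sketch of how I would wire that up (optimizer and loss stand in for my own training objects):

    # Per the tf.layers.batch_normalization docs, the update ops for the moving
    # statistics are collected in tf.GraphKeys.UPDATE_OPS and must run with the
    # train step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = optimizer.minimize(loss)  # 'optimizer' and 'loss' are my own objects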

0 Answers:

There are no answers yet.