Different batch sizes produce different results with inception_v2 during testing

Time: 2017-07-06 02:07:10

Tags: testing tensorflow

I use inception_v2 as the base network for classification. During training, batch_size = 128.

During testing, everything works if batch_size = 128. But if batch_size is smaller than 128, the results differ, and accuracy drops as the batch size decreases. With batch_size = 1, the network fails completely. I also tried inception_v3 and inception_v1, and the same problem appears. However, if the base network is replaced with AlexNet (tensorflow), everything works fine. I also replaced inception_v2 with vgg (slim), and that works fine too.

I think the bug is related to inception_v1~v3, or to batch normalization. Maybe I am not using inception_v2 correctly. Has anyone run into a similar problem?
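The batch-normalization suspicion is worth testing in isolation. In training mode, batch norm normalizes each sample with the *batch's own* mean and variance, so a sample's output changes depending on what else is in the batch (and degenerates entirely at batch size 1, where the batch variance is zero). A minimal NumPy sketch (not slim's implementation, just the normalization formula) shows the effect:

```python
import numpy as np

def batch_norm_train(x, eps=1e-3):
    """Batch norm in training mode: normalize with *batch* statistics."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(0)
batch128 = rng.normal(5.0, 2.0, size=(128, 4))  # stands in for a 128-input batch
batch1 = batch128[:1]                           # the same first input, alone

out128 = batch_norm_train(batch128)[0]  # first sample, normalized within a full batch
out1 = batch_norm_train(batch1)[0]      # same sample, normalized alone -> all zeros

print(np.allclose(out128, out1))  # False: the output depends on batch composition
```

This matches the reported symptoms: results drift as the batch shrinks, and a batch of 1 produces a degenerate (all-zero) normalized activation.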

    def net(inputs, num_classes=2, batch_size=128, dropout=0.8, height=224, width=224):
        with slim.arg_scope(inception_v2.inception_v2_arg_scope()):
            _, end_points = inception_v2.inception_v2(
                inputs=inputs, num_classes=1001, is_training=True,
                spatial_squeeze=False)
        kernel_size = inception_v2._reduced_kernel_size_for_small_input(
            end_points['Mixed_5c'], [7, 7])
        net = slim.avg_pool2d(end_points['Mixed_5c'], kernel_size,
                              padding='VALID', stride=1, scope='avgpool')
        net = slim.dropout(net, keep_prob=dropout, scope='Dropout_2b')
        end_points['Dropout_2b'] = net
        # regression layer
        with tf.variable_scope('Regresion') as scope:
            inputchannel = 1024
            stddev = (2.0 / inputchannel) ** 0.5
            logits = slim.conv2d(end_points['Dropout_2b'], num_classes, [1, 1],
                                 activation_fn=None, normalizer_fn=None,
                                 weights_initializer=trunc_normal(stddev),
                                 scope='regresion_layer')
        print("inception_v2 finished!")
        return logits, end_points

During testing, is_training = False.
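With is_training = False, batch norm should instead use the moving averages accumulated during training, which makes each sample's output independent of the batch it arrives in. A minimal NumPy sketch of inference-mode normalization (the moving statistics here are made-up placeholder values, not ones from a real checkpoint):

```python
import numpy as np

def batch_norm_infer(x, moving_mean, moving_var, eps=1e-3):
    """Batch norm in inference mode: normalize with stored moving averages."""
    return (x - moving_mean) / np.sqrt(moving_var + eps)

rng = np.random.default_rng(1)
batch128 = rng.normal(5.0, 2.0, size=(128, 4))
moving_mean = np.full(4, 5.0)  # placeholder "learned" statistics
moving_var = np.full(4, 4.0)

out128 = batch_norm_infer(batch128, moving_mean, moving_var)[0]
out1 = batch_norm_infer(batch128[:1], moving_mean, moving_var)[0]

print(np.allclose(out128, out1))  # True: batch size no longer matters
```

If results still vary with batch size at test time, one likely cause is that the graph was built with is_training=True hardcoded (as in the net() function above), so the flag set at test time never reaches the batch-norm layers.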

    restore_list = [v for v in tf.trainable_variables()
                    if v.name.startswith("InceptionV2")]
    saver_googlenet = tf.train.Saver(var_list=restore_list)  # var_list=restore_list
    saver_googlenet.restore(sess, 'inception_v2.ckpt')

    saver_all = tf.train.Saver(var_list=tf.global_variables(), max_to_keep=20)

0 Answers:

There are no answers yet