How to unfreeze layers in Inception v3 for transfer learning?

Asked: 2018-05-18 16:02:36

Tags: python tensorflow

Following the tutorial, I was able to retrain the last layer on my own images. I would like to know whether I can train multiple layers instead.

The structure of Inception v3 in that tutorial looks like this:

(image: graph structure of Inception v3 from the tutorial)

My idea is to take the bottleneck at mixed_9 instead of at pool_3, and then re-add the structure of mixed_10 and pool_3 in retrain.py.

So I call the following function before add_final_training_ops().


```python
def repeate_last_two_layers(bottleneck_tensor, bottleneck_tensor_size):
    """Adds the last mixed_10 and pool_3 into training."""
    # Use bottleneck to freeze all layers before this one
    with tf.name_scope('new_input'):
        bottle_in = tf.placeholder_with_default(
            bottleneck_tensor,
            shape=[None, 8, 8, bottleneck_tensor_size],
            name='NewBottleneckInputPlaceholder')

    # Use bottleneck input to re-construct mixed_10
    end_point = 'Mixed_7c'
    with tf.name_scope(end_point):
        with tf.variable_scope('Branch_0'):
            branch_0 = slim.conv2d(bottle_in, 320, [1, 1], scope='Conv2d_0a_1x1')
        with tf.variable_scope('Branch_1'):
            branch_1 = slim.conv2d(bottle_in, 384, [1, 1], scope='Conv2d_0a_1x1')
            branch_1 = tf.concat(axis=3, values=[
                slim.conv2d(branch_1, 384, [1, 3], scope='Conv2d_0b_1x3'),
                slim.conv2d(branch_1, 384, [3, 1], scope='Conv2d_0c_3x1')])
        with tf.variable_scope('Branch_2'):
            branch_2 = slim.conv2d(bottle_in, 448, [1, 1], scope='Conv2d_0a_1x1')
            branch_2 = slim.conv2d(branch_2, 384, [3, 3], scope='Conv2d_0b_3x3')
            branch_2 = tf.concat(axis=3, values=[
                slim.conv2d(branch_2, 384, [1, 3], scope='Conv2d_0c_1x3'),
                slim.conv2d(branch_2, 384, [3, 1], scope='Conv2d_0d_3x1')])
        with tf.variable_scope('Branch_3'):
            branch_3 = slim.avg_pool2d(bottle_in, [3, 3], scope='AvgPool_0a_3x3')
            branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_0b_1x1')
        net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])

    # Global pooling
    with tf.name_scope('NewPool_3'):
        net = tf.reduce_mean(net, [1, 2], keep_dims=True, name='GlobalPool')
        # 1 x 1 x 2048
        net = slim.dropout(net, keep_prob=dropout_keep_prob, scope='Dropout_1b')
        # 2048
        logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
                             normalizer_fn=None, scope='Conv2d_1c_1x1')
        logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
    return logits
```

The structure of Mixed_10 follows the definition in inception_v3.py.
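One thing worth noting: in the slim model zoo's inception_v3.py, the branch ops are built inside a `slim.arg_scope` that sets `stride=1, padding='SAME'` for `slim.conv2d`, `slim.max_pool2d`, and `slim.avg_pool2d`. Called outside such a scope, `slim.avg_pool2d` uses its own defaults (`stride=2, padding='VALID'`), which shrinks Branch_3's 8x8 grid. A minimal sketch of the output-size arithmetic, in plain Python using the standard 'SAME'/'VALID' formulas (`pool_out` is a hypothetical helper, not a TF function; the exact size in a real traceback depends on your TF version's op defaults):

```python
import math

def pool_out(size, kernel, stride, padding):
    """One spatial dimension of a pool/conv output, per TF's padding rules."""
    if padding == 'SAME':
        return math.ceil(size / stride)      # input is zero-padded as needed
    return (size - kernel) // stride + 1     # 'VALID': no padding

# Inside inception_v3.py's arg_scope (stride=1, padding='SAME'):
print(pool_out(8, 3, 1, 'SAME'))    # 8 -- Branch_3 keeps the 8x8 grid

# With slim.avg_pool2d's own defaults (stride=2, padding='VALID'):
print(pool_out(8, 3, 2, 'VALID'))   # 3 -- no longer matches the other branches
```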

It now gives me this error:


```
ValueError: Dimension 1 in both shapes must be equal, but are 8 and 2.
Shapes are [?,8,8] and [?,2,2]. for 'Mixed_7c/Branch_3/concat' (op: 'ConcatV2')
with input shapes: [?,8,8,320], [?,8,8,768], [?,8,8,768], [?,2,2,192], []
and with computed input tensors: input[4] = <3>.
```
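The error is `tf.concat`'s shape rule: every dimension except the concatenation axis must agree across all inputs, and Branch_3's spatial dimensions (2x2) no longer match the 8x8 of the other branches. A small stand-alone checker mirrors the rule (`concat_shape` is a hypothetical helper written for illustration, not a TF API):

```python
def concat_shape(shapes, axis):
    """Resulting shape of concatenating tensors along `axis`.

    Mirrors tf.concat's rule: all dimensions other than `axis` must
    match (None, the unknown batch dimension, matches anything).
    """
    out = list(shapes[0])
    for s in shapes[1:]:
        for d, (a, b) in enumerate(zip(out, s)):
            if d != axis and None not in (a, b) and a != b:
                raise ValueError('dimension %d mismatch: %d vs %d' % (d, a, b))
        out[axis] += s[axis]
    return out

# The four branch outputs reported in the traceback:
branches = [(None, 8, 8, 320), (None, 8, 8, 768),
            (None, 8, 8, 768), (None, 2, 2, 192)]
try:
    concat_shape(branches, axis=3)
except ValueError as e:
    print(e)  # dimension 1 mismatch: 8 vs 2
```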

My questions are:

1. Is this the right way to retrain more layers?
2. How can I copy the structure of Mixed_10 and pool_3 to the end of the graph?

Any suggestions are appreciated!

0 Answers:

There are no answers yet.