Defining a Siamese network in TensorFlow

Time: 2018-01-15 16:20:06

Tags: tensorflow machine-learning computer-vision deep-learning conv-neural-network

I have asked this question before, but one specific query in it was never answered.

I am trying to define a Siamese network in TensorFlow as follows:

def conv(self, x, num_out_maps, ksize, stride, activation_fn=tf.nn.relu):
    # Pad manually so the VALID convolution below reproduces
    # 'SAME'-style output sizes (exact for odd kernel sizes)
    padding_length = np.floor((ksize - 1) / 2).astype(np.int32)
    padded_input = tf.pad(x, [[0, 0], [padding_length, padding_length], [padding_length, padding_length], [0, 0]])
    return slim.conv2d(padded_input, num_out_maps, ksize, stride, padding='VALID', activation_fn=activation_fn)
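As a quick standalone sanity check of the padding arithmetic (plain Python; the function names are mine, not part of the question's code): padding each side by floor((k-1)/2) and then convolving with padding='VALID' yields the same output length as padding='SAME' whenever the kernel size is odd, for any stride.

```python
import math

def same_out(n, k, s):
    # Output length of slim.conv2d(..., padding='SAME')
    return math.ceil(n / s)

def manual_pad_out(n, k, s):
    # Output length of the pad-then-VALID scheme above:
    # pad = floor((k - 1) / 2) on each side, then a VALID convolution
    p = (k - 1) // 2
    return (n + 2 * p - k) // s + 1

# For odd kernel sizes the two agree for every input size and stride.
for n in range(1, 40):
    for k in (1, 3, 5, 7):
        for s in (1, 2, 3):
            assert manual_pad_out(n, k, s) == same_out(n, k, s)
```

(For even kernel sizes, or strides greater than 1, the placement of the padded pixels can differ slightly from TensorFlow's asymmetric 'SAME' padding, even though the output size matches for odd kernels.)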

def resconv(self, x, num_out_maps, ksize, stride):
    # Our residual block is: conv-relu-conv, then element-wise sum

    # Compare static shapes: tf.shape(x) returns a tensor, so combining it
    # with Python's `or` and later testing `flag == 1` does not evaluate
    # the channel count; use the graph-construction-time shape instead.
    needs_projection = x.get_shape().as_list()[3] != num_out_maps or stride != 1

    conv1 = self.conv(x, num_out_maps, ksize, stride)
    # The second conv must use stride 1: striding both convs would
    # downsample twice and the sum with the shortcut would not line up.
    conv2 = self.conv(conv1, num_out_maps, ksize, 1, activation_fn=None)

    if needs_projection:
        # Linear 1x1 projection so the shortcut matches conv2's shape
        shortcut = self.conv(x, num_out_maps, 1, stride, activation_fn=None)
    else:
        shortcut = x

    return shortcut + conv2

def resblock(self, x, num_blocks, num_out_maps, ksize, stride):
    out = x
    for i in range(num_blocks):
        out = self.resconv(out, num_out_maps, ksize, stride)
    return out

def get_features(self, input_image):
    conv1 = self.conv(input_image, 32, 5, 2, activation_fn=None)
    # 5 residual blocks
    out = self.resblock(conv1, 5, 32, 3, 1)
    return out
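Tracing shapes through get_features() (a small standalone sketch; the helper name is mine): the 5x5/stride-2 conv halves each spatial dimension (rounding up, given 'SAME'-style padding) and produces 32 channels, and the five stride-1 residual blocks leave the spatial size unchanged.

```python
import math

def feature_map_shape(h, w):
    # conv 5x5, stride 2            -> ceil(h/2) x ceil(w/2) x 32
    # 5 residual blocks, 3x3/stride 1 -> spatial size unchanged, 32 channels
    return math.ceil(h / 2), math.ceil(w / 2), 32
```

So for a 224x224 input each tower emits a 112x112x32 feature map.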

def build_model1(self): # Siamese Code
    with tf.variable_scope('siamese', reuse=False):
        self.left_features = self.get_features(self.left)
    with tf.variable_scope('siamese', reuse=True):
        self.right_features = self.get_features(self.right)

As you can see, I define the Siamese network in the build_model1() function. The helper functions get_features(), conv(), resblock() and resconv() are included for completeness.

My question is: is my implementation correct? I keep seeing people define a Siamese network by initializing the weights and biases with tf.get_variable() (for example, in the answer to this SO question). I assume I don't have to use tf.get_variable('weights', shape=(x, y), ..) myself, because slim.conv2d() presumably does this internally. Please help me clear up this doubt.

0 answers:

There are no answers yet.