Converting convolutional layers from Caffe to Tensorflow

Asked: 2016-12-12 10:46:25

Tags: machine-learning neural-network tensorflow caffe

I'm trying to implement Richard Zhang's colorization model (http://richzhang.github.io/colorization/) in Tensorflow, and I've been working on reproducing the architecture. Essentially, I'm converting the authors' Caffe model to Tensorflow: I've gotten through most of the conversion, but my architecture appears to be wrong (I get dimension errors in the optimization step). I've been trying to debug why it doesn't work with little success, so I'd like to take a step back and make sure I'm implementing the model correctly.

For example, here is one of the authors' convolutional blocks:

layer {
  name: "bw_conv1_1"
  type: "Convolution"
  bottom: "data_l"
  top: "conv1_1"
  # param {lr_mult: 0 decay_mult: 0} 
  # param {lr_mult: 0 decay_mult: 0} 
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "conv1_1"
  top: "conv1_1"
}
layer {
  name: "conv1_2"
  type: "Convolution"
  bottom: "conv1_1"
  top: "conv1_2"
  # param {lr_mult: 0 decay_mult: 0} 
  # param {lr_mult: 0 decay_mult: 0} 
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "relu1_2"
  type: "ReLU"
  bottom: "conv1_2"
  top: "conv1_2"
}
layer {
  name: "conv1_2norm"
  type: "BatchNorm"
  bottom: "conv1_2"
  top: "conv1_2norm"
  batch_norm_param{ }
  param {lr_mult: 0 decay_mult: 0}
  param {lr_mult: 0 decay_mult: 0}
  param {lr_mult: 0 decay_mult: 0}
}    
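
As a sanity check on the dimensions, with a 256×256 input I expect the following spatial sizes, using Caffe's output-size formula out = floor((in + 2*pad - kernel) / stride) + 1:

# conv1_1:     floor((256 + 2*1 - 3) / 1) + 1 = 256  ->  256 x 256 x 64
# conv1_2:     floor((256 + 2*1 - 3) / 2) + 1 = 128  ->  128 x 128 x 64
# conv1_2norm: BatchNorm, shape unchanged            ->  128 x 128 x 64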

And here is my attempt at the corresponding Tensorflow code:

import tensorflow as tf

# helper functions
def conv(x, W, stride):
    return tf.nn.conv2d(x, W, strides=[1, stride, stride, 1],
                        padding='SAME')

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# conv layer 1 code
Wconv1_1 = weight_variable([3, 3, 1, 64])
bconv1_1 = bias_variable([64])

# x is the input value with shape [n, 256, 256, 1]
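# (x is created elsewhere, roughly as x = tf.placeholder(tf.float32, [None, 256, 256, 1]))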
Rconv1_1 = tf.nn.relu(conv(x, Wconv1_1, 1) + bconv1_1)

Wconv1_2 = weight_variable([3, 3, 64, 64])
bconv1_2 = bias_variable([64])

Rconv1_2 = tf.nn.relu(conv(Rconv1_1, Wconv1_2, 2) + bconv1_2)

Rnorm1 = batch_norm(Rconv1_2, 64, phase_train)

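The batch_norm call above refers to a helper roughly like the following (a simplified sketch of the usual Tensorflow pattern built on tf.nn.moments and an ExponentialMovingAverage; my actual version may differ slightly, and phase_train is a boolean placeholder):

# Sketch of the batch_norm helper referenced above. Normalizes a 4-D conv
# feature map over the batch, height, and width dimensions.
def batch_norm(x, n_out, phase_train, eps=1e-5, decay=0.9):
    beta = tf.Variable(tf.zeros([n_out]), name='beta')    # learned shift
    gamma = tf.Variable(tf.ones([n_out]), name='gamma')   # learned scale
    batch_mean, batch_var = tf.nn.moments(x, [0, 1, 2], name='moments')

    # Track moving averages of the batch statistics for use at test time.
    ema = tf.train.ExponentialMovingAverage(decay=decay)

    def mean_var_with_update():
        ema_apply_op = ema.apply([batch_mean, batch_var])
        with tf.control_dependencies([ema_apply_op]):
            return tf.identity(batch_mean), tf.identity(batch_var)

    # Use batch statistics during training, moving averages at inference.
    mean, var = tf.cond(phase_train,
                        mean_var_with_update,
                        lambda: (ema.average(batch_mean),
                                 ema.average(batch_var)))
    return tf.nn.batch_normalization(x, mean, var, beta, gamma, eps)
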
Does this look like it's on the right track? I want to make sure I'm not badly misunderstanding anything about Caffe or Tensorflow, since I'm fairly new to both. Thanks!

0 Answers:

No answers yet