Channels-first convolution layers in Tensorflow

Asked: 2018-08-25 13:11:46

Tags: python tensorflow deep-learning conv-neural-network

To generate tests for another system I'm building, I'm trying to construct a 1-layer ConvNet in Tensorflow and then extract the weights, biases, and outputs.

My system operates in channels-first mode, so I'd like the Tensorflow graph to run that way as well. However, since Tensorflow defaults to channels-last, my graph doesn't work as is.

I have some random input data generated with numpy:

import numpy as np

batch_size = 1
image_size = 5
image_channel = 3
shape = (batch_size, image_channel, image_size, image_size)
inputs = np.float32(np.random.random_sample(shape))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print(np.array2string(inputs, separator=', '))

From this I defined my network structure:

import tensorflow as tf

def network(x, mode_name):
    num_filters = 4
    filter_size = 2
    input_channels = 3
    stride = 1
    conv1 = conv_layer(x, conv_size=[filter_size, filter_size, input_channels, num_filters], 
                       stride_size=[1, 1, stride, stride], name=mode_name + "_conv1")
    return conv1

def conv_layer(prev_layer, conv_size, stride_size, name):
    W_initer = tf.random_uniform_initializer(dtype=tf.float32)
    W = tf.get_variable(name + '_W', dtype=tf.float32, shape=conv_size,
                        initializer=W_initer)

    bias_init = tf.constant(np.float32(np.random.random_sample(conv_size[3])))
    b = tf.get_variable(name + '_b', dtype=tf.float32, initializer=bias_init)
    return tf.nn.conv2d(prev_layer, W, stride_size, data_format='NCHW', padding='VALID') + b
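
(One thing worth noting about the plain `+ b` in `conv_layer`: with NCHW data, a rank-1 bias broadcasts along the last axis, which is width, not channels. A minimal numpy sketch of the shapes involved, assuming 4 filters and a 4×4 output as in the question; in Tensorflow itself, reshaping the bias to `(1, C, 1, 1)` — or, I believe, `tf.nn.bias_add` with `data_format='NCHW'` — gives the per-channel behaviour instead:)

```python
import numpy as np

num_filters = 4
# NCHW conv output: (batch, channels, height, width)
conv_out = np.zeros((1, num_filters, 4, 4), dtype=np.float32)
b = np.arange(num_filters, dtype=np.float32)  # shape (4,)

# A plain add broadcasts b along the last (width) axis -- wrong for NCHW:
naive = conv_out + b

# Reshaping to (1, C, 1, 1) broadcasts along the channel axis instead:
per_channel = conv_out + b.reshape(1, num_filters, 1, 1)

print(naive[0, 0, 0])           # varies along width:    [ 0.  1.  2.  3.]
print(per_channel[0, :, 0, 0])  # varies along channels: [ 0.  1.  2.  3.]
```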

My Tensorflow graph:

graph = tf.Graph()

with graph.as_default():
    tf.set_random_seed(1)

    with tf.variable_scope("simple_cnn") as scope:
        outputs = network(inputs, "simple_cnn")

    with tf.name_scope('init'):
        init_op = tf.global_variables_initializer()

I then thought I would extract the tensors like this:

with tf.Session(graph=graph) as sess:
    sess.run(init_op)
    kernels = tf.trainable_variables()[0]
    biases = tf.trainable_variables()[1]
    print("kernels:")
    print(sess.run(kernels))
    print("\nbiases:")
    print(sess.run(biases))
    print("\noutputs")
    print(sess.run(outputs))

Unfortunately, this final operation fails with the error `Generic conv implementation only supports NHWC tensor format for now.` It seems this used to be an issue with Tensorflow, but I'm not sure whether it still applies. Does anyone know a way to get this running on a CPU?
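
(For what it's worth, one workaround I've seen suggested for CPU-only setups is to keep the graph's inputs and outputs in NCHW but transpose around the convolution itself, so that `tf.nn.conv2d` runs with its default `data_format='NHWC'`. A numpy sketch of just the layout round-trip, using the same shapes as above:)

```python
import numpy as np

x_nchw = np.random.random_sample((1, 3, 5, 5)).astype(np.float32)

# NCHW -> NHWC before the conv...
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))
print(x_nhwc.shape)  # (1, 5, 5, 3)

# ...and NHWC -> NCHW after it, restoring the original layout.
back = np.transpose(x_nhwc, (0, 3, 1, 2))
print(back.shape)    # (1, 3, 5, 5)
```

In the graph this would be `tf.transpose` with the same permutations, wrapped around the `tf.nn.conv2d` call (with the filter and strides left in their NHWC conventions).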

Alternatively, running the network in channels-last mode, with input data of shape [batch_size, image_size, image_size, image_channel], would still give me a valid test, provided I reshape the tensors like this:

print("inputs")
inputs_chan_first = np.rollaxis(inputs, 1, 3)
print(inputs_chan_first)

print("\noutputs")
outputs_chan_first = np.rollaxis(outputs, 1, 3)
print(outputs_chan_first)

while leaving my bias and kernel tensors unchanged?
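
(One thing worth double-checking in that alternative: `np.rollaxis(inputs, 1, 3)` moves the channel axis to position 2, which produces NHCW rather than NHWC. A quick shape check under the shapes from the question, with `np.rollaxis(inputs, 1, 4)` or an explicit `np.transpose` giving the intended layout:)

```python
import numpy as np

inputs = np.random.random_sample((1, 3, 5, 5)).astype(np.float32)  # NCHW

print(np.rollaxis(inputs, 1, 3).shape)           # (1, 5, 3, 5) -- NHCW, not NHWC
print(np.rollaxis(inputs, 1, 4).shape)           # (1, 5, 5, 3) -- NHWC
print(np.transpose(inputs, (0, 2, 3, 1)).shape)  # (1, 5, 5, 3) -- equivalent
```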

0 answers:

No answers yet