Saving and restoring a CNN-based denoising network in TensorFlow

Date: 2019-02-02 17:20:11

Tags: python tensorflow conv-neural-network

My question is about restoring a trained denoising model. I defined the network as follows:

Conv1 -> relu1 -> Conv2 -> relu2 -> Conv3 -> relu3 -> Deconv1

Each layer above is defined inside its own tf.variable_scope(name).

I then defined the loss, optimizer, and accuracy with tf.name_scope.

When I try to restore the loss operation, it even asks for labels (which I don't have at test time):

feed_dict = {x: input, y: labels}
sess.run('loss', feed_dict)

Can anyone help me understand how to run testing? Which operation should I restore?

Do I have to call all the layers, pass in the input, and check the loss (MSE)?

I have checked many examples, but they all seem to be classification problems that end with a softmax defined over logits.

Edit: my code is below, so it is easy to see how tf.name_scope and tf.variable_scope are defined. I feel I may need the whole stack of layers to test a new image. Is that right?

def new_conv_layer(input, num_input_channels, filter_size, num_filters, name):

    with tf.variable_scope(name):
        # Shape of the filter-weights for the convolution
        shape = [filter_size, filter_size, num_input_channels, num_filters]

        # Create new weights (filters) with the given shape
        weights = tf.Variable(tf.truncated_normal(shape, stddev=0.5))

        # Create new biases, one for each filter
        biases = tf.Variable(tf.constant(0.05, shape=[num_filters]))

        # TensorFlow operation for convolution
        layer = tf.nn.conv2d(input=input, filter=weights, strides=[1, 1, 1, 1], padding='SAME')

        # Add the biases to the results of the convolution
        layer += biases

        return layer, weights

def new_relu_layer(input, name):

    with tf.variable_scope(name):
        # TensorFlow operation for ReLU
        layer = tf.nn.relu(input)

        return layer
def new_pool_layer(input, name):

    with tf.variable_scope(name):
        # TensorFlow operation for max pooling
        layer = tf.nn.max_pool(value=input, ksize=[1, 1, 1, 1], strides=[1, 1, 1, 1], padding='SAME')

        return layer

def new_layer(inputs, filters, kernel_size, strides, padding, name):

    with tf.variable_scope(name):
        # TensorFlow operation for transposed convolution (deconvolution)
        layer = tf.layers.conv2d_transpose(inputs=inputs, filters=filters, kernel_size=kernel_size,
                                           strides=strides, padding=padding, data_format='channels_last')

        return layer







layer_conv1, weights_conv1 = new_conv_layer(input=yTraininginput, num_input_channels=1, filter_size=5, num_filters=32, name ="conv1")
layer_relu1 = new_relu_layer(layer_conv1, name="relu1")

layer_conv2, weights_conv2 = new_conv_layer(input=layer_relu1, num_input_channels=32, filter_size=5, num_filters=64, name ="conv2")
layer_relu2 = new_relu_layer(layer_conv2, name="relu2")


layer_conv3, weights_conv3 = new_conv_layer(input=layer_relu2, num_input_channels=64, filter_size=5, num_filters=128, name ="conv3")
layer_relu3 = new_relu_layer(layer_conv3, name="relu3")


layer_deconv1 = new_layer(inputs=layer_relu3, filters=1,  kernel_size=[5,5] ,strides=[1,1] ,padding='same',name = 'deconv1')
layer_relu4 = new_relu_layer(layer_deconv1, name="relu4")


layer_conv4, weights_conv4 = new_conv_layer(input=layer_relu4, num_input_channels=1, filter_size=5, num_filters=128, name ="conv4")
layer_relu5 = new_relu_layer(layer_conv4, name="relu5")


layer_deconv2 = new_layer(inputs=layer_relu5, filters=1,  kernel_size=[5,5] ,strides=[1,1] ,padding='same',name = 'deconv2')
layer_relu6 = new_relu_layer(layer_deconv2, name="relu6")





# Use mean squared error as the cost function
with tf.name_scope("loss"):
    loss = tf.losses.mean_squared_error(labels=xTraininglabel, predictions=layer_relu6)


# Use Adam Optimizer
with tf.name_scope("optimizer"):
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-6).minimize(loss=loss)


# Accuracy (PSNR)
with tf.name_scope("accuracy"):
    accuracy = tf.image.psnr(a=layer_relu6, b=xTraininglabel, max_val=1.0)
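Before the model can be restored at all, the trained variables have to be written to a checkpoint with a `tf.train.Saver`. A minimal, self-contained sketch of that step (the tiny stand-in network and the checkpoint path are illustrative assumptions, not the question's actual model):

```python
# Sketch: saving a trained TF 1.x graph so it can be restored later.
# Uses the compat layer so it also runs on TF 2.x installs.
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

ckpt = os.path.join(tempfile.mkdtemp(), "denoise_model")  # assumed path

x = tf.placeholder(tf.float32, [None, 4], name="x")       # stand-in input
w = tf.Variable(tf.truncated_normal([4, 4], stddev=0.5))
output = tf.matmul(x, w, name="output")                   # stand-in "last layer"

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... the training loop (sess.run(optimizer, ...)) would go here ...
    saved_path = saver.save(sess, ckpt)  # writes .meta, .index and .data files
    print(saved_path)
```

The `.meta` file written here is what `tf.train.import_meta_graph` later uses to rebuild the graph structure without rerunning the model-definition code.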

1 Answer:

Answer 0 (score: 0):

Try viewing your code's graph in TensorBoard and get the operation name from the last layer (deconv2 in your case), as shown in the image below. Then try loading the tensor with code like this:

operation = graph.get_tensor_by_name("<operation_name>:0")

This should work, since your layers are connected to each other.

Let me know if this works!
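The suggestion above can be sketched end to end: save a checkpoint, then restore the graph in a fresh session and run only the last layer's tensor by name, so no labels placeholder ever has to be fed. The tensor names (`"x:0"`, `"output:0"`), the toy one-conv network, and the temp path are illustrative assumptions; check the real names in TensorBoard or via `[n.name for n in graph.as_graph_def().node]`.

```python
# Self-contained sketch: restore a checkpoint and run only the output op.
import os
import tempfile

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

ckpt = os.path.join(tempfile.mkdtemp(), "denoise_model")

# --- "training" side: a toy one-conv denoiser, saved to disk ---
with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, [None, 8, 8, 1], name="x")
    kernel = tf.Variable(tf.ones([3, 3, 1, 1]))
    conv = tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding="SAME")
    output = tf.identity(conv, name="output")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        tf.train.Saver().save(sess, ckpt)

# --- "test" side: restore the graph and run ONLY the output tensor ---
with tf.Graph().as_default() as graph, tf.Session() as sess:
    saver = tf.train.import_meta_graph(ckpt + ".meta")
    saver.restore(sess, ckpt)
    x_t = graph.get_tensor_by_name("x:0")         # input placeholder
    out_t = graph.get_tensor_by_name("output:0")  # last layer's tensor
    noisy = np.random.rand(1, 8, 8, 1).astype(np.float32)
    denoised = sess.run(out_t, feed_dict={x_t: noisy})  # no labels fed
    print(denoised.shape)
```

Because only `out_t` is requested, TensorFlow evaluates just the subgraph feeding the output layer; the loss node (and its labels placeholder) is never touched, which resolves the original error.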

Operation Image