Writing the following CNN in TensorFlow

Date: 2018-04-23 16:41:16

Tags: tensorflow deep-learning nvidia-digits

I am new to deep learning. I am learning the basics by reading and trying to implement a real network to see whether it actually works. I chose NVIDIA DIGITS with TensorFlow and the network from Steganalysis with DL, because it gives the exact architecture along with training material. Based on the architecture in Steganalysis with DL and the existing example networks in the DIGITS and TensorFlow documentation, I wrote the following code.

from model import Tower
from utils import model_property
import tensorflow as tf
import tensorflow.contrib.slim as slim
import utils as digits

class UserModel(Tower):

    @model_property
    def inference(self):
        x = tf.reshape(self.x, shape=[-1, self.input_shape[0], self.input_shape[1], self.input_shape[2]])
        with slim.arg_scope([slim.conv2d, slim.fully_connected],
                            weights_initializer=tf.contrib.layers.xavier_initializer(),
                            weights_regularizer=slim.l2_regularizer(0.0001)):
            conv1 = tf.layers.conv2d(inputs=x, filters=64, kernel_size=7, padding='same', strides=2, activation=tf.nn.relu)
            rnorm1 = tf.nn.local_response_normalization(input=conv1)
            conv2 = tf.layers.conv2d(inputs=rnorm1, filters=16, kernel_size=5, padding='same', strides=1, activation=tf.nn.relu)
            rnorm2 = tf.nn.local_response_normalization(input=conv2) 
            flatten = tf.contrib.layers.flatten(rnorm2)
            fc1 = tf.contrib.layers.fully_connected(inputs=flatten, num_outputs=1000, activation_fn=tf.nn.relu)
            fc2 = tf.contrib.layers.fully_connected(inputs=fc1, num_outputs=1000, activation_fn=tf.nn.relu)
            fc3 = tf.contrib.layers.fully_connected(inputs=fc2, num_outputs=2)
            sm = tf.nn.softmax(fc3)
            return fc3

    @model_property
    def loss(self):
        model = self.inference
        loss = digits.classification_loss(model, self.y)
        accuracy = digits.classification_accuracy(model, self.y)
        self.summaries.append(tf.summary.scalar(accuracy.op.name, accuracy))
        return loss

I tried running it, but the accuracy is very low. Can someone tell me whether I have got it completely wrong, or what is wrong with it, and show me how to code it correctly?

Update: Thanks Nessuno! With the fixes you mentioned, I came up with this code:

from model import Tower
from utils import model_property
import tensorflow as tf
import tensorflow.contrib.slim as slim
import utils as digits

class UserModel(Tower):

    @model_property
    def inference(self):
        x = tf.reshape(self.x, shape=[-1, self.input_shape[0], self.input_shape[1], self.input_shape[2]])
        with slim.arg_scope([slim.conv2d, slim.fully_connected],
                            weights_initializer=tf.contrib.layers.xavier_initializer(),
                            weights_regularizer=slim.l2_regularizer(0.00001)):
            conv1 = tf.layers.conv2d(inputs=x, filters=64, kernel_size=7, padding='Valid', strides=2, activation=tf.nn.relu)
            rnorm1 = tf.nn.local_response_normalization(input=conv1)
            conv2 = tf.layers.conv2d(inputs=rnorm1, filters=16, kernel_size=5, padding='Valid', strides=1, activation=tf.nn.relu)
            rnorm2 = tf.nn.local_response_normalization(input=conv2) 
            flatten = tf.contrib.layers.flatten(rnorm2)
            fc1 = tf.contrib.layers.fully_connected(inputs=flatten, num_outputs=1000, activation_fn=tf.nn.relu)
            fc2 = tf.contrib.layers.fully_connected(inputs=fc1, num_outputs=1000, activation_fn=tf.nn.relu)
            fc3 = tf.contrib.layers.fully_connected(inputs=fc2, num_outputs=2, activation_fn=None)
            return fc3

    @model_property
    def loss(self):
        model = self.inference
        loss = digits.classification_loss(model, self.y)
        accuracy = digits.classification_accuracy(model, self.y)
        self.summaries.append(tf.summary.scalar(accuracy.op.name, accuracy))
        return loss

The solver type is SGD. The learning rate is 0.001. I am shuffling the training data. I increased the training data to 6,000 images (3,000 per class, with 20% used for validation). I downloaded the training data from this link. But I only get the following graph. I think it is overfitting. Do you have any suggestions to improve the validation accuracy?

Graph

1 Answer:

Answer 0 (score: 1)

In NVIDIA DIGITS, classification_loss, exactly like TensorFlow's tf.nn.softmax_cross_entropy_with_logits, expects a layer of linear neurons (raw logits) as input.
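To see concretely what "expects raw logits" means, here is a small NumPy sketch (not DIGITS code; the array values are made up for illustration) of the computation a softmax cross-entropy loss performs internally on the linear outputs:

```python
import numpy as np

# Hypothetical raw (linear) outputs of the last layer for one sample,
# and its one-hot label. A softmax-cross-entropy loss applies log-softmax
# internally, so the network must NOT apply softmax itself.
logits = np.array([2.0, -1.0])
label = np.array([1.0, 0.0])

z = logits - logits.max()                   # shift for numerical stability
log_softmax = z - np.log(np.exp(z).sum())   # log of the softmax probabilities
loss = -(label * log_softmax).sum()         # cross-entropy with the one-hot label
print(round(loss, 4))                       # -> 0.0486
```

Because the log-softmax is applied inside the loss, passing already-normalized probabilities instead of logits silently computes a different quantity.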

Instead, you are passing sm = tf.nn.softmax(fc3) as input, so you are applying the softmax operation twice. This is the reason for your low accuracy.
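Why does a second softmax hurt? A softmax output already lies in [0, 1] and sums to 1, so feeding it into another softmax squashes the distribution toward uniform and shrinks the useful gradient signal. A small NumPy illustration (the logit values are made up):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

logits = np.array([4.0, 1.0, 0.5])
once = softmax(logits)   # sharply peaked, as intended
twice = softmax(once)    # applied again: nearly uniform

print(once.round(3))
print(twice.round(3))
```

The first application produces a confident prediction (top probability above 0.9); applying softmax to that result flattens it to roughly a coin flip across classes, which is why training barely moves.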

To fix this, just change the model's output layer to:
fc3 = slim.fully_connected(fc2, 2, activation_fn=None, scope='fc3')
return fc3