TensorFlow example, but with a middle layer

Date: 2017-07-07 16:43:46

Tags: python tensorflow mnist

I want to get this code working. It may not look like it, but it comes mostly from the TensorFlow MNIST example. I am trying to get three layers, but I have changed the input and output sizes: the input size is 12, the middle size is 6, and the output size is 2. Here is what happens when I run it: it does not throw an error, but when I run the test option I always get 50% accuracy (chance level for two output classes). When I go back to training it runs, and I am sure the weights are changing. There is code for saving the model and the weights, so I am fairly confident that restarting it each time is not wiping out my weights. The idea behind self.d_y_out is to let me run the model and get the output for just one image. I think the problem is near the comment that says "PROBLEM??".

        self.d_keep = tf.placeholder(tf.float32)  # dropout keep probability
        self.d_W_2 = tf.Variable(tf.random_normal([mid_num, output_num], stddev=0.0001))
        self.d_b_2 = tf.Variable(tf.random_normal([output_num], stddev=0.5))

        self.d_x = tf.placeholder(tf.float32, [None, input_num])
        self.d_W_1 = tf.Variable(tf.random_normal([input_num, mid_num], stddev=0.0001))  # 0.0004
        self.d_b_1 = tf.Variable(tf.zeros([mid_num]))

        self.d_y_ = tf.placeholder(tf.float32, [None, output_num])  # one-hot target labels

        self.d_x_drop = tf.nn.dropout(self.d_x, self.d_keep)  # dropout applied directly to the raw input

        self.d_y_logits_1 = tf.matmul(self.d_x_drop, self.d_W_1) + self.d_b_1
        self.d_y_mid = tf.nn.relu(self.d_y_logits_1)  # hidden-layer activations
        self.d_y_mid_drop = tf.nn.dropout(self.d_y_mid, self.d_keep)

        self.d_y_logits_2 = tf.matmul(self.d_y_mid_drop, self.d_W_2) + self.d_b_2

        # Despite its name, d_y_softmax holds the per-example cross-entropy
        # losses returned by softmax_cross_entropy_with_logits, not softmax
        # probabilities.
        self.d_y_softmax = tf.nn.softmax_cross_entropy_with_logits(logits=self.d_y_logits_2, labels=self.d_y_)

        self.d_cross_entropy = tf.reduce_mean(self.d_y_softmax)  ## PROBLEM??

        self.d_train_step = tf.train.GradientDescentOptimizer(0.001).minimize(self.d_cross_entropy)  # 0.0001

        # train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) #0.5

        #self.d_y_out = tf.argmax(self.d_y, 1)  ## for prediction
        # argmax over the logits picks the same class as argmax over the
        # softmax, so no explicit softmax is needed for prediction.
        self.d_y_out = tf.argmax(self.d_y_logits_2, 1, name="d_y_out")

    if self.train :

        for i in range(self.start_train,self.cursor_tot): #1000
            batch_xs, batch_ys = self.get_nn_next_train(self.batchsize)
            self.sess.run(self.d_train_step, feed_dict={self.d_x: batch_xs, self.d_y_: batch_ys, self.d_keep: 0.5})
            if True:  # mid_num > 0:
                # Note: d_train_step already ran above, so this sess.run
                # applies a second update to the same batch.
                cost = self.sess.run([self.d_cross_entropy, self.d_train_step],
                    feed_dict={self.d_x: batch_xs, self.d_y_: batch_ys, self.d_keep: 0.5})
                print cost[0], "cost"


    if self.test :
        d_correct_prediction = tf.equal(self.d_y_out, tf.argmax(self.d_y_,1))
        #d_correct_prediction = tf.equal(tf.argmax(self.d_y , 1), tf.argmax(self.d_y_, 1))

        d_accuracy = tf.reduce_mean(tf.cast(d_correct_prediction, tf.float32))

        if self.use_loader : self.get_nn_next_test(self.batchsize)
        # d_keep of 1.0 disables dropout for evaluation.
        print(self.sess.run([d_accuracy, self.d_cross_entropy],
            feed_dict={self.d_x: self.mnist_test.images, self.d_y_: self.mnist_test.labels, self.d_keep: 1.0}))

    if self.predict_dot :
        for i in range(start, stop ) :
            batch_0, batch_1 = self.get_nn_next_predict(self.batchsize)
            if len(batch_0) > 0 :
                out.extend( self.sess.run([self.d_y_out, self.d_cross_entropy],
                    feed_dict={self.d_x : batch_0, self.d_y_: batch_1, self.d_keep: 1.0})[0])
                print "out", len(out), i, self.cursor_tot, out[:10], "..."

EDIT I have edited the code in this question quite a bit. Many thanks to vijay m for getting me this far. Any help would be appreciated. Thanks.

1 Answer:

Answer 0 (score: 0)

The problem in this code is that you are calling dropout on the input. Yours is a single-layer network, and you don't need dropout. And use a momentum optimizer like Adam to speed up training. The changes I made:
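
A minimal sketch of those changes (assumed from the answer's description, not the answerer's verbatim code): dropout is removed so the input feeds the first layer directly, and Adam replaces plain gradient descent.

        # Sketch of the suggested fix (assumed, not verbatim from the answer).
        self.d_y_logits_1 = tf.matmul(self.d_x, self.d_W_1) + self.d_b_1  # no input dropout
        self.d_y_mid = tf.nn.relu(self.d_y_logits_1)
        self.d_y_logits_2 = tf.matmul(self.d_y_mid, self.d_W_2) + self.d_b_2  # no hidden dropout

        self.d_y_softmax = tf.nn.softmax_cross_entropy_with_logits(
            logits=self.d_y_logits_2, labels=self.d_y_)
        self.d_cross_entropy = tf.reduce_mean(self.d_y_softmax)

        # Adam (a momentum-based optimizer) in place of plain gradient descent.
        self.d_train_step = tf.train.AdamOptimizer(1e-4).minimize(self.d_cross_entropy)

With dropout removed, the d_keep placeholder is no longer used by the graph, so feeding it 1.0 (or not at all) has no effect.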