Cost value increasing at the end of each epoch

Date: 2018-07-02 15:49:11

Tags: tensorflow neural-network deep-learning mnist cross-entropy

I am new to TensorFlow and wanted to try it out on the MNIST dataset.

Here is my code, but for some reason the epoch cost keeps increasing with every iteration. I have tried changing the learning rate, the number of layers, and the number of neurons, but the trend is always upward.

It would be great if someone could help me.

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('/tmp/data/',one_hot = True)

def NN(x):
    layer1 = 10
    layer2 = 10
    inps = 28*28
    outs = 10

    w1 = tf.Variable(np.random.randn(layer1, inps))
    w2 = tf.Variable(np.random.randn(layer2, layer1))
    w3 = tf.Variable(np.random.randn(outs, layer2))

    l1 = tf.matmul(w1,x)
    l1 = tf.nn.relu(l1)

    l2 = tf.matmul(w2,l1)
    l2 = tf.nn.relu(l2)

    l3 = tf.matmul(w3, l2)

    return l3


x = tf.placeholder(tf.float64, [28*28, None])
y = tf.placeholder(tf.int64, [10, None])
predic = NN(x)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits = predic,labels = y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

batch_size = 512
epoch = 5

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epoch):
        e_cost = 0
        for b in range(0,int(mnist.train.num_examples/batch_size)):
            x1, y1 = mnist.train.next_batch(batch_size)
            c,_ = sess.run([cost, optimizer], feed_dict = {x: x1.T, y: y1.T})
            e_cost += c
        print("Epoch Cost: ", e_cost)

The output looks like this:

Epoch Cost:  485846.36608997884
Epoch Cost:  1133384.4635202957
Epoch Cost:  3738400.689635882
Epoch Cost:  9999002.612394715
Epoch Cost:  22214906.41488508

1 answer:

Answer 0 (score: 1)

I figured it out.

The function:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits = predic,labels = y))

requires both the logits and the labels to be matrices of shape (batch_size, num_outputs). I had to transpose my matrices to get the correct result.
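In the question's graph the shapes come out the other way around, which is why the loss was computed over the wrong axis. A quick illustrative check, using the question's own placeholders:

print(predic.shape)  # (10, ?) -- (num_outputs, batch_size), transposed
print(y.shape)       # (10, ?) -- same orientation as the logits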

The corrected function:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits = tf.transpose(predic), labels = tf.transpose(y)))
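Alternatively, the transposes can be avoided altogether by building the graph in the batch-major layout that TensorFlow's losses expect. Here is a minimal sketch of that layout (the layer sizes match the question; the tf.truncated_normal initialization is my own choice, not part of the original code):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28*28])  # (batch_size, features)
y = tf.placeholder(tf.float32, [None, 10])     # (batch_size, classes)

def NN(x):
    # Weights oriented (in_dim, out_dim) so tf.matmul(x, w) keeps batch first
    w1 = tf.Variable(tf.truncated_normal([28*28, 10], stddev=0.1))
    w2 = tf.Variable(tf.truncated_normal([10, 10], stddev=0.1))
    w3 = tf.Variable(tf.truncated_normal([10, 10], stddev=0.1))
    l1 = tf.nn.relu(tf.matmul(x, w1))
    l2 = tf.nn.relu(tf.matmul(l1, w2))
    return tf.matmul(l2, w3)                   # logits: (batch_size, 10)

predic = NN(x)
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(logits=predic, labels=y))

# Batches from mnist.train.next_batch() can then be fed without transposing:
# sess.run([cost, optimizer], feed_dict={x: x1, y: y1})

This matches the (batch_size, num_outputs) convention directly, so no tf.transpose calls are needed anywhere in the graph.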