How do I train my neural network correctly?

Asked: 2017-11-02 07:20:08

Tags: python tensorflow neural-network feed-forward

My neural network solves a nonlinear problem, but the test loss is very high. When I use a network without hidden layers, the test loss is lower than with hidden layers, but still high. Does anyone know why, and how I can improve the loss?

# Data

    import numpy as np
    import tensorflow as tf

    # hyperparameters such as batch_size, learning_rate, training_epochs,
    # n_hidden_1 and n_hidden_2 are assumed to be defined elsewhere
    train_X = data_in[0:9001, :]
    train_Y = data_out[0:9001, :]
    test_X = data_in[9000:10001, :]
    test_Y = data_out[9000:10001, :]
    n = train_X.shape[1]  # number of input features
    m = train_X.shape[0]  # number of training samples
    d = train_Y.shape[1]  # number of output dimensions
    l = test_X.shape[0]   # number of test samples

# Parameters

    trainX = tf.placeholder(tf.float32, [batch_size, n])
    trainY = tf.placeholder(tf.float32, [batch_size, d])
    testX = tf.placeholder(tf.float32, [l, n])
    testY = tf.placeholder(tf.float32, [l, d])
    def multilayer(trainX, h1, h2, hout, b1, b2, bout):
        # two sigmoid hidden layers followed by a linear output layer
        layer_1 = tf.matmul(trainX, h1) + b1
        layer_1 = tf.nn.sigmoid(layer_1)
        layer_2 = tf.matmul(layer_1, h2) + b2
        layer_2 = tf.nn.sigmoid(layer_2)
        out_layer = tf.matmul(layer_2, hout) + bout
        return out_layer
    h1 = tf.Variable(tf.zeros([n, n_hidden_1]))
    h2 = tf.Variable(tf.zeros([n_hidden_1, n_hidden_2]))
    hout = tf.Variable(tf.zeros([n_hidden_2, d]))
    b1 = tf.Variable(tf.zeros([n_hidden_1]))
    b2 = tf.Variable(tf.zeros([n_hidden_2]))
    bout = tf.Variable(tf.zeros([d]))
    pred = multilayer(trainX, h1, h2, hout, b1, b2, bout)
    predtest = multilayer(testX, h1, h2, hout, b1, b2, bout)
    loss = tf.reduce_sum(tf.pow(pred - trainY, 2)) / (batch_size)
    losstest = tf.reduce_sum(tf.pow(predtest - testY, 2)) / (l)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

# Initializing the variables

    init = tf.global_variables_initializer()
    a = np.linspace(0, m - batch_size, m // batch_size, dtype=np.int32)  # batch start indices
    with tf.Session() as sess:
        sess.run(init)
        for i in (a):
            x = train_X[i:i + batch_size, :]
            y = train_Y[i:i + batch_size, :]
            for epoch in range(training_epochs):
                sess.run(optimizer, feed_dict={trainX: np.asarray(x),
                                               trainY: np.asarray(y)})
                c = sess.run(loss, feed_dict={trainX: np.asarray(x),
                                              trainY: np.asarray(y)})
                print("Batch:", '%04d' % (i // batch_size + 1),
                      "Epoch:", '%04d' % (epoch + 1),
                      "loss=", "{:.9f}".format(c))
# Testing
    print("Testing... (Mean square loss Comparison)")
    testing_loss = sess.run(losstest, feed_dict={testX: np.asarray(test_X), 
    testY: np.asarray(test_Y)})
    pred_y_vals = sess.run(predtest, feed_dict={testX: test_X})
    print("Testing loss=", testing_loss)

1 Answer:

Answer 0: (score: 0)

From what I can see in your training loop, you iterate over the epochs before iterating over the batches. This means your network is trained on the same batch many times in a row (training_epochs times) before moving on to the next batch, and it never revisits a batch it has already seen.

Intuitively, I would say that your network is badly overfitting the last batch it saw during training. That would explain the high loss at test time.

Swap the two loops in your training code and you should be fine.
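
For concreteness, here is a minimal sketch of the corrected loop order. It reuses the names from the question (a, train_X, train_Y, trainX, trainY, optimizer, loss, batch_size, training_epochs) and is assumed to replace the two nested loops inside the existing with tf.Session() as sess: block:

    for epoch in range(training_epochs):
        # one full pass over all batches per epoch
        for i in a:
            x = train_X[i:i + batch_size, :]
            y = train_Y[i:i + batch_size, :]
            # one gradient step on the current batch
            _, c = sess.run([optimizer, loss],
                            feed_dict={trainX: x, trainY: y})
        print("Epoch:", '%04d' % (epoch + 1), "loss=", "{:.9f}".format(c))

This way each batch gets exactly one gradient step per epoch, rather than training_epochs consecutive steps on a single batch.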