Function approximation with TensorFlow

Date: 2018-02-12 19:52:37

Tags: python tensorflow neural-network mathematical-optimization approximation

I am trying to create a neural network in TensorFlow that approximates a sine function. I have found some examples of general function approximators, but I don't fully understand the code, and since I am very new to TensorFlow I would like to write the code myself to understand every step.

Here is my code:

import tensorflow as tf
import numpy as np
import math, random
import matplotlib.pyplot as plt


# Create the arrays x and y that contain the inputs and outputs of the function to approximate
x = np.arange(0, 2*np.pi, 2*np.pi/1000).reshape((1000,1))
y = np.sin(x)
# plt.plot(x,y)
# plt.show()

# Define the number of nodes
n_nodes_hl1 = 100
n_nodes_hl2 = 100

# Define the number of outputs and the learn rate
n_classes = 1
learn_rate = 0.1

# Define input / output placeholders
x_ph = tf.placeholder('float', [None, 1])
y_ph = tf.placeholder('float')


# Routine to compute the neural network (2 hidden layers)
def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([1, n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}

    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}

    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}


    # (input_data * weights) + biases
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)

    output = tf.add(tf.matmul(l2, output_layer['weights']), output_layer['biases'])

    return output


# Routine to train the neural network
def train_neural_network(x_ph):
    prediction = neural_network_model(x_ph)
    cost = tf.reduce_mean(tf.square(prediction - y_ph))
    optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)

    # cycles feed forward + backprop
    hm_epochs = 10

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Train in each epoch with the whole data
        for epoch in range(hm_epochs):
            epoch_loss = 0
            _, c = sess.run([optimizer, cost], feed_dict = {x_ph: x, y_ph: y})
            epoch_loss += c
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        # Note: this argmax-based "accuracy" is classification-style and is not
        # meaningful for a regression with a single output (argmax over one
        # column is always 0, so it always prints 1.0); the mean-squared cost
        # above is the metric that matters here.
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y_ph, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x_ph: x, y_ph: y}))


# Train network
train_neural_network(x_ph)

If you run the program, you will see how the loss diverges, and I don't know why it behaves like that. Can anyone help me?

Thanks!

2 answers:

Answer 0 (score: 11):

@AIdream is right about the initial learning rate convergence issue in general. But even with learn_rate = 1.0e-9 and 10000 epochs, the error is still large, which means the issue is something else.

Debugging the issue

Running the above code gives:

Epoch 0 completed out of 10 loss: 61437.30859375
Epoch 1 completed out of 10 loss: 1.2855042406744022e+21
Epoch 2 completed out of 10 loss: inf
Epoch 3 completed out of 10 loss: nan

The above code tries to approximate a sine function within the range (0, 2*pi). Since the labels (outputs) lie in (-1, 1), the huge loss indicates that the weights were initialized with values that are too large. Changing the weights to have a smaller initial value (stddev=0.01) leads to:

Epoch 0 completed out of 10 loss: 0.5000443458557129
Epoch 1 completed out of 10 loss: 0.4999848008155823
Epoch 2 completed out of 10 loss: 0.49993154406547546
Epoch 3 completed out of 10 loss: 0.4998819828033447
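
For concreteness, here is a minimal sketch of what that change could look like in the question's neural_network_model (only the initializers change; everything else stays as in the question):

    # Inside neural_network_model: draw initial weights and biases with a
    # small standard deviation so the first forward pass stays close to the
    # sine's (-1, 1) output range instead of producing huge activations.
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([1, n_nodes_hl1], stddev=0.01)),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1], stddev=0.01))}

    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2], stddev=0.01)),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2], stddev=0.01))}

    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_classes], stddev=0.01)),
                    'biases': tf.Variable(tf.random_normal([n_classes], stddev=0.01))}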

The loss converges very fast, but checking the predictions, it seems the inputs are all being mapped to zero.
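
One way to check this is to plot the network output against the target after training — a small sketch, assuming it is placed at the end of the with tf.Session() block in train_neural_network:

    # Evaluate the trained network on the full input range and compare to sin(x)
    pred = sess.run(prediction, feed_dict={x_ph: x})
    plt.plot(x, y, label='sin(x)')
    plt.plot(x, pred, label='prediction')
    plt.legend()
    plt.show()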

[image: prediction plot, outputs all near zero]

The problem is that the input in the above code is a single batch rather than mini batches. Full-batch gradient descent can run into a local-minima problem: once it reaches a local minimum, it cannot get out of it. Mini batches avoid this issue, because the gradients computed on each batch are noisy and can push you out of local minima. With those changes, the result is:

[image: prediction plot after the mini-batch changes, following the sine curve]
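
A minimal sketch of a mini-batch version of the question's training loop (the batch size and the per-epoch shuffling are assumptions for illustration, not taken from the linked source):

    batch_size = 32  # assumed value, not from the original answer

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            # Shuffle once per epoch so every mini batch sees a different slice
            perm = np.random.permutation(len(x))
            x_shuffled, y_shuffled = x[perm], y[perm]
            for i in range(0, len(x), batch_size):
                _, c = sess.run([optimizer, cost],
                                feed_dict={x_ph: x_shuffled[i:i + batch_size],
                                           y_ph: y_shuffled[i:i + batch_size]})
                epoch_loss += c
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)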

The above steps can be reproduced by downloading the source code from here.

Answer 1 (score: 4):

Your initial learning rate for gradient descent is simply too large for it to converge to a minimum (see, for example, this other thread on gradient descent and learning-rate values: "Gradient descent explodes if learning rate is too large").
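
To see the mechanism in isolation, here is a tiny illustration (an example added here, not part of the original answer) of gradient descent on f(w) = w**2, whose gradient is 2*w:

    def descend(lr, steps=20, w=1.0):
        # Repeated update w <- w - lr * f'(w) with f'(w) = 2*w,
        # i.e. w is multiplied by (1 - 2*lr) at every step.
        for _ in range(steps):
            w -= lr * 2 * w
        return w

    print(descend(lr=0.1))  # |1 - 2*lr| < 1: w shrinks toward the minimum at 0
    print(descend(lr=1.5))  # |1 - 2*lr| > 1: w flips sign and grows, i.e. diverges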

Just replace its value, e.g. with learn_rate = 1.0e-9 here, and your network will converge.

Trace:

Epoch 0 completed out of 10000 loss: 8512.4736328125
Epoch 1 completed out of 10000 loss: 8508.4677734375
...
Epoch 201 completed out of 10000 loss: 7743.56396484375
Epoch 202 completed out of 10000 loss: 7739.92431640625
...
Epoch 7000 completed out of 10000 loss: 382.22601318359375
Epoch 7001 completed out of 10000 loss: 382.08026123046875
...
Epoch 9998 completed out of 10000 loss: 147.459716796875
Epoch 9999 completed out of 10000 loss: 147.4239501953125
...