Python - Approximating the sphere function with a neural network

Time: 2016-10-28 00:51:25

Tags: python machine-learning neural-network normalization backpropagation

I'm new to neural networks, and I'm using an example neural network I found online to try to approximate the sphere function (summing the squares of a set of numbers) with backpropagation.
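For reference, the sphere function I'm trying to approximate can be written directly as (my own illustration, not part of the example code):

    def sphere(inputs):
        #Sum of squares, e.g. sphere([2, 1]) == 2**2 + 1**2 == 5
        return sum(x ** 2 for x in inputs)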

The initial code is:

from numpy import exp, array, random, dot


class NeuralNetwork():
    def __init__(self):
        #Seed the random number generator, so it generates the same numbers
        #every time the program is run.
        #random.seed(1)

        #Model a single neuron, with 2 input connections and 1 output connection.
        #We assign random weights to a 2 x 1 matrix, with values in the range -1 to 1
        #and mean 0.
        self.synaptic_weights = 2 * random.random((2, 1)) - 1

    #The Sigmoid function, which describes an S shaped curve.
    #We pass the weighted sum of the inputs through this function to
    #normalise them between 0 and 1.
    def __sigmoid(self, x):
        return 1 / (1 + exp(-x))


    #The derivative of the sigmoid function
    #This is the gradient of the sigmoid curve.
    #It indicates how confident we are about existing weight.
    def __sigmoid_derivative(self, x):
        return x * (1 - x)


    #Train the network through a process of trial and error.
    #Adjusting the synaptic weights each time.
    def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
        for iteration in xrange(number_of_training_iterations):
            #Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)

            #Calculate the error (the difference between the desired output and the predicted output).
            error = training_set_outputs - output


            #Multiply the error by the input and again by the gradient of the Sigmoid curve.
            #This means less confident weights are adjusted more.
            #This means inputs, which are zero, do not cause changes to the weights.
            adjustment = dot(training_set_inputs.T, error * self.__sigmoid_derivative(output))

            #Adjust the weights
            self.synaptic_weights += adjustment

    #The neural network thinks.
    def think(self, inputs):
        #Pass inputs through our neural network (our single neuron).
        return self.__sigmoid(dot(inputs, self.synaptic_weights))


if __name__ == "__main__":

    #Initialise a single neuron neural network.
    neural_network = NeuralNetwork()

    print"Random starting synaptic weights: "
    print neural_network.synaptic_weights

    #The training set. We have 3 examples, each consisting of 2 input values and 1 output value.

    training_set_inputs = array([[0, 1], [1, 0], [0, 0]])
    training_set_outputs = array([[1, 1, 0]]).T

    #Train the neural network using a training set.
    #Do it 10,000 times and make small adjustments each time.
    neural_network.train(training_set_inputs, training_set_outputs, 10000)

    print "New synaptic weights after training: "
    print neural_network.synaptic_weights

    #Test the neural network with a new situation.
    print "Considering new situation [1,1] -> ?: "
    print neural_network.think(array([1,1]))

My goal is to feed training data (sphere function inputs and outputs) into the neural network, train it, and have the weights adjust meaningfully. After repeated training, the weights should reach a point where they give reasonably accurate results for the training inputs.

An example training set for the sphere function, as I imagine it, would be something like:

training_set_inputs = array([[2, 1], [3, 2], [4, 6], [8, 3]])
training_set_outputs = array([[5, 13, 52, 73]]).T

The example I found online can successfully approximate the XOR operation, but when given the sphere function inputs it only ever outputs 1 when tested on a new example (e.g. [6, 7] should ideally return an approximation of about 85, since 6² + 7² = 85).
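One thing I noticed (my own check, using the same sigmoid as in the code above) is that the sigmoid output is bounded between 0 and 1, so a raw target like 85 can never be produced, no matter what the weights are:

    from numpy import exp

    #The sigmoid saturates: even a huge weighted sum only approaches 1.
    print 1 / (1 + exp(-100.0))   #prints 1.0 (to double precision)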

From what I've read about neural networks, I suspect this is because I need to normalise the inputs, but I'm not entirely sure how to do that. Any help, or a pointer to something that would set me on the right track, would be much appreciated. Thanks.
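What I imagine the normalisation might look like, reusing the NeuralNetwork class above, though I'm not sure this is the right approach (the scaling constants here are hand-picked guesses for this data range, not something from the example):

    #Hand-picked scaling constants (assumptions, not from the original example).
    INPUT_SCALE = 10.0     #inputs assumed to lie in [0, 10]
    OUTPUT_SCALE = 200.0   #sphere outputs for such inputs stay below 200

    training_set_inputs = array([[2, 1], [3, 2], [4, 6], [8, 3]]) / INPUT_SCALE
    training_set_outputs = array([[5, 13, 52, 73]]).T / OUTPUT_SCALE

    neural_network = NeuralNetwork()
    neural_network.train(training_set_inputs, training_set_outputs, 10000)

    #Scale the query the same way, then undo the output scaling on the prediction.
    print neural_network.think(array([6, 7]) / INPUT_SCALE) * OUTPUT_SCALE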

0 Answers:

No answers