Multiple objects somehow interfering with each other

Time: 2012-09-04 15:42:37

Tags: python oop neural-network

  

Possible duplicate:
  Multiple objects somehow interfering with each other [original version]

(Note: I posted a similar question a few hours ago, but it was poorly worded and explained, so this is a clearer repost.)

Hello,

I have a neural network (NN) that works fine when applied to a single data set. However, if I run the NN on one set of data and then create a new instance of the NN to run on a different data set (or even on the same set again), the new instance produces completely incorrect predictions.

For example, training on the XOR pattern:

    test=[[0,0],[0,1],[1,0],[1,1]]
    data = [[[0,0], [0]],[[0,1], [0]],[[1,0], [0]],[[1,1], [1]]]

    n = NN(2, 3, 1) # Create a neural network with 2 input, 3 hidden and 1 output nodes
    n.train(data,500,0.5,0) # Train it for 500 iterations with learning rate 0.5 and momentum 0

    prediction = np.zeros((len(test)))
    for row in range(len(test)):
        prediction[row] = n.runNetwork(test[row])[0]

    print prediction

    #
    # Now do the same thing again but with a new instance and new version of the data.
    #

    test2=[[0,0],[0,1],[1,0],[1,1]]
    data2 = [[[0,0], [0]],[[0,1], [0]],[[1,0], [0]],[[1,1], [1]]]

    p = NN(2, 3, 1)
    p.train(data2,500,0.5,0)

    prediction2 = np.zeros((len(test2)))
    for row in range(len(test2)):
        prediction2[row] = p.runNetwork(test2[row])[0]

    print prediction2

will output:

    [-0.01 -0.   -0.06  0.97]
    [ 0.  0.  1.  1.]

Note that the first prediction is quite good, whereas the second is completely wrong, and I can't see what is wrong with the class:

    import math
    import random
    import itertools
    import numpy as np

    random.seed(0)

    def rand(a, b):
        return (b-a)*random.random() + a

    def sigmoid(x):
        return math.tanh(x)

    def dsigmoid(y):
        return 1.0 - y**2

    class NN:
        def __init__(self, ni, nh, no):
            # number of input, hidden, and output nodes
            self.ni = ni + 1 # +1 for bias node
            self.nh = nh + 1 # +1 for bias node
            self.no = no

            # activations for nodes
            self.ai = [1.0]*self.ni
            self.ah = [1.0]*self.nh
            self.ao = [1.0]*self.no

            # create weights (rows=number of features, columns=number of processing nodes)
            self.wi = np.zeros((self.ni, self.nh))
            self.wo = np.zeros((self.nh, self.no))
            # set them to random values
            for i in range(self.ni):
                for j in range(self.nh):
                    self.wi[i][j] = rand(-5, 5)
            for j in range(self.nh):
                for k in range(self.no):
                    self.wo[j][k] = rand(-5, 5)

            # last change in weights for momentum   
            self.ci = np.zeros((self.ni, self.nh))
            self.co = np.zeros((self.nh, self.no))


        def runNetwork(self, inputs):
            if len(inputs) != self.ni-1:
                raise ValueError('wrong number of inputs')

            # input activations
            for i in range(self.ni-1):
                #self.ai[i] = sigmoid(inputs[i])
                self.ai[i] = inputs[i]

            # hidden activations (the last hidden node stays at 1.0 and serves as the bias)
            for j in range(self.nh-1):
                sum = 0.0
                for i in range(self.ni):
                    sum = sum + self.ai[i] * self.wi[i][j]
                self.ah[j] = sigmoid(sum)

            # output activations
            for k in range(self.no):
                sum = 0.0
                for j in range(self.nh):
                    sum = sum + self.ah[j] * self.wo[j][k]
                self.ao[k] = sigmoid(sum)

            ao_simplified = [round(a,2) for a in self.ao[:]]
            return ao_simplified  


        def backPropagate(self, targets, N, M):
            if len(targets) != self.no:
                raise ValueError('wrong number of target values')

            # calculate error terms for output
            output_deltas = [0.0] * self.no
            for k in range(self.no):
                error = targets[k]-self.ao[k]
                output_deltas[k] = dsigmoid(self.ao[k]) * error

            # calculate error terms for hidden
            hidden_deltas = [0.0] * self.nh
            for j in range(self.nh):
                error = 0.0
                for k in range(self.no):
                    error = error + output_deltas[k]*self.wo[j][k]
                hidden_deltas[j] = dsigmoid(self.ah[j]) * error

            # update output weights
            for j in range(self.nh):
                for k in range(self.no):
                    change = output_deltas[k]*self.ah[j]
                    self.wo[j][k] = self.wo[j][k] + N*change + M*self.co[j][k]
                    self.co[j][k] = change
                    #print N*change, M*self.co[j][k]

            # update input weights
            for i in range(self.ni):
                for j in range(self.nh):
                    change = hidden_deltas[j]*self.ai[i]
                    self.wi[i][j] = self.wi[i][j] + N*change + M*self.ci[i][j]
                    self.ci[i][j] = change

            # calculate error
            error = 0.0
            for k in range(len(targets)):
                error = error + 0.5*(targets[k]-self.ao[k])**2
            return error

        def train(self, patterns, iterations=1000, N=0.5, M=0.1):
            # N: learning rate
            # M: momentum factor
            for i in range(iterations):
                error = 0.0
                for p in patterns:
                    inputs = p[0]
                    targets = p[1]
                    self.runNetwork(inputs)
                    error = error + self.backPropagate(targets, N, M)
                if i % 100 == 0: # Prints error every 100 iterations
                    print('error %-.5f' % error)

Any help would be greatly appreciated!

1 answer:

Answer 0 (score: 1)

One thing is clearly visible: you seed the random number generator to 0 at module level. I don't know whether that is intentional on your part, but seeding this way will always produce the same series of numbers, one after another; the numbers are pseudo-random.

When you use a second instance of the same class, the series is not reset, because the random seeding sits in module-level code and runs only once, at import time. The second instance therefore keeps consuming pseudo-random numbers from further along the same series, rather than getting the same numbers the first instance got. This can cause different results on the second run if your algorithm does not genuinely rely on random numbers but instead depends on the specific series of values available after random.seed(0).
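To see this effect in isolation, here is a minimal sketch, independent of the NN class, of how a module-level seed behaves; the exact values shown are just what CPython's generator happens to produce for seed 0:

    import random

    random.seed(0)                  # seeded once, as in module-level code
    first = [random.random() for _ in range(3)]

    # The seed is not reset here, so a second consumer continues
    # further along the same series and gets different numbers.
    second = [random.random() for _ in range(3)]

    print first   # e.g. [0.844421851525..., 0.757954402940..., 0.420571580830...]
    print second  # different values, continuing the same series

    random.seed(0)                              # re-seeding restarts the series
    print [random.random() for _ in range(3)]   # identical to first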

Apart from that, your code does not appear to keep any state outside the instance objects that could cause the problem you describe.

So: if your code really is meant to use random numbers, I suggest dropping the random.seed statement at module level.

Otherwise, if the algorithm is correct as it stands and depends on the particular sequence produced after random.seed(0), simply re-issue that statement before you start working with the second instance of the network.
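As a minimal sketch of that second option, applied to the code from the question (everything else unchanged):

    import random

    random.seed(0)          # first instance draws its weights from the start of the series
    n = NN(2, 3, 1)
    n.train(data, 500, 0.5, 0)

    random.seed(0)          # restart the series, so the second instance
    p = NN(2, 3, 1)         # gets exactly the same initial weights as n
    p.train(data2, 500, 0.5, 0)

Alternatively, the random.seed(0) call could be moved into NN.__init__, so that every instance seeds itself just before drawing its initial weights.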