python (3.2.3) implementation of back propagation, TypeError on a list

Date: 2013-05-05 06:57:19

Tags: python neural-network typeerror backpropagation

I'm getting a TypeError when I try to run back propagation on my neural network, which I'm trying to train on the 'and' pattern.

Note that I'm not asking anyone to read through or review my code.

I'm just including a lot of it because I'm not sure what is causing the error.

I've included a bunch of print statements in the backProp function because I've been testing it.

My full source is posted at: my github

This is what shows up on the command line:

$python main.py
enter a filename: params.dat
max_iterations: 100, error_threshhold: 0.001000, netError: 1.001000, n_iterations: 0
eval of while loop: True
1backProp iteration = 0, netError = 1.001000
2backProp iteration = 0, netError = 1.001000, inputsForWeightChangeLoop:
[0, 1]
3backProp iteration = 0, netError = 1.001000, inputsForWeightChangeLoop:
[0, 1]
4backProp iteration = 0, netError = 1.001000, inputsForWeightChangeLoop:
[]
5backProp, oldInputsWeightChange:
[0, 1]
6backProp, inputNN.layers[j].neurons[k]:
<neuralNet.neuron object at 0x7f405e97be90>
8backProp, y(stuff):
0.7615941559557649
9backProp, y(stuff):
0.7615941559557649
Traceback (most recent call last):
  File "main.py", line 51, in <module>
    if __name__ == "__main__": main()
  File "main.py", line 41, in main
    backProp(inputNeuralNet, dStruct['input'], dStruct['target'], dStruct['max_iterations'], dStruct['error_threshhold'], dStruct['rateOfLearning'])
  File "/home/nab/Documents/cpat_project-master/propagate.py", line 66, in backProp
    inputsForWeightChangeLoop.append(float(y(oldInputsWeightChange, inputNN.layers[j].neurons[k])))
TypeError: 'list' object is not callable

Basically, it throws a TypeError where I'm trying to collect the outputs of a layer, so that on the next iteration of my weight-change loop I can compute the error values for the neurons and then change the weights.

My question, basically, is how to compute the outputs of these neurons without getting this TypeError.

Here is the back propagation code:

"""
backProp takes a neural network (inputNN), a set of input training values (input),
a number of maximum allowed iterations (max_iterations), and a threshold for the
calculated error values, this last value is used as a way to tell when the network
has been sufficiently trained. back propagation is an algorithm for training a
neural network.
"""

def backProp(inputNN, input, targets, max_iterations, error_threshhold, learningRate):
    n_iterations = 0 # counter for the number of propagation loops
    netError = float(error_threshhold + 1.0)
    print('max_iterations: %d, error_threshhold: %f, netError: %f, n_iterations: %d' % (max_iterations, error_threshhold, netError, n_iterations))
    print('eval of while loop: %s' % (n_iterations < max_iterations and netError > error_threshhold))
    while ((n_iterations < max_iterations) and (netError > error_threshhold)):
        print('1backProp iteration = %d, netError = %f' % (n_iterations, netError))
        for i in input:
            y = inputNN.update(i) # present the pattern to the network
            outputLayerError = errorGradientOutputLayer(sum(y), targets[n_iterations]) #calc the error signal, assumes that output layer has only 1 node.
            newWeights = [] # to collect new weights for updating the neurons
            inputsForWeightChangeLoop = i # this is actually to collect outputs for computing the weight change in hidden layers, which are then used as inputs
            print('2backProp iteration = %d, netError = %f, inputsForWeightChangeLoop:' % (n_iterations, netError))
            print(inputsForWeightChangeLoop)
            counter = 0 # used for a condition to compute the error value in the hidden layer above the output layer.
            layersFromOut = list(range(0, inputNN.n_hiddenLayers + 1)) # this is in order to get the reverse of a list to do a backwards propagation,  + 1 for input layer
            layersFromOut.reverse() # reverses the list
            error2DArray = [] # this collects error values for use in the change of the weights
            for j in layersFromOut: # for every layer, starting with the hidden layer closest to output.
                for k in range(0, inputNN.layers[j].n_neurons): # for every neuron in the layer
                    if counter != 0: # if the neuron isn't in the hidden layer above the output
                        error2DArray.append(errorGradientHiddenLayer(k, j, inputNN, error2DArray[j + 1]))  # compute the error gradient for the neuron
                    else:
                        error2DArray.append(errorGradientHiddenLayer(k, j, inputNN, [outputLayerError])) # '' same but for the hidden layer above the output layer
                counter += 1
            for j in range(0, inputNN.n_hiddenLayers + 2): # for every layer, + 2 in range for output and input layers.
                for k in range(0, inputNN.layers[j].n_neurons): # for every neuron in the layer
                    newWeights = []
                    for h in range(0, inputNN.layers[j].neurons[k].n_inputs): #for every weight in the neuron
#params for deltaWeight -- deltaWeight(float oldWeight, float learningRate, list[float] inputsToNeuron, list[float] errorValues, float derivitiveOfActivationFn)
                        newWeights.append(deltaWeight(inputNN.layers[j].neurons[k].l_weights[h], learningRate, inputsForWeightChangeLoop[h], error2DArray[j], derivActivation(inputsForWeightChangeLoop, inputNN.layers[j].neurons[k]))) # get the change in weight
                    inputNN.layers[j].neurons[k].putWeights(newWeights) #update the weights
                print('3backProp iteration = %d, netError = %f, inputsForWeightChangeLoop:' % (n_iterations, netError))
                print(inputsForWeightChangeLoop)
                oldInputsWeightChange = inputsForWeightChangeLoop # this is used to calculate the new inputs for the change in weight
                inputsForWeightChangeLoop = [] # clear it to re-populate
                for k in range(0, inputNN.layers[j].n_neurons): # for every neuron in the layer
                    print('4backProp iteration = %d, netError = %f, inputsForWeightChangeLoop:' % (n_iterations, netError))
                    print(inputsForWeightChangeLoop)
                    print('5backProp, oldInputsWeightChange:')
                    print(oldInputsWeightChange)
                    print('6backProp, inputNN.layers[j].neurons[k]:')
                    print(inputNN.layers[j].neurons[k])
                    print('8backProp, y(stuff):')
                    print(float(math.e**activation(oldInputsWeightChange, inputNN.layers[j].neurons[k]) - math.e**((-1) * activation(oldInputsWeightChange, inputNN.layers[j].neurons[k])))/float(math.e**activation(oldInputsWeightChange, inputNN.layers[j].neurons[k]) + math.e**((-1) * activation(oldInputsWeightChange, inputNN.layers[j].neurons[k]))))
                    print('9backProp, y(stuff):')
                    print(sigmoid(activation(oldInputsWeightChange, inputNN.layers[j].neurons[k])))
                    #print('7backProp, y(stuff):')
                    #print(y(oldInputsWeightChange, inputNN.layers[j].neurons[k]))
                    inputsForWeightChangeLoop.append(float(y(oldInputsWeightChange, inputNN.layers[j].neurons[k])))
                    #inputsForWeightChangeLoop.append(y(oldInputsWeightChange, inputNN.layers[j].neurons[k])) # calculate the new inputs
            n_iterations += 1
            errorVal = 0# sum unit for the net error
            for j in range(0, len(input)): # for every pattern in the training set
                for k in range(0, len(inputNN.layers[-1].n_neurons)): # for every output to the net
                    errorVal += errorSignal(targets[k], y[k])
            netError = .5  *  errorVal #calc the error fn for the net?
            print('5backProp iteration = %d, netError = %f' % (n_iterations, netError))
        #
    print('propagate finished with %d iterations and %f net error' % (n_iterations, netError))
    return

Here is my function y, which is probably a convoluted way of expressing the output of a node:

"""
y takes a set of patterns or inputs (p), and a neuron (n) and returns the 
output for the specified node in the neural net. [keep in mind that the
input of some neuron is really in terms of the layer above it.]
"""
def y(p, n):
    if (len(p) != n.n_inputs): # if the node has a different number of inputs than specified in params, throw error.
        raise ValueError('wrong number of inputs: y(p, n) in propagate.')
    return sigmoid(activation(p, n))

And my sigmoid:

"""
sigmoid takes an activation value (activation) and calculates the sigmoid 
function on the activation value. [here I use the tanh function]
"""
def sigmoid(activation):
    return float(math.e**activation - math.e**((-1) * activation))/float(math.e**activation + math.e**((-1) * activation))
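
As an aside, this formula is exactly the hyperbolic tangent, (e^x - e^-x) / (e^x + e^-x). A minimal alternative, assuming nothing else depends on the hand-rolled form, is to delegate to the standard library, which also avoids the OverflowError that math.e**activation raises for large activation values:

import math

def sigmoid(activation):
    # same value as (e**x - e**-x) / (e**x + e**-x), but math.tanh
    # does not overflow for large positive or negative activations
    return math.tanh(activation)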

Finally, my activation:

"""
activation takes a neuron (n) and a set of patterns or inputs (p) and returns
the activation value of the neuron on that input pattern.
"""
def activation(p, n):
    activationValue = 0
    for i in range(0, len(p)):
        activationValue += p[i] * n.l_weights[i]
    activationValue += (-1) * n.l_weights[-1] # threshhold?
    return activationValue
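
To make the "threshhold?" comment concrete: the last weight acts as a bias applied to a fixed input of -1, which is why neuron.__init__ below allocates numberOfInputs + 1 weights. A quick sanity check with made-up numbers (FakeNeuron is a hypothetical stand-in of mine, just to exercise the function):

class FakeNeuron:  # not part of the real net, only for this check
    l_weights = [0.5, -0.3, 0.2]

print(activation([0, 1], FakeNeuron()))  # 0*0.5 + 1*(-0.3) - 0.2 = -0.5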

I'm not sure how much of the code is needed, so I'll go ahead and include the whole neural net module below.

"""
  neuralNet.py
  4/21/13, 5:30p

"""

import sys
import random
import math
import propagate


class neuron():
    n_inputs = 0
    l_weights = []

    def __init__(self, numberOfInputs):
        self.l_weights = []
        self.n_inputs = numberOfInputs
        for i in range(0,(numberOfInputs + 1)): #for each input + threshhold
            self.l_weights.append(random.randint(-1,1))

    #
    def putWeights(self, weights):
        for i in range(0, len(weights)):
            self.l_weights[i] = weights[i]

class neuralNetLayer():
    n_neurons = 0
    neurons = []

    def __init__(self, numNeurons, numInputsPerNeuron):
        self.neurons = []
        self.n_neurons = numNeurons
        for i in range(0, numNeurons):
            #print('neuralNetLayer -> length of self.neurons: %d' % len(self.neurons))
            #print("neural net layer makes a neuron -> %d" % i)
            self.neurons.append(neuron(numInputsPerNeuron))

    def getWeights(self):
        weights = []
        for i in range(0, self.n_neurons):
            i_weights = []
            for j in range(0, len(self.neurons[i].l_weights)):
                i_weights.append(self.neurons[i].l_weights[j])
            weights.append(i_weights)
        return weights

class neuralNet():
    n_inputs = 0
    n_outputs = 0
    n_hiddenLayers = 0
    n_neuronsPerHiddenLyr = 0
    layers = []

    def __init__(self, numInputs, numOutputs, numHidden, numNeuronsPerHidden):
        self.layers = []
        self.n_inputs = numInputs
        self.n_outputs = numOutputs
        self.n_hiddenLayers = numHidden
        self.n_neuronsPerHiddenLyr = numNeuronsPerHidden
        #print('making input layer with %d neurons and %d inputs to the neurons' % (numInputs, numInputs))
        self.layers.append(neuralNetLayer(numInputs, numInputs))# make input layer
        for i in range(0, self.n_hiddenLayers):
            #print('making hidden layer with %d neurons and %d inputs to the neurons' % (numNeuronsPerHidden, numNeuronsPerHidden))
            self.layers.append(neuralNetLayer(numNeuronsPerHidden, numNeuronsPerHidden))# make hidden layers
        if numHidden > 0: # if you have hidden neurons, output will connect to them
            #print('making output layer with %d neurons and %d inputs to the neurons' % (numOutputs, numNeuronsPerHidden))
            self.layers.append(neuralNetLayer(numOutputs, numNeuronsPerHidden))
        else:
            #print('making output layer with %d neurons and %d inputs to the neurons' % (numOutputs, numInputs))
            self.layers.append(neuralNetLayer(numOutputs, numInputs))# make output layer connect to input layer

    #returns a list of the weights in the net
    def getWeights(self):
        weights = []
        for i in range(0, self.n_hiddenLayers + 1): #+ 1 because output layer
            for j in range(0, self.layers[i].n_neurons + 1):
                for k in range(0, self.layers[i].neurons[j].n_inputs + 1):
                    weights.append(self.layers[i].neurons[j].l_weights[k])
        return weights

    #replaces the weights in the net with the given values
    def putWeights(self, weights):
        counter = 0
        for i in range(0, self.n_hiddenLayers + 1):
            for j in range(0, self.layers[i].n_neurons + 1):
                self.layers[i].neurons[j].putweights(weights[i][j])

    #returns the number of weights in the net
    def getNumWeights(self):
        num = 0
        for i in range(0, self.n_hiddenLayers + 1):
            for j in range(0, self.layers[i].n_neurons):
                for k in range(self.layers[i].neurons[j].n_inputs + 1):
                    num += 1
        return num

    # given some inputs, returns the output of the net
    def update(self, inputs):
        if (len(inputs) != self.n_inputs):
            raise ValueError('wrong number of inputs: update() in neuralNet.')
        for i in range(0, self.n_hiddenLayers + 1): # I need to do this for every hidden layer + input layer.
            outputs = []
            for j in range(0, self.layers[i].n_neurons):
                if i != 0:# if current layer is not input layer
                    outputs.append(propagate.y(outputPriorLayer, self.layers[i].neurons[j]))
                else:
                    outputs.append(propagate.y(inputs, self.layers[i].neurons[j]))
            outputPriorLayer = outputs
        return outputs[0:len(self.layers[-1].neurons)]

1 Answer:

Answer 0 (score: 1):

You define another variable named y inside that method:

y = inputNN.update(i) # present the pattern to the network

我没有仔细查看源代码,但似乎有时只设置此变量,这样可以让代码在很短的时间内运行。您必须选择与y函数不冲突的其他名称。