Implementing QuickProp in a neural network

Time: 2019-10-26 08:10:43

Tags: python-3.x neural-network backpropagation

I have implemented a neural network in which every layer is a class with a BP (backpropagation) function. I want to use QuickProp as a faster training algorithm than plain backpropagation. However, there seems to be something wrong with my QuickProp implementation, because it does not converge: the error stays at the level you would get by just guessing the output. Does anyone see the mistake?

# QuickProp initial values
self.weightsE_old = np.ones((inputSize, outputSize)) - 0.5
self.dw_old = np.random.uniform(-0.1, 0.1, size=(1, outputSize))

def BP(self, outputError, learningRate, learningAlgorithm):

    inputError = np.dot(outputError, self.weights.T)  # error for the next layer
    weightsError = np.dot(self.input.T, outputError)  # dL/dw

    # avoid division by zero where the old and new gradients are equal
    index = weightsError == self.weightsE_old
    self.weightsE_old[index] = 0.1

    dw = self.dw_old * (weightsError / (self.weightsE_old - weightsError))
    self.dw_old = dw.copy()
    self.weightsE_old = weightsError.copy()

    # update parameters
    self.weights = self.weights - learningRate * dw
    self.bias = self.bias - learningRate * outputError

    return inputError
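For reference, here is a minimal sketch of the textbook QuickProp update on a single weight matrix, under the usual formulation: the secant step `dw = dw_old * grad / (grad_old - grad)`, a growth-factor cap `mu` to keep the step from blowing up, and a plain gradient-descent fallback where the previous step is (near) zero. The function name `quickprop_step` and the hyperparameter values are illustrative, not from the question; note also that in this convention `dw` already points downhill and is *added* to the weights, and that `dw_old` is kept at the same shape as the gradient (the code above initializes it with shape `(1, outputSize)`, which only works via broadcasting).

```python
import numpy as np

def quickprop_step(grad, grad_old, dw_old, lr=0.1, mu=1.75, eps=1e-8):
    """One QuickProp update for a weight matrix (illustrative sketch).

    grad     -- current gradient dL/dw
    grad_old -- gradient from the previous step (same shape)
    dw_old   -- weight change applied in the previous step (same shape)
    mu       -- maximum growth factor, caps the new step at mu * |dw_old|
    """
    denom = grad_old - grad
    # Secant step: dw = dw_old * grad / (grad_old - grad),
    # with the denominator kept away from zero
    dw = dw_old * grad / np.where(np.abs(denom) < eps, eps, denom)
    # Cap the step relative to the previous one to keep it stable
    limit = mu * np.abs(dw_old) + eps
    dw = np.clip(dw, -limit, limit)
    # Where the previous step was ~0, fall back to plain gradient descent
    dw = np.where(np.abs(dw_old) < eps, -lr * grad, dw)
    return dw

# Toy usage: minimize f(w) = 0.5 * w**2, whose gradient is w
w = np.array([[2.0]])
dw_old = np.zeros_like(w)
grad_old = np.zeros_like(w)
for _ in range(50):
    grad = w.copy()               # gradient of 0.5 * w**2
    dw = quickprop_step(grad, grad_old, dw_old)
    w = w + dw                    # dw is added, not subtracted
    grad_old, dw_old = grad, dw
```

On this toy quadratic the weight converges to 0 within a few iterations; comparing the sign convention and the state that is carried between steps against the question's `BP` method is a reasonable way to hunt for the divergence.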

0 answers:

There are no answers yet.