Backpropagation algorithm - computing the error derivative

Time: 2012-02-26 22:48:26

Tags: java artificial-intelligence neural-network backpropagation

When computing the error derivative, I am using the following, which works, but I am not sure why it does.

double errorDerivative = (-output * (1-output) *(desiredOutput - output));

When I remove the minus sign in front of the first output, it fails and hits the maximum epoch limit. I assumed the version without the minus sign was correct after looking at this example, http://homepages.gold.ac.uk/nikolaev/311imlti.htm, which does not use the minus operator.

double errorDerivative2 = (output * (1-output) *(desiredOutput - output));
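
For context, here is a sketch of where the sign comes from, assuming the usual squared error E = 1/2 * (desiredOutput - output)^2 and a sigmoid output unit (this is the standard derivation, not taken from the linked page):

 dE/dnet = -(desiredOutput - output) * output * (1 - output)

The minus sign appears because raising the output lowers the error whenever output < desiredOutput. Gradient descent then negates the derivative again in the weight update, deltaW = -learningRate * dE/dnet * input, so exactly one of the two formulas should carry the minus sign, never both.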

I am currently working on modifying an existing backpropagation implementation that uses stochastic gradient descent, and I want to make it use only the standard backpropagation algorithm. At the moment it looks like this.

public void applyBackpropagation(double expectedOutput[]) {

        // error check, normalize value ]0;1[
        /*for (int i = 0; i < expectedOutput.length; i++) {
            double d = expectedOutput[i];
            if (d < 0 || d > 1) {
                if (d < 0)
                    expectedOutput[i] = 0 + epsilon;
                else
                    expectedOutput[i] = 1 - epsilon;
            }
        }*/

        int i = 0;
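        // update weights for the output layer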
        for (Neuron n : outputLayer) {
            System.out.println("neuron");
            ArrayList<Connection> connections = n.getAllInConnections();
            for (Connection con : connections) {
                double output = n.getOutput();
                System.out.println("final output is "+output);
                double ai = con.leftNeuron.getOutput();
                System.out.println("ai output is "+ai);
                double desiredOutput = expectedOutput[i];

                double errorDerivative = (-output * (1-output) *(desiredOutput - output));
                double errorDerivative2 = (output * (1-output) *(desiredOutput - output));
                System.out.println("errorDerivative is "+errorDerivative);
                System.out.println("errorDerivative my one is "+(output * (1-output) *(desiredOutput - output)));
                double deltaWeight = -learningRate * errorDerivative2;
                double newWeight = con.getWeight() + deltaWeight;
                con.setDeltaWeight(deltaWeight);
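                // add a momentum term based on the previous weight change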
                con.setWeight(newWeight + momentum * con.getPrevDeltaWeight());
            }
            i++;
        }

        // update weights for the hidden layer
        for (Neuron n : hiddenLayer) {
            ArrayList<Connection> connections = n.getAllInConnections();
            for (Connection con : connections) {
                double output = n.getOutput();
                double ai = con.leftNeuron.getOutput();
                double sumKoutputs = 0;
                int j = 0;
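                // sum the backpropagated error (delta_k * w_jk) over all output neurons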
                for (Neuron out_neu : outputLayer) {
                    double wjk = out_neu.getConnection(n.id).getWeight();
                    double desiredOutput = (double) expectedOutput[j];
                    double ak = out_neu.getOutput();
                    j++;
                    sumKoutputs = sumKoutputs
                            + (-(desiredOutput - ak) * ak * (1 - ak) * wjk);
                }

                double partialDerivative = output * (1 - output) * ai * sumKoutputs;
                double deltaWeight = -learningRate * partialDerivative;
                double newWeight = con.getWeight() + deltaWeight;
                con.setDeltaWeight(deltaWeight);
                con.setWeight(newWeight + momentum * con.getPrevDeltaWeight());
            }
        }
    }

1 Answer:

Answer 0 (score: 2)

Sorry, I'm not going to check your code. No time; you'll have to come back with a more specific question, and then I can help you.

The reason errorDerivative2 works is probably that you are using a weight update rule such as

 deltaW = learningRate * errorDerivative2 * input

Normally, what you call 'errorDerivative2' is known as the delta and, for a neuron with a sigmoid transfer function, is defined as

 -output * (1 - output) * (desiredOutput - output)

together with the weight update rule

 deltaW = -learningRate * delta * input

So basically, it works for you without the minus sign on errorDerivative2 because you are applying the minus sign in another place...
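
To make the cancellation concrete, here is a minimal, self-contained sketch (hypothetical values and class name, not the poster's code) showing that the two sign conventions yield the same weight change:

public class DeltaSignDemo {
    public static void main(String[] args) {
        // hypothetical values for a single connection
        double output = 0.7;
        double desiredOutput = 1.0;
        double input = 0.5;
        double learningRate = 0.1;

        // Convention A: the delta carries the minus sign,
        // and the update rule negates it again
        double delta = -output * (1 - output) * (desiredOutput - output);
        double deltaW_A = -learningRate * delta * input;

        // Convention B: the minus sign is dropped from the derivative,
        // so the update rule must not negate it
        double errorDerivative2 = output * (1 - output) * (desiredOutput - output);
        double deltaW_B = learningRate * errorDerivative2 * input;

        // both print the same value (0.00315, up to floating-point rounding)
        System.out.println(deltaW_A + " == " + deltaW_B);
    }
}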