Neural network returns the same output for every input

Time: 2016-04-20 14:44:18

Tags: java neural-network artificial-intelligence classification

I have written a simple artificial neural network in Java as part of a project. When I start training it (using a training set I gathered), the error count per epoch quickly stabilises (at roughly 30% accuracy) and then stops improving. When testing the ANN, all outputs for any given input are exactly the same.

I am trying to output a number between 0 and 1 (0 to classify a stock as a faller, 1 to classify it as a riser; 0.4-0.6 should indicate stability).

When I add the same training data in RapidMiner Studios, it builds a proper ANN with much higher (70+%) accuracy, so I know the data set is fine. There must be some problem in the ANN logic.
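
For clarity, this is the kind of mapping from the single output value to a label that I am aiming for; the helper below is only an illustration of the 0.4/0.6 thresholds mentioned above, not code from my project:

    // Illustrative only: maps the single sigmoid output to the intended label
    // using the 0.4 / 0.6 thresholds described above.
    public static String classify(double output) {
        if (output < 0.4) {
            return "faller";
        } else if (output > 0.6) {
            return "riser";
        } else {
            return "stable";
        }
    }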

Below is the code that runs the network and tunes the weights. Any and all help is appreciated!

    public double[] Run(double[] inputs) {
        //INPUTS
        for (int i = 0; i < inputNeurons.length; i++) {
            inputNeurons[i] = inputs[i];
        }

        for (int i = 0; i < hiddenNeurons.length; i++) {
            hiddenNeurons[i] = 0;
        } //RESET THE HIDDEN NEURONS

        for (int e = 0; e < inputNeurons.length; e++) {
            for (int i = 0; i < hiddenNeurons.length; i++) {
                //Looping through each input neuron connected to each hidden neuron

                hiddenNeurons[i] += inputNeurons[e] * inputWeights[(e * hiddenNeurons.length) + i];
                //Summation (with the adding of neurons) - Done by taking the sum of each (input * connection weight)
                //The more weighting a neuron has the more "important" it is in decision making
            }
        }

        for (int j = 0; j < hiddenNeurons.length; j++) {
            hiddenNeurons[j] = 1 / (1 + Math.exp(-hiddenNeurons[j]));
            //sigmoid function transforms the output into a real number between 0 and 1
        }

        //HIDDEN
        for (int i = 0; i < outputNeurons.length; i++) {
            outputNeurons[i] = 0;
        } //RESET THE OUTPUT NEURONS

        for (int e = 0; e < hiddenNeurons.length; e++) {
            for (int i = 0; i < outputNeurons.length; i++) {
                //Looping through each hidden neuron connected to each output neuron

                outputNeurons[i] += hiddenNeurons[e] * hiddenWeights[(e * outputNeurons.length) + i];
                //Summation (with the adding of neurons) as above
            }
        }

        for (int j = 0; j < outputNeurons.length; j++) {
            outputNeurons[j] = 1 / (1 + Math.exp(-outputNeurons[j])); //sigmoid function as above
        }

        double[] outputs = new double[outputNeurons.length];
        for (int j = 0; j < outputNeurons.length; j++) {
            //Places all output neuron values into an array
            outputs[j] = outputNeurons[j];
        }
        return outputs;
    }

    public double[] CalculateErrors(double[] targetValues) {
        //Compares the given values to the actual values
        for (int k = 0; k < outputErrors.length; k++) {
            outputErrors[k] = targetValues[k] - outputNeurons[k];
        }
        return outputErrors;
    }

    public void tuneWeights() //Back Propagation
    {
        // Start from the end - From output to hidden
        for (int p = 0; p < this.hiddenNeurons.length; p++)     //For all Hidden Neurons
        {
            for (int q = 0; q < this.outputNeurons.length; q++)  //For all Output Neurons
            {
                double delta = this.outputNeurons[q] * (1 - this.outputNeurons[q]) * this.outputErrors[q];
                //DELTA is the error for the output neuron q
                this.hiddenWeights[(p * outputNeurons.length) + q] += this.learningRate * delta * this.hiddenNeurons[p];
                /*Adjust the particular weight relative to the error
                 *If the error is large, the weighting will be decreased
                 *If the error is small, the weighting will be increased
                 */
            }
        }

        // From hidden to inputs -- Same as above
        for (int i = 0; i < this.inputNeurons.length; i++)       //For all Input Neurons
        {
            for (int j = 0; j < this.hiddenNeurons.length; j++)  //For all Hidden Neurons
            {
                double delta = this.hiddenNeurons[j] * (1 - this.hiddenNeurons[j]);
                double x = 0;       //We do not have output errors here so we must use extra data from Output Neurons
                for (int k = 0; k < this.outputNeurons.length; k++) {
                    double outputDelta = this.outputNeurons[k] * (1 - this.outputNeurons[k]) * this.outputErrors[k];
                    //We calculate the output delta again
                    x = x + outputDelta * this.hiddenWeights[(j * outputNeurons.length) + k];
                    //We then calculate the error based on the hidden weights (x is used to add the error values of all weights)
                    delta = delta * x;
                }
                this.inputWeights[(i * hiddenNeurons.length) + j] += this.learningRate * delta * this.inputNeurons[i];
                //Adjust weight like above
            }
        }
    }
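
For context, here is roughly how these methods get called while training; the loop below is a simplified sketch, and `trainingInputs`, `trainingTargets` and `epochs` are placeholder names rather than the exact fields in my project:

    // Simplified sketch of the training loop; trainingInputs, trainingTargets
    // and epochs are placeholder names, not the exact fields used here.
    for (int epoch = 0; epoch < epochs; epoch++) {
        for (int n = 0; n < trainingInputs.length; n++) {
            Run(trainingInputs[n]);              // forward pass
            CalculateErrors(trainingTargets[n]); // stores target - output in outputErrors
            tuneWeights();                       // back propagation update
        }
    }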

1 Answer:

Answer 0 (score: 2):

After a long discussion, I think you may find your answer by looking at the following points:

  1. Bias is really important. In fact, one of the most popular SO questions about neural networks is about bias :) : Role of Bias in Neural Networks. A rough sketch of adding a bias is shown right after this list.
  2. You should watch your learning process. It is good to track your accuracy on a test set and a validation set, and to use an appropriate learning rate during training; a sketch of this is also shown after the list. I would suggest using a simpler data set for which you know it is easy to find the true solution (for example a triangle or a square, with 4-5 hidden units). I would also suggest playing with the following playground: http://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=4,2&seed=0.36368&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification
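
As a rough illustration of point 1, a per-neuron bias can be folded into the existing summation by starting each hidden neuron at its bias instead of 0. This is only a sketch: the `hiddenBiases` array below is hypothetical (it does not exist in your code), the output layer would need the same treatment, and the biases would need their own update step in `tuneWeights()`.

    // Hypothetical hiddenBiases array: one bias value per hidden neuron.
    for (int i = 0; i < hiddenNeurons.length; i++) {
        hiddenNeurons[i] = hiddenBiases[i]; // start from the bias instead of 0
    }
    for (int e = 0; e < inputNeurons.length; e++) {
        for (int i = 0; i < hiddenNeurons.length; i++) {
            hiddenNeurons[i] += inputNeurons[e] * inputWeights[(e * hiddenNeurons.length) + i];
        }
    }
    for (int j = 0; j < hiddenNeurons.length; j++) {
        hiddenNeurons[j] = 1 / (1 + Math.exp(-hiddenNeurons[j])); // sigmoid as before
    }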
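
And for point 2, a minimal sketch of what tracking the learning process could look like; `trainOneEpoch`, `evaluateAccuracy` and the data-set arrays are hypothetical helpers, not methods from your class:

    // Hypothetical helpers: trainOneEpoch runs forward + backprop over the
    // training set, evaluateAccuracy returns the fraction of correct outputs.
    for (int epoch = 0; epoch < epochs; epoch++) {
        trainOneEpoch(trainingInputs, trainingTargets);
        double trainAcc = evaluateAccuracy(trainingInputs, trainingTargets);
        double validAcc = evaluateAccuracy(validationInputs, validationTargets);
        System.out.printf("epoch %d: train %.3f, validation %.3f%n", epoch, trainAcc, validAcc);
        // If training accuracy stalls (as you describe), try a lower learning
        // rate or a simpler data set to isolate the cause.
    }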