I am building a test neural network and it is definitely not working correctly. My main problem is backpropagation. From my research I know that using the sigmoid function makes this easy, so I update each weight by (1 - Output)(Output)(Target - Output). The problem is: what if my output is 1 but my target is not? If the output hits 1 at some point, the weight update will always be 0... Right now I'm just trying to get the network to add the inputs from 2 input neurons, so the optimal weights should simply be 1, since the output neuron just adds its inputs. I'm sure I've messed this up in a lot of places, but here is my code:
package myneuralnet;
import java.util.ArrayList;

public class Main {
    public static void main(String[] args) {
        Double[] inputs = {1.0, 2.0};
        ArrayList<Double> answers = new ArrayList<Double>();
        answers.add(3.0);
        net myNeuralNet = new net(2, 1, answers);
        for(int i=0; i<200; i++){
            myNeuralNet.setInputs(inputs);
            myNeuralNet.start();
            myNeuralNet.backpropagation();
            myNeuralNet.printOutput();
            System.out.println("*****");
            for(int j=0; j<myNeuralNet.getOutputs().size(); j++){
                myNeuralNet.getOutputs().get(j).resetInput();
                myNeuralNet.getOutputs().get(j).resetOutput();
                myNeuralNet.getOutputs().get(j).resetNumCalled();
            }
        }
    }
}
package myneuralnet;
import java.util.ArrayList;

public class net {
    private ArrayList<neuron> inputLayer;
    private ArrayList<neuron> outputLayer;
    private ArrayList<Double> answers;

    public net(Integer numInput, Integer numOut, ArrayList<Double> answers){
        inputLayer = new ArrayList<neuron>();
        outputLayer = new ArrayList<neuron>();
        this.answers = answers;
        for(int i=0; i<numOut; i++){
            outputLayer.add(new neuron(true));
        }
        for(int i=0; i<numInput; i++){
            ArrayList<Double> randomWeights = createRandomWeights(numInput);
            inputLayer.add(new neuron(outputLayer, randomWeights, -100.00, true));
        }
        for(int i=0; i<numOut; i++){
            outputLayer.get(i).setBackConn(inputLayer);
        }
    }

    public ArrayList<neuron> getOutputs(){
        return outputLayer;
    }

    public void backpropagation(){
        for(int i=0; i<answers.size(); i++){
            neuron iOut = outputLayer.get(i);
            ArrayList<neuron> iOutBack = iOut.getBackConn();
            Double iSigDeriv = (1-iOut.getOutput())*iOut.getOutput();
            Double iError = (answers.get(i) - iOut.getOutput());
            System.out.println("Answer: "+answers.get(i) + " iOut: "+iOut.getOutput()+" Error: "+iError+" Sigmoid: "+iSigDeriv);
            for(int j=0; j<iOutBack.size(); j++){
                neuron jNeuron = iOutBack.get(j);
                Double ijWeight = jNeuron.getWeight(i);
                System.out.println("ijWeight: "+ijWeight);
                System.out.println("jNeuronOut: "+jNeuron.getOutput());
                jNeuron.setWeight(i, ijWeight+(iSigDeriv*iError*jNeuron.getOutput()));
            }
        }
        for(int i=0; i<inputLayer.size(); i++){
            inputLayer.get(i).resetInput();
            inputLayer.get(i).resetOutput();
        }
    }

    public ArrayList<Double> createRandomWeights(Integer size){
        ArrayList<Double> iWeight = new ArrayList<Double>();
        for(int i=0; i<size; i++){
            Double randNum = (2*Math.random())-1;
            iWeight.add(randNum);
        }
        return iWeight;
    }

    public void setInputs(Double[] is){
        for(int i=0; i<is.length; i++){
            inputLayer.get(i).setInput(is[i]);
        }
        for(int i=0; i<outputLayer.size(); i++){
            outputLayer.get(i).resetInput();
        }
    }

    public void start(){
        for(int i=0; i<inputLayer.size(); i++){
            inputLayer.get(i).fire();
        }
    }

    public void printOutput(){
        for(int i=0; i<outputLayer.size(); i++){
            System.out.println(outputLayer.get(i).getOutput().toString());
        }
    }
}
package myneuralnet;
import java.util.ArrayList;

public class neuron {
    private ArrayList<neuron> connections;
    private ArrayList<neuron> backconns;
    private ArrayList<Double> weights;
    private Double threshold;
    private Double input;
    private Boolean isOutput = false;
    private Boolean isInput = false;
    private Double totalSignal;
    private Integer numCalled;
    private Double myOutput;

    public neuron(ArrayList<neuron> conns, ArrayList<Double> weights, Double threshold){
        this.connections = conns;
        this.weights = weights;
        this.threshold = threshold;
        this.totalSignal = 0.00;
        this.numCalled = 0;
        this.backconns = new ArrayList<neuron>();
        this.input = 0.00;
    }

    public neuron(ArrayList<neuron> conns, ArrayList<Double> weights, Double threshold, Boolean isin){
        this.connections = conns;
        this.weights = weights;
        this.threshold = threshold;
        this.totalSignal = 0.00;
        this.numCalled = 0;
        this.backconns = new ArrayList<neuron>();
        this.input = 0.00;
        this.isInput = isin;
    }

    public neuron(Boolean tf){
        this.connections = new ArrayList<neuron>();
        this.weights = new ArrayList<Double>();
        this.threshold = 0.00;
        this.totalSignal = 0.00;
        this.numCalled = 0;
        this.isOutput = tf;
        this.backconns = new ArrayList<neuron>();
        this.input = 0.00;
    }

    public void setInput(Double input){
        this.input = input;
    }

    public void setOut(Boolean tf){
        this.isOutput = tf;
    }

    public void resetNumCalled(){
        numCalled = 0;
    }

    public void setBackConn(ArrayList<neuron> backs){
        this.backconns = backs;
    }

    public Double getOutput(){
        return myOutput;
    }

    public Double getInput(){
        return totalSignal;
    }

    public Double getRealInput(){
        return input;
    }

    public ArrayList<Double> getWeights(){
        return weights;
    }

    public ArrayList<neuron> getBackConn(){
        return backconns;
    }

    public Double getWeight(Integer i){
        return weights.get(i);
    }

    public void setWeight(Integer i, Double d){
        weights.set(i, d);
    }

    public void setOutput(Double d){
        myOutput = d;
    }

    public void activation(Double myInput){
        numCalled++;
        totalSignal += myInput;
        if(numCalled==backconns.size() && isOutput){
            System.out.println("Total Sig: "+totalSignal);
            setInput(totalSignal);
            setOutput(totalSignal);
        }
    }

    public void activation(){
        Double activationValue = 1 / (1 + Math.exp(input));
        setInput(activationValue);
        fire();
    }

    public void fire(){
        for(int i=0; i<connections.size(); i++){
            Double iWeight = weights.get(i);
            neuron iConn = connections.get(i);
            myOutput = (1/(1+(Math.exp(-input))))*iWeight;
            iConn.activation(myOutput);
        }
    }

    public void resetInput(){
        input = 0.00;
        totalSignal = 0.00;
    }

    public void resetOutput(){
        myOutput = 0.00;
    }
}
Okay, that is a lot of code, so let me explain. The net is simple right now, just an input layer and an output layer --- I want to add a hidden layer later, but I'm taking baby steps for now. Each layer is an ArrayList of neurons. The input neurons are loaded with the inputs, a 1 and a 2 in this case. These neurons fire, which computes the sigmoid of the input and passes that to the output neuron, which sums its incoming values and stores the total. The net then backpropagates by taking (answer - output)(output)(1 - output)(output of the specific input neuron) and updating the weights accordingly. A lot of the time it cycles through and I get infinity, which seems to correlate with negative weights or the sigmoid. When that doesn't happen it converges to 1, and since (1 - an output of 1) is 0, my weights stop updating.
The numCalled and totalSignal values are just there so the algorithm waits for all of a neuron's inputs before continuing. I know I'm doing this in an odd way, but the neuron class has an ArrayList of neurons called connections to hold the neurons it is forward-connected to, and another ArrayList called backconns to hold the backward connections. I believe I'm updating the correct weights, since I get all the back connections between neurons i and j, but for each neuron j (in the layer above) I only pull weight i. I apologize for the mess --- I've been trying lots of things for hours now and still can't figure it out. Any help is greatly appreciated!
Answer 0 (score: 1)
Some of the best textbooks on neural networks in general are by Chris Bishop and Simon Haykin. Try reading the chapter on backprop and understanding why the terms in the weight update rule are the way they are. The reason I'm asking you to do that is that backprop is more subtle than it seems at first. Things change a bit if you use a linear activation function for the output layer (think about why you might want to do that; hint: post-processing), or if you add a hidden layer. It became much clearer for me when I actually read the book.
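To make the hint about the output activation concrete, here is a tiny sketch (not the poster's code and not taken from either book; all names are invented for illustration). With a linear output unit the derivative factor output*(1 - output) drops out of the delta, so the update does not vanish when the output sits at or near 1:

// Hypothetical illustration of the two delta rules discussed above.
public class DeltaRuleSketch {
    public static void main(String[] args) {
        double learningRate = 0.1;
        double input = 2.0, weight = 0.3, target = 3.0;

        // Linear output unit: delta = (target - output), no derivative factor,
        // so the update never vanishes just because the output saturates.
        double output = input * weight;
        double linearDelta = target - output;
        weight += learningRate * linearDelta * input;

        // Sigmoid output unit: delta includes output * (1 - output), which goes
        // to 0 as the output approaches 0 or 1 (the saturation problem in the question).
        double sigOut = 1.0 / (1.0 + Math.exp(-input * weight));
        double sigmoidDelta = (target - sigOut) * sigOut * (1.0 - sigOut);
        weight += learningRate * sigmoidDelta * input;

        System.out.println("weight after the two example updates: " + weight);
    }
}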
Answer 1 (score: 0)
You might want to compare your code against this single-layer perceptron.
I think there is a bug somewhere in your backprop algorithm. Also, try replacing the sigmoid with a square wave (a step function).
http://web.archive.org/web/20101228185321/http://en.literateprograms.org/Perceptron_%28Java%29
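In case it helps to see the idea in one place, here is a minimal single-layer perceptron sketch using a step ("square wave") activation; it is not the code from the link above, and all names are invented for illustration:

// Rough single-layer perceptron sketch with a step activation instead of a sigmoid.
public class PerceptronSketch {
    public static void main(String[] args) {
        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] targets = {0, 0, 0, 1};            // learn logical AND
        double[] weights = {0.0, 0.0};
        double bias = 0.0, learningRate = 0.1;

        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                double sum = bias;
                for (int j = 0; j < weights.length; j++) {
                    sum += inputs[i][j] * weights[j];
                }
                double output = sum >= 0 ? 1 : 0;   // step ("square wave") activation
                double error = targets[i] - output; // perceptron learning rule
                for (int j = 0; j < weights.length; j++) {
                    weights[j] += learningRate * error * inputs[i][j];
                }
                bias += learningRate * error;
            }
        }
        System.out.println("w0=" + weights[0] + " w1=" + weights[1] + " bias=" + bias);
    }
}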
Answer 2 (score: 0)
What if my output is 1 but my target is not?
The sigmoid function 1/(1 + Math.exp(-x)) never equals 1. The limit as x approaches infinity is 1, but that is a horizontal asymptote, so the function never actually touches 1. Therefore, if all of your output values are calculated with this expression, the output will never be 1, and (1 - output) should never equal 0.
I think your problem is in how you calculate the output. For a neural network, each neuron's output is typically sigmoid(dot product of the inputs and weights). In other words, value = input1 * weight1 + input2 * weight2 + ... (for each weight of the neuron) + biasWeight. Then that neuron's output = 1 / (1 + Math.exp(-value)). If you calculate it this way, the output will never equal 1.
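As a minimal sketch of that calculation (illustrative names only, not the poster's code):

// Hypothetical example: a neuron's output as sigmoid(dot product + bias).
public class SigmoidNeuronSketch {
    static double neuronOutput(double[] inputs, double[] weights, double biasWeight) {
        double value = biasWeight;
        for (int i = 0; i < inputs.length; i++) {
            value += inputs[i] * weights[i];        // dot product of inputs and weights
        }
        return 1.0 / (1.0 + Math.exp(-value));      // sigmoid squashes into (0, 1), never exactly 1
    }

    public static void main(String[] args) {
        double[] inputs = {1.0, 2.0};
        double[] weights = {0.5, 0.5};
        System.out.println(neuronOutput(inputs, weights, 0.0)); // about 0.8176
    }
}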