How do I implement a neural network with a hidden layer?

Asked: 2012-06-05 03:22:56

Tags: matlab machine-learning

I'm trying to train a 3-input, 1-output neural network (with an input layer, one hidden layer, and an output layer) in MATLAB that classifies quadratics. I'm trying to implement the feed-forward phase $x_i^{out} = f(s_i)$ with $s_i = \sum_j w_{ij} x_j^{in}$, the back-propagation $\delta_j^{in} = f'(s_j) \sum_i \delta_i^{out} w_{ij}$, and the weight update $w_{ij}^{new} = w_{ij}^{old} - \epsilon \, \delta_i^{out} x_j^{in}$, where $x$ is the input vector, $w$ are the weights, and $\epsilon$ is the learning rate.
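(For reference, a minimal MATLAB sketch of one such training step on a single example, assuming $f = \tanh$ so that $f'(s) = 1 - \tanh^2(s)$; all names and sizes here are illustrative, not taken from the code below:)

f = @(s) tanh(s);                     % activation function
fprime = @(s) 1 - tanh(s).^2;         % its derivative, used in back-propagation
epsilon = 0.01;                       % learning rate
x = rand(4,1);                        % x^in: one input vector (illustrative size)
w1 = rand(4,7); w2 = rand(7,1);       % input->hidden and hidden->output weights
t = 1;                                % teacher output for this example
s1 = w1'*x;  h = f(s1);               % feed-forward: s_i = sum_j w_ij x_j^in
s2 = w2'*h;  y = f(s2);               % network output x^out = f(s)
deltaOut = y - t;                     % error signal at the output
deltaHid = fprime(s1).*(w2*deltaOut); % delta_j = f'(s_j) sum_i delta_i^out w_ij
w2 = w2 - epsilon*h*deltaOut';        % w_ij^new = w_ij^old - epsilon delta_i x_j
w1 = w1 - epsilon*x*deltaHid';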

I'm running into trouble writing the hidden layer and adding the activation function $f(s) = \tanh(s)$: the error in the network's output doesn't seem to decrease. Can someone point out what I'm implementing wrongly?

The inputs are the real coefficients of a quadratic $ax^2 + bx + c = 0$, and the output should be positive if the quadratic has two real roots and negative if it does not.
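(A quick illustrative check of that teacher signal, with made-up coefficients:)

a = 1; b = 3; c = 1;      % x^2 + 3x + 1: discriminant b^2 - 4ac = 5 > 0
sign(b^2 - 4*a*c)         % +1: two real roots
a = 1; b = 0; c = 1;      % x^2 + 1: discriminant -4 < 0
sign(b^2 - 4*a*c)         % -1: no real roots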

nTrain = 100; % training set
nOutput = 1;
nSecondLayer = 7; % size of hidden layer (arbitrary)
trainExamples = rand(4,nTrain); % independent random set of examples
trainExamples(4,:) = ones(1,nTrain);  % set the dummy input to be 1

T = sign(trainExamples(2,:).^2-4*trainExamples(1,:).*trainExamples(3,:)); % The teacher provides this for every example
%The student neuron starts with random weights
w1 = rand(4,nSecondLayer);
w2 = rand(nSecondLayer,nOutput);
nepochs=0;
nwrong = 1;
S1(nSecondLayer,nTrain) = 0;
S2(nOutput,nTrain) = 0; 

while( nwrong>1e-2 )  % while the total error is more than some small number close to zero
    for i=1:nTrain
        x = trainExamples(:,i);
        S2(:,i) = w2'*S1(:,i);
        deltak = tanh(S2(:,i)) - T(:,i); % back propagate
        deltaj = (1-tanh(S2(:,i)).^2).*(w2*deltak); % back propagate      
        w2 = w2 - tanh(S1(:,i))*deltak'; % updating
        w1 = w1- x*deltaj'; % updating  
    end
   output = tanh(w2'*tanh(w1'*trainExamples));
   dOutput = output-T;
   nwrong = sum(abs(dOutput));
   disp(nwrong)
   nepochs = nepochs+1          
end
nepochs

Thanks

1 answer:

Answer 0 (score: 2):

After a few days of banging my head against the wall, I found a small typo. Compared with the code in the question, the version below computes the hidden-layer activations S1 in the forward pass, appends a bias unit to the hidden layer, and actually scales the updates by the learning rate epsilon. Here is a working solution:

clear
% Set up parameters
nInput = 4; % number of nodes in input
nOutput = 1; % number of nodes in output
nHiddenLayer = 7; % number of nodes in the hidden layer
nTrain = 1000; % size of training set
epsilon = 0.01; % learning rate


% Set up the inputs: random coefficients between -1 and 1
trainExamples = 2*rand(nInput,nTrain)-1;
trainExamples(nInput,:) = ones(1,nTrain);  %set the last input to be 1

% Set up the student neurons for both hidden and the output layers
S1(nHiddenLayer,nTrain) = 0;
S2(nOutput,nTrain) = 0;

% The student neuron starts with random weights from both input and the hidden layers
w1 = rand(nInput,nHiddenLayer);
w2 = rand(nHiddenLayer+1,nOutput);

% Calculate the teacher outputs according to the quadratic formula
T = sign(trainExamples(2,:).^2-4*trainExamples(1,:).*trainExamples(3,:));


% Initialise values for looping
nEpochs = 0;
nWrong = nTrain*0.01;
Wrong = [];
Epoch = [];

while(nWrong >= (nTrain*0.01)) % as long as more than 1% of outputs are wrong
    for i=1:nTrain
        x = trainExamples(:,i);
        S1(1:nHiddenLayer,i) = w1'*x;
        S2(:,i) = w2'*[tanh(S1(:,i));1];
        delta1 = tanh(S2(:,i)) - T(:,i); % back propagate
        delta2 = (1-tanh(S1(:,i)).^2).*(w2(1:nHiddenLayer,:)*delta1); % back propagate       
        w1 = w1 - epsilon*x*delta2'; % update
        w2 = w2 - epsilon*[tanh(S1(:,i));1]*delta1'; % update
    end

    outputNN = sign(tanh(S2));
    delta = outputNN - T; % difference between student and teacher
    nWrong = sum(abs(delta/2));
    nEpochs = nEpochs + 1;
    Wrong = [Wrong nWrong];
    Epoch = [Epoch nEpochs];
end
plot(Epoch,Wrong);
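As a follow-up, one way to sanity-check the trained network is to evaluate it on a fresh test set; a minimal sketch, assuming w1, w2, nInput and nHiddenLayer are still in the workspace from the script above:

nTest = 1000;
testExamples = 2*rand(nInput,nTest)-1;            % fresh random coefficients
testExamples(nInput,:) = ones(1,nTest);           % dummy bias input fixed at 1
Ttest = sign(testExamples(2,:).^2 - 4*testExamples(1,:).*testExamples(3,:));
hidden = [tanh(w1'*testExamples); ones(1,nTest)]; % hidden activations + bias unit
outTest = sign(tanh(w2'*hidden));                 % network's classifications
testError = sum(outTest ~= Ttest)/nTest           % fraction misclassified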