Neural network regression in MATLAB

Date: 2018-07-11 09:26:09

Tags: matlab machine-learning neural-network regression backpropagation

I have implemented three functions for neural network regression:

1) A forward-propagation function that, given the training inputs and the network structure, computes the predicted outputs

function [y_predicted] = forwardProp(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch)
% Propagate every training sample through the network, one column at a time.
for i = 1:size(Inputs{1},2)
    Activation = (Inputs{1}(:,i))';               % row vector: current sample
    % Sigmoid layers; the last layer is skipped here when RegressionSwitch == 1.
    for j = 2:NumberOfLayers - RegressionSwitch
        Activation = 1./(1+exp(-(Activation*Theta{j-1} + Baias{j-1})));
    end
    if RegressionSwitch == 1
        y_predicted(:,i) = Activation*Theta{end} + Baias{end};   % linear output layer
    else
        y_predicted(:,i) = Activation;
    end
end
end
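For reference, the per-sample loop collapses into whole-batch matrix operations. A minimal vectorized sketch under the same Theta/Baias layout (forwardPropVec is a hypothetical name, and the broadcast bias addition assumes MATLAB R2016b+ implicit expansion):

function [y_predicted] = forwardPropVec(Theta, Baias, Inputs, NumberOfLayers, RegressionSwitch)
% Rows of Activation are samples, columns are node activations.
Activation = Inputs{1}';                                  % samples x input nodes
for j = 2:NumberOfLayers - RegressionSwitch
    % Implicit expansion (R2016b+) broadcasts the 1-by-nodes bias row.
    Activation = 1./(1 + exp(-(Activation*Theta{j-1} + Baias{j-1})));
end
if RegressionSwitch == 1
    y_predicted = (Activation*Theta{end} + Baias{end})';  % linear output layer
else
    y_predicted = Activation';
end
end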

2) A cost function that, given the predicted and desired outputs, computes the cost of the network

function [Cost] = costFunction(y_predicted, y, Theta, Baias, Lambda)
% Mean squared error over all outputs and samples.
Cost = 0;
for j = 1:size(y,2)
    for i = 1:size(y,1)
        Cost = Cost + (((y(i,j) - y_predicted(i,j))^2)/size(y,2));
    end
end
% L2 regularization over every weight and bias.
Reg = 0;
for i = 1:size(Theta, 2)
    for j = 1:size(Theta{i}, 1)
        for k = 1:size(Theta{i}, 2)
            Reg = Reg + (Theta{i}(j,k))^2;
        end
    end
end
for i = 1:size(Baias, 2)
    for j = 1:length(Baias{i})
        Reg = Reg + (Baias{i}(j))^2;
    end
end
Cost = Cost + (Lambda/(2*size(y,2)))*Reg;
end
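The same cost can be computed without the nested loops. A sketch under the same conventions (costFunctionVec is a hypothetical name):

function [Cost] = costFunctionVec(y_predicted, y, Theta, Baias, Lambda)
m = size(y, 2);                                  % number of training samples
Cost = sum(sum((y - y_predicted).^2))/m;         % squared-error term
Reg = 0;
for i = 1:numel(Theta)
    Reg = Reg + sum(Theta{i}(:).^2) + sum(Baias{i}(:).^2);
end
Cost = Cost + (Lambda/(2*m))*Reg;                % L2 penalty on weights and biases
end

Note that, as in the original, the biases are penalized as well; more commonly the L2 term is applied to the weights only.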

3) A back-propagation function that computes the partial derivative of the cost function with respect to each weight in the network (in practice a numerical gradient, via central finite differences)

function [dTheta, dBaias] = Deltas(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch, Epsilon, Lambda, y)
% Central-difference approximation of dCost/dTheta, one weight at a time.
for i = 1:size(Theta,2)
    for j = 1:size(Theta{i},1)
        for k = 1:size(Theta{i},2)
            dTp = Theta;
            dTm = Theta;
            dTp{i}(j,k) = dTp{i}(j,k) + Epsilon;
            dTm{i}(j,k) = dTm{i}(j,k) - Epsilon;
            y_predicted_p = forwardProp(dTp,Baias,Inputs,NumberOfLayers,RegressionSwitch);
            y_predicted_m = forwardProp(dTm,Baias,Inputs,NumberOfLayers,RegressionSwitch);
            Cost_p = costFunction(y_predicted_p, y, dTp, Baias, Lambda);
            Cost_m = costFunction(y_predicted_m, y, dTm, Baias, Lambda);
            dTheta{i}(j,k) = (Cost_p - Cost_m)/(2*Epsilon);
        end
    end
end
% Central-difference approximation of dCost/dBaias, one bias at a time.
for i = 1:size(Baias,2)
    for j = 1:length(Baias{i})
        dBp = Baias;
        dBm = Baias;
        dBp{i}(j) = dBp{i}(j) + Epsilon;   % perturb the bias copies dBp/dBm here,
        dBm{i}(j) = dBm{i}(j) - Epsilon;   % not the dTp/dTm left over from the weight loop
        y_predicted_p = forwardProp(Theta,dBp,Inputs,NumberOfLayers,RegressionSwitch);
        y_predicted_m = forwardProp(Theta,dBm,Inputs,NumberOfLayers,RegressionSwitch);
        Cost_p = costFunction(y_predicted_p, y, Theta, dBp, Lambda);
        Cost_m = costFunction(y_predicted_m, y, Theta, dBm, Lambda);
        dBaias{i}(j) = (Cost_p - Cost_m)/(2*Epsilon);
    end
end
end
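Since Deltas is a numerical gradient rather than true back-propagation, it can be sanity-checked in isolation on a tiny network before being trusted on the full problem. A sketch with hypothetical test data:

% Fit a 2-3-1 network to a trivial target with Lambda = 0.
rng(0);                                          % hypothetical seed, for repeatability
TinyInputs{1} = rand(2, 5);                      % 2 input nodes, 5 samples
ty = sum(TinyInputs{1}, 1);                      % target: sum of the two inputs
TinyTheta = {rand(2,3), rand(3,1)};
TinyBaias = {rand(1,3), rand(1,1)};
for step = 1:500
    [dT, dB] = Deltas(TinyTheta, TinyBaias, TinyInputs, 3, 1, 1e-5, 0, ty);
    for k = 1:numel(TinyTheta)
        TinyTheta{k} = TinyTheta{k} - 0.05*dT{k};
        TinyBaias{k} = TinyBaias{k} - 0.05*dB{k};
    end
end
yp = forwardProp(TinyTheta, TinyBaias, TinyInputs, 3, 1);
disp(costFunction(yp, ty, TinyTheta, TinyBaias, 0))   % should approach zero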

I train the network on data generated from an exact mathematical function of the inputs.

Gradient descent seems to work, with the cost decreasing on every iteration, but when I test the trained network the regression is very poor.

These functions are not meant to be efficient, but they should work, so I am frustrated to see that they don't... The main script and the data seem fine, so the problem should be in here. Can you help me spot it?

Here is the "main" script:

clear;
clc;
Nodes_X = 5;
Training_Data = 1000;
x = rand(Nodes_X, Training_Data)*3;
y = zeros(2,Training_Data);
for i = 1:Training_Data
    y(1,i) = (x(1,i)^2)+x(2,i)-x(3,i)+2*x(4,i)/x(5,i);
    y(2,i) = (x(5,i)^2)+x(2,i)-x(3,i)+2*x(4,i)/x(1,i);
end
vx = rand(Nodes_X, Training_Data)*3;
vy = zeros(2,Training_Data);
for i = 1:Training_Data
    vy(1,i) = (vx(1,i)^2)+vx(2,i)-vx(3,i)+2*vx(4,i)/vx(5,i);
    vy(2,i) = (vx(5,i)^2)+vx(2,i)-vx(3,i)+2*vx(4,i)/vx(1,i);
end
%%%%%%%%%%%%%%%%%%%%%%ASSIGN NODES TO EACH LAYER%%%%%%%%%%%%%%%%%%%%%%%%%%%
NumberOfLayers = 4;
Nodes(1) = 5;
Nodes(2) = 10;
Nodes(3) = 10;
Nodes(4) = 2;
if length(Nodes) ~= NumberOfLayers || (Nodes(1)) ~= size(x, 1)
    WARNING = msgbox('Nodes assigned incorrectly!');
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%INITIALIZATION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for i = 1:NumberOfLayers-1
    Theta{i} = rand(Nodes(i),Nodes(i+1));
    Baias{i} = rand(1,Nodes(i+1));
end
Inputs{1} = x;
Outputs{1} = y;
RegressionSwitch = 1;
Lambda = 10;
Epsilon = 0.00001;
Alpha = 0.01;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%TRAINING%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Epoch = 0;
figure;
hold on;
while Epoch <=20
%%%%%%%%%%%%%%%%%%%%FORWARD PROPAGATION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
y_predicted = forwardProp(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%COST%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Cost = costFunction(y_predicted, y, Theta, Baias, Lambda);
scatter(Epoch,Cost);
pause(0.01);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%BACK PROPAGATION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[dTheta, dBaias] = Deltas(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch, Epsilon, Lambda, y);
%%%%%%%%%%%%%%%GRADIENT DESCENT%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for i = 1:size(Theta,2)
    Theta{i} = Theta{i} - Alpha*dTheta{i};
end
for i = 1:size(Baias,2)
    Baias{i} = Baias{i} - Alpha*dBaias{i};
end
  Epoch = Epoch + 1;
end
hold off;
V_Inputs{1} = vx;
V_y_predicted = forwardProp(Theta,Baias,V_Inputs,NumberOfLayers,RegressionSwitch);
figure;
hold on;
for i = 1:size(vy,2)
    scatter(vy(1,i),V_y_predicted(1,i));
    pause(0.01);
end
hold off;
figure;
hold on;
for i = 1:size(vy,2)
    scatter(vy(2,i),V_y_predicted(2,i));
    pause(0.01);
end
hold off;
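One property of this data worth noting: the 2*x(4,i)/x(5,i) term is unbounded as x(5,i) approaches 0, so a few targets can be orders of magnitude larger than the rest, which makes a squared-error fit through saturating sigmoid layers difficult. A hedged sketch of standardizing inputs and targets before training, and undoing it at prediction time (this preprocessing is an assumption, not part of the original script):

% Standardize each row to zero mean and unit variance (assumes R2016b+ implicit expansion).
mu_x = mean(x, 2);  sigma_x = std(x, 0, 2);
mu_y = mean(y, 2);  sigma_y = std(y, 0, 2);
Inputs{1} = (x - mu_x)./sigma_x;
Outputs{1} = (y - mu_y)./sigma_y;                % train against these targets
% ...train as above on the standardized data, then map predictions back:
V_Inputs{1} = (vx - mu_x)./sigma_x;
V_y_predicted = forwardProp(Theta, Baias, V_Inputs, NumberOfLayers, RegressionSwitch);
V_y_predicted = V_y_predicted.*sigma_y + mu_y;   % back to the original target scale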

0 Answers