NARX model for nonlinear time-series neural networks in Python

Asked: 2016-11-27 21:32:13

Tags: python matlab neural-network conv-neural-network recurrent-neural-network

I am trying to create a NARX (nonlinear autoregressive with exogenous inputs) neural network time-series model.

My inputs to the algorithm are:

1 - a 2D matrix of (x, y) coordinates

2 - another 2D matrix of (x, y) coordinates

and the target is the actual, exact values in a third 2D matrix of (x, y) coordinates.

First I did some research and simulated this network in MATLAB, where it works well; I will show the MATLAB code below.

However, I want to implement this NARX model in Python. I searched for the NARX model algorithm but did not find what I was looking for, so:

1 - Can anyone point me to reference websites, books, or video series?

2 - Or show me the right way to search for this specific task?

3 - Or give me the steps to write Python code equivalent to the MATLAB NARX source code and functions?

Here is the MATLAB code:

    % Solve an Autoregression Problem with External Input with a NARX   Neural Network              
    % Script generated by NTSTOOL
    % Created Wed Nov 09 20:28:50 EET 2016
    %
    % This script assumes these variables are defined:
    %
    %   input  - input time series.
    %   output - feedback time series.

    inputSeries = tonndata(input,true,false);
    targetSeries = tonndata(output,true,false);

    % Create a Nonlinear Autoregressive Network with External Input
    inputDelays = 1:2;
    feedbackDelays = 1:2;
    hiddenLayerSize = 10;
    net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);

    % Choose Input and Feedback Pre/Post-Processing Functions
    % Settings for feedback input are automatically applied to feedback output
    % For a list of all processing functions type: help nnprocess
    % Customize input parameters at: net.inputs{i}.processParam
    % Customize output parameters at: net.outputs{i}.processParam
    net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
    net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};

    % Prepare the Data for Training and Simulation
    % The function PREPARETS prepares timeseries data for a particular network,
    % shifting time by the minimum amount to fill input states and layer states.
    % Using PREPARETS allows you to keep your original time series data unchanged, while
    % easily customizing it for networks with differing numbers of delays, with
    % open loop or closed loop feedback modes.
    [inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);

    % Setup Division of Data for Training, Validation, Testing
    % The function DIVIDERAND randomly assigns target values to training,
    % validation and test sets during training.
    % For a list of all data division functions type: help nndivide
    net.divideFcn = 'dividerand';  % Divide data randomly
    % The property DIVIDEMODE set to TIMESTEP means that targets are divided
    % into training, validation and test sets according to timesteps.
    % For a list of data division modes type: help nntype_data_division_mode
    net.divideMode = 'value';  % Divide up every value
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;

    % Choose a Training Function
    % For a list of all training functions type: help nntrain
    % Customize training parameters at: net.trainParam
    net.trainFcn = 'trainlm';  % Levenberg-Marquardt

    % Choose a Performance Function
    % For a list of all performance functions type: help nnperformance
    % Customize performance parameters at: net.performParam
    net.performFcn = 'mse';  % Mean squared error

    % Choose Plot Functions
    % For a list of all plot functions type: help nnplot
    % Customize plot parameters at: net.plotParam
    net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
        'ploterrcorr', 'plotinerrcorr'};

    % Train the Network
    [net,tr] = train(net,inputs,targets,inputStates,layerStates);

    % Test the Network
    outputs = net(inputs,inputStates,layerStates);
    errors = gsubtract(targets,outputs);
    performance = perform(net,targets,outputs)

    % Recalculate Training, Validation and Test Performance
    trainTargets = gmultiply(targets,tr.trainMask);
    valTargets = gmultiply(targets,tr.valMask);
    testTargets = gmultiply(targets,tr.testMask);
    trainPerformance = perform(net,trainTargets,outputs)
    valPerformance = perform(net,valTargets,outputs)
    testPerformance = perform(net,testTargets,outputs)

    % View the Network
    view(net)

    % Plots
    % Uncomment these lines to enable various plots.
    %figure, plotperform(tr)
    %figure, plottrainstate(tr)
    %figure, plotregression(targets,outputs)
    %figure, plotresponse(targets,outputs)
    %figure, ploterrcorr(errors)
    %figure, plotinerrcorr(inputs,errors)

    % Closed Loop Network
    % Use this network to do multi-step prediction.
    % The function CLOSELOOP replaces the feedback input with a direct
    % connection from the output layer.
    netc = closeloop(net);
    netc.name = [net.name ' - Closed Loop'];
    view(netc)
    [xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
    yc = netc(xc,xic,aic);
    closedLoopPerformance = perform(netc,tc,yc)

    % Early Prediction Network
    % For some applications it helps to get the prediction a timestep early.
    % The original network returns predicted y(t+1) at the same time it is given y(t+1).
    % For some applications such as decision making, it would help to have predicted
    % y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
    % The network can be made to return its output a timestep early by removing one delay
    % so that its minimal tap delay is now 0 instead of 1.  The new network returns the
    % same outputs as the original network, but outputs are shifted left one timestep.
    nets = removedelay(net);
    nets.name = [net.name ' - Predict One Step Ahead'];
    view(nets)
    [xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
    ys = nets(xs,xis,ais);
    earlyPredictPerformance = perform(nets,ts,ys)
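For the Python side of the question, the data preparation that MATLAB's `preparets` performs can be reproduced by hand. The sketch below is a hypothetical helper (not from any library): with `inputDelays = 1:2` and `feedbackDelays = 1:2`, each training row is `[x(t-1), x(t-2), y(t-1), y(t-2)]` and the target is `y(t)`, shown here on the first input/target series from the question:

```python
def make_narx_dataset(x, y, input_delays=(1, 2), feedback_delays=(1, 2)):
    """Build NARX regressor rows and targets from two equal-length series.

    Mirrors what preparets does for narxnet: shifts time by the largest
    delay so every row has a full set of delayed inputs and feedbacks.
    """
    start = max(max(input_delays), max(feedback_delays))
    rows, targets = [], []
    for t in range(start, len(y)):
        row = [x[t - d] for d in input_delays] + [y[t - d] for d in feedback_delays]
        rows.append(row)
        targets.append(y[t])
    return rows, targets

# First input series (x coordinates) and target series (y coordinates)
x = [1, 4, 7, 9, 11, 17, 14, 16, 18, 19]
y = [1, 2, 4, 6, 7, 8, 10, 10, 13, 14]
rows, targets = make_narx_dataset(x, y)
print(rows[0], targets[0])  # -> [4, 1, 2, 1] 4
```

Once the rows are in this form, any regression model (a small feedforward network, for instance) can be trained on them, which is exactly what `narxnet` does internally in open-loop mode.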

Here are the inputs.

The first x coordinates:

1 4 7 9 11 17 14 16 18 19

The first y coordinates:

1 2 4 6 7 8 10 10 13 14

The other x coordinates:

1 7 10 13 16 18 19 23 24 25

The other y coordinates:

1 5 7 9 12 14 16 17 19 20

And here is the target.

The actual x coordinates:

1 4 5 8 9 15 17 18 20 22

The actual y coordinates:

1 1 4 7 8 10 13 14 18 20

The results are good enough compared with the targets, apart from a large error on a couple of points across the two inputs; by changing the number of neurons we can improve this output:

[5.00163468043085;3.99820942369434]

[8.00059395052246;6.99872447652641] 

[11.5625431537178;8.00040094120297] 

[14.9982223917152;9.24359668634943] 

[19.3511330333522;13.0001065644369] 

[18.4627579643821;13.9999624796494] 

[20.0004073095041;17.9997197490528] 

[22.0004822590849;19.9997852867243]

I hope this is clear enough.

Thanks in advance.

1 Answer:

Answer 0 (score: 1)

PyNeurGen is a possible solution to your problem. It is a Python library that supports a variety of network architectures.

The library also contains a feedforward network demo.

When using a NARX net, you can use the following definition: NARX Net PyNeurGen
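Independently of PyNeurGen, the overall workflow that `narxnet` automates can be sketched library-free. The example below is only an illustration: it fits a *linear* model by least squares where `narxnet` would train a nonlinear hidden layer, and then runs the closed-loop (multi-step) prediction that `closeloop` performs by feeding the model's own outputs back in place of measured targets. All helper names here are made up for the sketch:

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_least_squares(rows, targets):
    """Fit weights (last entry is the bias) via the normal equations."""
    X = [row + [1.0] for row in rows]  # append a bias column of ones
    n = len(X[0])
    AtA = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Atb = [sum(r[i] * t for r, t in zip(X, targets)) for i in range(n)]
    return solve(AtA, Atb)

def predict(w, row):
    return sum(wi * xi for wi, xi in zip(w, row + [1.0]))

def closed_loop_forecast(w, x, y_known, steps):
    """Closed loop: delayed targets are replaced by the model's own predictions."""
    y = [float(v) for v in y_known]
    for _ in range(steps):
        t = len(y)
        row = [float(x[t - 1]), float(x[t - 2]), y[t - 1], y[t - 2]]
        y.append(predict(w, row))
    return y[len(y_known):]

# Fit on the question's first series: row = [x(t-1), x(t-2), y(t-1), y(t-2)]
x = [1, 4, 7, 9, 11, 17, 14, 16, 18, 19]
y = [1, 2, 4, 6, 7, 8, 10, 10, 13, 14]
rows = [[float(x[t - 1]), float(x[t - 2]), float(y[t - 1]), float(y[t - 2])]
        for t in range(2, len(y))]
w = fit_least_squares(rows, [float(v) for v in y[2:]])
print(closed_loop_forecast(w, x, y[:2], 4))  # four closed-loop steps
```

A real equivalent of the MATLAB script would swap the linear fit for a network with a `tansig`-style hidden layer (as `hiddenLayerSize = 10` does), but the delay handling and the open-loop versus closed-loop distinction are exactly as shown.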