How to access the output values of hidden-layer neurons in MATLAB?

Asked: 2017-01-06 03:01:17

Tags: matlab neural-network

MATLAB's Neural Network Toolbox does not allow a single layer to have more than one transfer function. So I want to create two hidden layers: one with a hyperbolic tangent (tansig) transfer function and the other with RBF (radbas) neurons.

But I need to pass the output of the first hidden layer directly to the output layer. I want to change the output of the second hidden layer so that it equals the output of the first hidden layer. To do that, I need access to the hidden layers' output values. I opened the traingd function, and I think what I want is somewhere in this code:

for epoch=0:param.epochs

    % Stopping Criteria
    if isMainWorker
        current_time = etime(clock,startTime);
        [userStop,userCancel] = nntraintool('check');
        if userStop, tr.stop = 'User stop.'; calcNet = best.net;
        elseif userCancel, tr.stop = 'User cancel.'; calcNet = original_net;
        elseif (perf <= param.goal), tr.stop = 'Performance goal met.'; calcNet = best.net;
        elseif (epoch == param.epochs), tr.stop = 'Maximum epoch reached.'; calcNet = best.net;
        elseif (current_time >= param.time), tr.stop = 'Maximum time elapsed.'; calcNet = best.net;
        elseif (gradient <= param.min_grad), tr.stop = 'Minimum gradient reached.'; calcNet = best.net;
        elseif (val_fail >= param.max_fail), tr.stop = 'Validation stop.'; calcNet = best.net;
        end

        % Training record & feedback
        tr = nntraining.tr_update(tr,[epoch current_time perf vperf tperf gradient val_fail]);
        statusValues = [epoch,current_time,best.perf,gradient,val_fail];
        nn_train_feedback('update',archNet,rawData,calcLib,calcNet,tr,status,statusValues);
        stop = ~isempty(tr.stop);
    end

    % Stop
    if isParallel, stop = labBroadcast(mainWorkerInd,stop); end
    if stop, return, end

    % Gradient Descent
    if isMainWorker
        dWB = param.lr * gWB;
        WB = WB + dWB;
    end

    calcNet = calcLib.setwb(calcNet,WB);
    [perf,vperf,tperf,gWB,gradient] = calcLib.perfsGrad(calcNet);

    % Validation
    if isMainWorker
        [best,tr,val_fail] = nntraining.validation(best,tr,val_fail,calcNet,perf,vperf,epoch);
    end
end

I found where the biases and weights are updated, but I cannot find where the neuron output values are set. Can anyone help me?

1 Answer:

Answer 0: (score: 0)

You can create two custom networks with no hidden layers; then you can view the outputs of the two different transfer functions directly. See the documentation: https://www.mathworks.com/help/nnet/ug/create-and-train-custom-neural-network-architectures.html
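As a minimal sketch of that suggestion (layer sizes, the output transfer function, and all training settings here are placeholders, not anything prescribed by the documentation), a single custom network can also be wired so that two parallel hidden layers, one tansig and one radbas, both feed the output layer:

    net = network;                        % empty custom network
    net.numInputs = 1;
    net.numLayers = 3;                    % two parallel hidden layers + one output layer
    net.inputConnect(1,1) = 1;            % input feeds layer 1 (tansig)
    net.inputConnect(2,1) = 1;            % input also feeds layer 2 (radbas)
    net.layerConnect(3,1) = 1;            % layer 1 -> output layer
    net.layerConnect(3,2) = 1;            % layer 2 -> output layer
    net.outputConnect(3) = 1;             % layer 3 produces the network output
    net.biasConnect = [1; 1; 1];          % every layer gets a bias
    net.layers{1}.transferFcn = 'tansig';
    net.layers{2}.transferFcn = 'radbas';
    net.layers{3}.transferFcn = 'purelin';
    net.layers{1}.size = 10;              % placeholder hidden sizes
    net.layers{2}.size = 10;

After configuring and training such a network, the layerConnect matrix determines which layer outputs reach the output layer, so the first hidden layer's output goes to the output layer directly rather than through a second hidden layer.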

You can also compute them manually using the weights and biases:

    input_weights = net.IW{1,1};     % weights from the input to layer 1, size [S1 x R]
    bias = net.b{1};                 % bias of the first layer, size [S1 x 1]
    % input is [R x Q] (one column per sample); the matrix product gives [S1 x Q],
    % so the bias must be replicated across all Q samples to match
    hidden_layer1 = tanh(input_weights*input + repmat(bias, 1, size(input,2)));

Side note: I found this tutorial helpful for understanding how a basic neural network works - https://iamtrask.github.io/2015/07/12/basic-python-network/