Simulating the default patternnet with feedforwardnet in MATLAB?

Date: 2015-04-08 19:00:05

Tags: matlab, neural-network

I get very different training performance with the following network

net = patternnet(hiddenLayerSize);

and with the following

net = feedforwardnet(hiddenLayerSize, 'trainscg');
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.performFcn = 'crossentropy';

on the same data.

I was thinking the networks should be identical.

What am I forgetting?

UPDATE

The code below demonstrates that the network behavior depends solely on the network creation function.

Each type of network was run twice, which rules out random-generator issues or anything else. The data are the same.

hiddenLayerSize = 10;

% pass 1, with patternnet
net = patternnet(hiddenLayerSize);

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,x,t);

y = net(x);
performance = perform(net,t,y);

fprintf('pass 1, patternnet, performance: %f\n', performance);
fprintf('num_epochs: %d, stop: %s\n', tr.num_epochs, tr.stop);

% pass 2, with feedforwardnet
net = feedforwardnet(hiddenLayerSize, 'trainscg');
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.performFcn = 'crossentropy';

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,x,t);

y = net(x);
performance = perform(net,t,y);

fprintf('pass 2, feedforwardnet, performance: %f\n', performance);
fprintf('num_epochs: %d, stop: %s\n', tr.num_epochs, tr.stop);

% pass 3, with patternnet
net = patternnet(hiddenLayerSize);

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,x,t);

y = net(x);
performance = perform(net,t,y);

fprintf('pass 3, patternnet, performance: %f\n', performance);
fprintf('num_epochs: %d, stop: %s\n', tr.num_epochs, tr.stop);

% pass 4, with feedforwardnet
net = feedforwardnet(hiddenLayerSize, 'trainscg');
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.performFcn = 'crossentropy';

net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,x,t);

y = net(x);
performance = perform(net,t,y);

fprintf('pass 4, feedforwardnet, performance: %f\n', performance);
fprintf('num_epochs: %d, stop: %s\n', tr.num_epochs, tr.stop);

The output is as follows:

pass 1, patternnet, performance: 0.116445
num_epochs: 353, stop: Validation stop.
pass 2, feedforwardnet, performance: 0.693561
num_epochs: 260, stop: Validation stop.
pass 3, patternnet, performance: 0.116445
num_epochs: 353, stop: Validation stop.
pass 4, feedforwardnet, performance: 0.693561
num_epochs: 260, stop: Validation stop.

2 Answers

Answer 0 (score: 1)

It looks like these two are not exactly the same:

>> net = patternnet(hiddenLayerSize);
>> net2 = feedforwardnet(hiddenLayerSize,'trainscg');
>> net.outputs{2}.processParams{2}

ans =

    ymin: 0
    ymax: 1

>> net2.outputs{2}.processParams{2}

ans =

    ymin: -1
    ymax: 1

net.outputs{2}.processFcns{2} is mapminmax, so I think one of the networks is rescaling its output to better match the range of the actual output data.
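Based on that observation, one way to test the hypothesis is to copy patternnet's output-mapping range onto the feedforwardnet before training. This is a sketch, not something verified against every toolbox version; it assumes that mapminmax is processFcns{2} on the output of the feedforwardnet as well:

```matlab
% Sketch: configure feedforwardnet to mimic patternnet's defaults,
% then align the output mapminmax range with patternnet's [0, 1].
net = feedforwardnet(hiddenLayerSize, 'trainscg');
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'softmax';
net.performFcn = 'crossentropy';

% Assumption: mapminmax is the second output processing function here,
% as it is by default. patternnet uses ymin = 0, ymax = 1.
net.outputs{2}.processParams{2}.ymin = 0;
net.outputs{2}.processParams{2}.ymax = 1;
```

If the mapminmax range really is the only remaining difference, training this network should track the patternnet results much more closely.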

For future reference, you can do nasty dirty things like comparing the internal data structures by casting to struct. So I did something like
n = struct(net); n2 = struct(net2);
for fn=fieldnames(n)';
  if(~isequaln(n.(fn{1}),n2.(fn{1})))
    fprintf('fields %s differ\n', fn{1});
  end
end

to help pinpoint the differences.

Answer 1 (score: 0)

In general, a network does not behave exactly the same on every training run. That can depend on three reasons (I mean, three that I know of):

  1. The random initialization of the neural network.
  2. Data normalization.
  3. Data scaling.

Regarding (1): the network is initially configured with random weights of different signs within some small range. For example, a neuron with 6 inputs might get initial weights like 0.1, -0.3, 0.16, -0.23, 0.015, -0.0005. This can lead to a different training result. Regarding (2): if your normalization is done poorly, the learning algorithm converges to a local minimum and cannot jump out of it. The same applies to case (3) if your data needs scaling and you have not done it successfully.
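To isolate point (1), the effect of random initial weights can be removed by fixing the random seed before creating and initializing the network. A minimal sketch; the seed value 0 is arbitrary, and x and t stand for your own inputs and targets:

```matlab
% Fix the random number generator so that initialization produces the
% same starting weights on every run; any constant seed works.
rng(0);
net = patternnet(10);
net = configure(net, x, t);  % size the network to the data
net = init(net);             % deterministic initial weights given the seed
```

With the seed fixed, any remaining run-to-run difference would have to come from the data processing (points 2 and 3) rather than from the initialization.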