Python neurolab's newff and train functions give inconsistent results for the same code and input

Date: 2016-05-01 20:38:01

Tags: python machine-learning gradient-descent

Even though the input and the code are identical, I get two different results across multiple runs, and only two distinct outputs ever appear. I don't know which part of the code is random, and I'm having trouble locating the source of the discrepancy. Is this a known bug in neurolab?

I've attached the full code below. Please run it 9-10 times to see the two different outputs. I've also attached the output of five runs of the same code; the reported error takes two different values across those five runs. Please help.

Code: --------

import neurolab as nl
import numpy as np

# Create training samples
N = 200

## DATA: inputs x1 spanning [-N/2, N/2]
x1 = [0] * (N + 1)
for ii in range(-N // 2, N // 2 + 1):      # integer division keeps this valid in Python 3
    x1[ii + N // 2] = ii

x1_arr = np.array(x1)

# Linear function of the input, thresholded to a binary target
y1 = -2 + 3 * x1_arr
y = [0] * len(y1)
for ii in range(len(y1)):
    if y1[ii] > 15:
        y[ii] = 1

l = len(y)
x0 = [1] * l                               # bias column (not used below)
x0_arr = np.array(x0)
x_arr = np.concatenate(([x0_arr], [x1_arr]), axis=0)
x = x1_arr
y_arr = np.array(y)
size = l

inp = x.reshape(size, 1)
tar = y_arr.reshape(size, 1)

# Create a 2-layer feed-forward network with randomly initialized weights
net = nl.net.newff([[-N // 2, N // 2]], [1, 1])
net.trainf = nl.train.train_gd

# Train the network with gradient descent
error = net.train(inp, tar, epochs=100, show=100, goal=0.02, lr=0.001)

# Simulate the network on the training inputs
out = net.sim(inp)

Output: ---------

>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.49617137968;
The maximum number of train epochs is reached
>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.49617137968;
The maximum number of train epochs is reached
>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.66289633422;
The maximum number of train epochs is reached
>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.49617137968;
The maximum number of train epochs is reached
>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.66289633422;
The maximum number of train epochs is reached

Thanks, cheers!

1 Answer:

Answer 0 (score: 0):

Neural network training is not deterministic. It starts from a random initialization of the weights and then runs an (essentially greedy) optimization procedure. Unless you fix all of the random number generators used during training, you cannot expect identical results from run to run.
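A minimal way to check this, assuming neurolab draws its initial weights from NumPy's global random number generator, is to seed that generator before the network is created. This is a sketch (the seed value 42 and the simplified data setup are my own choices, not from the question), not a guaranteed fix:

import numpy as np
import neurolab as nl

# Seed NumPy's global RNG before the network is built, so the random
# weight initialization (the only stochastic step here) is repeatable.
np.random.seed(42)

N = 200
x = np.arange(-N // 2, N // 2 + 1)          # same inputs as in the question
y = (-2 + 3 * x > 15).astype(float)         # same thresholded targets

inp = x.reshape(-1, 1)
tar = y.reshape(-1, 1)

net = nl.net.newff([[-N // 2, N // 2]], [1, 1])
net.trainf = nl.train.train_gd

error = net.train(inp, tar, epochs=100, show=100, goal=0.02, lr=0.001)
# With the seed fixed, repeated runs should now report the same final error.

If the final errors still differ between runs after seeding, the remaining randomness would have to come from somewhere other than NumPy's global generator.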