Unwanted [nan] output in a Python neural network

Asked: 2019-09-16 17:52:58

Tags: python numpy machine-learning neural-network sigmoid

Newbie here. I just switched from JS to Python to build a neural network, but I'm getting [nan] outputs from it.

The strange thing is my sigmoid function. It doesn't seem to hit any overflow, but the derivative makes a mess.

import numpy as np

def sigmoid(x):
  return x*(1-x)
  return 1/(1 + np.exp(-x))

#The function- 2

def Sigmoid_Derivative(x):
    return x * (1-x)

Training_inputs = np.array([[0,0,1], 
                            [1,1,1], 
                            [1,0,1], 
                            [0,1,1]])

Training_outputs = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)

synaptic_weights = np.random.random((3, 1)) - 1

print ("Random starting synaptic weight:")
print (synaptic_weights)

for iteration in range(20000):
  Input_Layer = Training_inputs

  Outputs = sigmoid(np.dot(Input_Layer, synaptic_weights)) 

  erorr = Training_outputs - Outputs

  adjustments = erorr * Sigmoid_Derivative(Outputs)

  synaptic_weights += np.dot(Input_Layer.T, adjustments)

# The print declaration----------  
print ("Synaptic weights after trainig:")
print (synaptic_weights)

print ("Outputs after training: ")
print (Outputs)

Here is the error message. I don't know why it overflows, since the weights seem small. BTW, since I'm a beginner, please give the solution in plain Python:

Random starting synaptic weight:
[[-0.582978  ]
 [-0.27967551]
 [-0.99988563]]
/home/neel/Documents/VS-Code_Projects/Machine_Lrn(PY)/tempCodeRunnerFile.py:10: RuntimeWarning: overflow encountered in multiply
  return x * (1-x)
Synaptic weights after trainig:
[[nan]
 [nan]
 [nan]]
Outputs after training: 
[[nan]
 [nan]
 [nan]
 [nan]]

1 Answer:

Answer 0 (score: 2)

There are at least two problems with your code.

The first is the inexplicable use of two return statements in your sigmoid function (the second one is unreachable); it should simply be:

def sigmoid(x):
  return 1/(1 + np.exp(-x))

This gives the correct result for x=0 (0.5) and approaches 1 for large x:

sigmoid(0)
# 0.5
sigmoid(20)
# 0.99999999793884631
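
As an aside (not part of the original answer), even the correct formula can still trigger an overflow warning for large negative inputs, because np.exp(-x) overflows before the division pulls the result back toward 0. A minimal numerically stable sketch, assuming you want to avoid those warnings (scipy.special.expit does the equivalent for you), evaluates the exponential only where its argument is non-positive:

import numpy as np

def stable_sigmoid(x):
    x = np.asarray(x, dtype=float)
    # Clamp each branch's argument so np.exp never sees a large positive number;
    # np.where then picks the mathematically equivalent form for each sign.
    pos = 1.0 / (1.0 + np.exp(-np.maximum(x, 0)))
    neg = np.exp(np.minimum(x, 0)) / (1.0 + np.exp(np.minimum(x, 0)))
    return np.where(x >= 0, pos, neg)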

Your (erroneous) sigmoid:

def your_sigmoid(x):
  return x*(1-x)
  return 1/(1 + np.exp(-x))

easily leads to overflow:

your_sigmoid(20)
# -380
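
The reason this blows up during training (an illustration, not code from the question): x*(1-x) is unbounded, so the weight updates keep growing until a multiplication overflows to inf, and the very next subtraction turns inf into nan, which then propagates through every later iteration. A tiny reproduction of the warning with a hypothetical huge value:

import numpy as np

w = np.array([1e200])
y = w * (1 - w)   # RuntimeWarning: overflow encountered in multiply
print(y)          # [-inf]
print(y - y)      # [nan] -- inf minus inf is undefined, and nan poisons everything after it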

The other problem is that your derivative is wrong; it should be:

def Sigmoid_Derivative(x):
    return sigmoid(x) * (1-sigmoid(x))

See the Derivative of sigmoid function thread at Math.SE, as well as the discussion here.
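
For completeness, a minimal sketch of the whole script with both fixes applied (this is an assumption about how to wire them together, not code from the question; note the derivative is evaluated on the pre-activation dot product z rather than on the already-squashed Outputs):

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

training_inputs = np.array([[0, 0, 1],
                            [1, 1, 1],
                            [1, 0, 1],
                            [0, 1, 1]])
training_outputs = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
synaptic_weights = np.random.random((3, 1)) - 1

for _ in range(20000):
    z = np.dot(training_inputs, synaptic_weights)    # pre-activation
    outputs = sigmoid(z)
    error = training_outputs - outputs
    adjustments = error * sigmoid_derivative(z)      # chain rule uses z, not outputs
    synaptic_weights += np.dot(training_inputs.T, adjustments)

print("Outputs after training:")
print(outputs)   # should be close to [0, 1, 1, 0] rather than nan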