Simple neural network in Python that outputs the sum of its input variables?

Asked: 2017-05-04 18:02:45

Tags: python neural-network

Can someone make a simple neural network that gives the sum of its input variables as the output? For example, if the input variables are X1, X2, and X3, the output should be Y = X1 + X2 + X3.

A simple Python program using matrix multiplication would help.
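In matrix terms, what I mean is presumably something like the following (just an illustration of the behaviour I want, with a hand-picked weight matrix W, not a trained network):

import numpy as np

# illustration only: the sum is a single matrix product Y = X.dot(W)
X = np.array([[1.0, 2.0, 3.0],
              [0.5, 0.5, 0.5]])   # each row is one sample (X1, X2, X3)
W = np.ones((3, 1))               # all-ones weights compute the row sums
print(X.dot(W))                   # [[6.0], [1.5]]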

Thanks.

Here is the code I tried; it is just a modified version of the "iamtrask" code, but it does not give me the right answer, and it tends to saturate at [1.] when I increase the number of test cases (set_size):

import numpy as np

outputs=[]

# initializing hyperparameters
set_size=20
iterations=10000
input_variables=3

# sigmoid function
def nonlin(x, deriv=False):
    if deriv:
        return 1 * (1 - x)
    return 1 / (1 + np.exp(-x))

# inverse of sigmoid (logit function)
def logit(x):
    return np.log(x / (1 - x))

# initializing inputs with random values in (-1, 1)
inputs = 2 * np.random.random((set_size, input_variables)) - 1
X = np.array(inputs)

# getting the desired output: the sigmoid of the sum of each row's inputs
for h in range(set_size):
    outputs.append(nonlin(X[h][0] + X[h][1] + X[h][2]))

# output dataset
y = np.array([outputs]).T # converting list into array and taking transpose

# seed random numbers to make calculation
# deterministic (just a good practice)
np.random.seed(1)

# initialize weights randomly with mean 0
syn0 = 2 * np.random.random((input_variables, set_size)) - 1
syn1 = 2 * np.random.random((set_size, 1)) - 1
print(y)

for iter in range(iterations):
    # forward propagation
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    # how much did we miss?
    #l1_error = y - l1
    l2_error = y - l2
    #print(l1_error)

    l2_delta = l2_error * nonlin(l2, deriv=True)

    l1_error = l2_delta.dot(syn1.T)

    l1_delta = l1_error * nonlin(l1, deriv=True)
    # multiply how much we missed by the
    # slope of the sigmoid at the values in l1
    #l1_delta = l1_error * nonlin(l1, True)

    # update weights
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
print("Output After Training:")
#out=logit(l2)
print(l2)

#testing the trained network with new values
X1=input("Enter the new inputs:")
mynums = [float(i) for i in X1.split()]
#mynums = map(float, X1.split())
print(mynums)
l0 = mynums
l1 = nonlin(np.dot(l0, syn0))
l2 = nonlin(np.dot(l1, syn1))
print(l2)

2 Answers:

Answer 0 (score: 0)

"A simple neural network implementation that describes the inner workings of backpropagation." In 11 lines of code!

http://iamtrask.github.io/2015/07/12/basic-python-network/
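For reference, the two-layer toy network from that post looks roughly like this (my paraphrase of the linked code, not an exact copy):

import numpy as np

np.random.seed(1)

# toy dataset from the linked post: 4 samples, 3 input features
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

# two weight layers, initialized in (-1, 1)
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

for _ in range(60000):
    # forward pass through two sigmoid layers
    l1 = 1 / (1 + np.exp(-X.dot(syn0)))
    l2 = 1 / (1 + np.exp(-l1.dot(syn1)))
    # backpropagation: error times the sigmoid slope at each layer
    l2_delta = (y - l2) * (l2 * (1 - l2))
    l1_delta = l2_delta.dot(syn1.T) * (l1 * (1 - l1))
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)

print(l2)  # should end up close to [[0], [1], [1], [0]]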

Answer 1 (score: 0)

I modified Trask's code this way:

import numpy as np

# sigmoid function
def nonlin(x,deriv=False):
    if deriv:
        return x*(1-x)
    return 1/(1+np.exp(-x))

# input dataset: 100 triples of X1, X2 & X3 drawn from {1, 2}
# (np.random.randint's upper bound is exclusive)
X = np.random.randint(1, 3, size=(100, 3))
# rescaling/normalizing by (max_element * no_of_inputs) = 3 * 3
X = X / (3 * 3)
# output dataset: the sum of each (already rescaled) row
y = np.sum(X, axis=1, keepdims=True)
# normalizing the targets the same way
y = y / (3 * 3)

#Initializing weights
np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1

#Training
for iter in range(30000):

    # forward propagation
    l0 = X
    l1 = nonlin(np.dot(l0,syn0))
    l2 = nonlin(np.dot(l1,syn1))

    # how much did we miss?
    l2_error = y-l2

    #if (iter% 100) == 0:
    #    print ("Error:" + str(np.mean(np.abs(l2_error))))

    l2_delta = l2_error*nonlin(l2, deriv=True)

    l1_error = l2_delta.dot(syn1.T)

    # multiply how much we missed by the 
    # slope of the sigmoid at the values in l1
    l1_delta = l1_error * nonlin(l1,True)

    # update weights
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)

# predict
l0 = [[3,1,0]]    # should give 4 as the answer (note: unlike X, this test input is not rescaled)
l1 = nonlin(np.dot(l0,syn0))
l2 = nonlin(np.dot(l1,syn1))
print(l2*(3*3))
# multiplying by (3*3) inverts the normalization of y

The result is 4.157 (fairly accurate). I think the problem was the normalization.
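Since the target here is a plain sum, it may be worth noting (my own aside, not part of the original answer) that a single linear neuron with no sigmoid anywhere can learn the exact weights [1, 1, 1] and extrapolate to unseen sums, sidestepping the normalization problem entirely. A minimal sketch under that assumption:

import numpy as np

np.random.seed(1)

# training data: random inputs, target is their exact sum
X = np.random.rand(100, 3)
y = X.sum(axis=1, keepdims=True)

W = 0.1 * np.random.randn(3, 1)   # single linear layer, no activation

lr = 0.1
for _ in range(2000):
    err = y - X.dot(W)                 # prediction error
    W += lr * X.T.dot(err) / len(X)    # gradient step on the mean squared error

print(W.ravel())               # converges close to [1. 1. 1.]
print(np.dot([[3, 1, 0]], W))  # ~4, even though these values lie outside the training range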