SyntaxError in a simple neural network

Time: 2018-01-27 00:03:58

Tags: python neural-network

Line 57 of the code, `layer2_delta = layer2_error * nonlin(layer2,deriv = True)`, gives SyntaxError: invalid syntax.

I have checked the code several times, but for the life of me I cannot find why I am getting a syntax error. It must be in how I call my nonlin function, but I can't see any problem there either. I'm lost.

#Simple neural network example

import numpy as np


# If deriv flag is False then this function returns the sigmoid function of x.
# If deriv flag is passed in as True, then it calculates the derivative of the function.
def nonlin(x, deriv = False):
    if deriv == True:
        return(x*(1-x))
    return 1/(1 + np.exp(-x))

# Input data as an array
# The last column in the array is always "1" for accommodating the bias term
# This simple network only has two real input nodes plus one input bias node
X = np.array([[1,1,1],
              [3,3,3],
              [2,2,2],
              [2,2,2]])

#output data
y = np.array([[1],
              [1],
              [0],
              [1]])

# The seed for the random generator is set so that it will return the same random
# numbers each time re-running the script, which is sometimes useful for debugging.
np.random.seed(1)

# Now we initialize the weights to random values. syn0 are the weights between the input
# layer and the hidden layer. It is a 3x4 matrix because there are two input weights
# plus a bias term (=3) and four nodes in the hidden layer (=4). syn1 are the weights
# between the hidden layer and the output layer. It is a 4x1 matrix because there are
# 4 nodes in the hidden layer and one output. Note that there is no bias term feeding
# the output layer in this example. The weights are initially generated randomly because
# optimization tends not to work well when all the weights start at the same value.

# synapses
syn0 = 2 * np.random.random((3,4)) - 1    # 3x4 matrix of weights ((2 inputs + 1 bias) x 4 nodes in the hidden layer)
syn1 = 2 * np.random.random((4,1)) - 1    # 4x1 matrix of weights (4 nodes x 1 output)

# Now we start training the network
# Start with forward propagation
for j in range(60000):
    layer0 = X
    layer1 = nonlin(np.dot(layer0, syn0))
    layer2 = nonlin(np.dot(layer1, syn1))

    # Back propagation of errors
    layer2_error = y - layer2

    if (j % 10000) == 0:   # print error value after every 10000 iterations
        print("Error:  " + str(np.mean(np.abs(layer2_error)))

    layer2_delta = layer2_error * nonlin(layer2, deriv=True)
    layer1_error = layer2_delta.dot(syn1.T)
    layer1_delta = layer1_error * nonlin(layer1,deriv=True)

    #update weights (no learning rate term)
    syn1 += layer1.T.dot(layer2_delta)
    syn0 += layer0.T.dot(layer1_delta)

print("Output after training")
print(layer2)

1 Answer:

Answer 0 (score: 0)

If I counted correctly, a closing parenthesis is missing on the previous line: the `print(...)` call never closes, so Python only reports the SyntaxError when it reaches the next statement, the `layer2_delta` line.
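The fix can be sketched as follows (the `layer2_error` values below are made up purely so the snippet runs on its own; in the original script they come from `y - layer2`):

```python
import numpy as np

# Made-up stand-in for y - layer2, just to make this snippet self-contained
layer2_error = np.array([[0.1], [-0.2], [0.05], [0.0]])

# Broken version from the question (three ")" for four "("), shown commented out:
# print("Error:  " + str(np.mean(np.abs(layer2_error)))

# Fixed version: four opening and four closing parentheses balance
print("Error:  " + str(np.mean(np.abs(layer2_error))))
```

A quick way to catch this class of error is to count parentheses on the line the interpreter points at and on the line above it, since an unclosed bracket is always reported one statement too late.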