I created a neural network in Python; the only library supporting it is NumPy. I train the network with backpropagation. Everything worked until I implemented the network in my project, where it is used to predict the player's move in Rock-Paper-Scissors: after gathering data from two games it returns the error that the matrices are not aligned in the backpropagation part. The program itself can be found below.
Note: some variables are defined outside of this snippet, because I have only included the neural network part.
Original definitions of the data, etc.:
#input data: using MOCK RPS DATA, 0:ROCK, 0.5:PAPER, 1:SCISSORS - sigmoid only outputs values between 0 and 1
input_data = np.array([[0, 0, 0]])
current_turn = []
#also for training; .T transposes the array (flips its dimensions)
output_data = np.array([[0]]).T
#set number of hidden nodes to be used in evolution
hiddenNode_quantity = 1
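For context, here is how I expect the shapes to line up on the first game (a quick standalone check I wrote, not part of the program; the weight shapes come from hiddenNode_quantity above):

import numpy as np

#first game: 1 sample, 3 inputs, 1 hidden node, 1 output
input_data = np.array([[0, 0, 0]])                 #shape (1, 3)
output_data = np.array([[0]]).T                    #shape (1, 1)
firstLayer_weights = np.zeros((3, 1))              #shape (3, 1) = (inputs, hidden nodes)
secondLayer_weights = np.zeros((1, 1))             #shape (1, 1) = (hidden nodes, outputs)
layer1 = np.dot(input_data, firstLayer_weights)    #(1, 3) dot (3, 1) -> (1, 1)
layer2 = np.dot(layer1, secondLayer_weights)       #(1, 1) dot (1, 1) -> (1, 1)
print layer1.shape, layer2.shape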
The neural network portion:
#sigmoid function converts numbers to percentages (between 0 and 1)
def nonlin(x, deriv = False):
    if (deriv == True):
        #the sigmoid derivative is just output * (1 - output)
        return x*(1-x)
    return 1/(1+np.exp(-x)) #the sigmoid function itself
#seed the random number generator so the weight initialization is reproducible
np.random.seed(1)
#create random weights to be trained in loop
firstLayer_weights = 2 * np.random.random((3, hiddenNode_quantity)) - 1
secondLayer_weights = 2 * np.random.random((hiddenNode_quantity, 1)) - 1
for value in xrange(60000): #loops through training
    #pass input through weights to output: three layers
    layer0 = input_data
    #layer1 takes the dot product of the input and weight matrices, then maps it to the sigmoid function
    layer1 = nonlin(np.dot(layer0, firstLayer_weights))
    #layer2 takes the dot product of the layer1 result and weight matrices, then maps it to the sigmoid function
    layer2 = nonlin(np.dot(layer1, secondLayer_weights))
    #on the last cycle, when the weights are as adjusted as possible, put in the current situation
    if value == 59999:
        #add the new situation (what is currently happening) to make the current prediction with the adjusted weights
        current_turn = np.array([[input_data[len(input_data) - 1][1], input_data[len(input_data) - 1][2], output_data[len(output_data) - 1][0]]])
        input_data = np.append(input_data, current_turn, axis = 0)
        #increase the size of input_data
        hiddenNode_quantity += 1
        #apply the weights to the new input data
        layer0 = input_data
        print layer0
        layer1 = nonlin(np.dot(layer0, firstLayer_weights))
        print layer1
        layer2 = nonlin(np.dot(layer1, secondLayer_weights))
        print layer2
        print output_data
        #based on the computer's best prediction, decide the winning move
        computer_prediction = layer2[len(layer2) - 1][0]
        if computer_prediction < (1.0/3): #if the computer predicts the player will play rock, play paper
            computer_choice = "p"
        elif (computer_prediction > (1.0/3)) and (computer_prediction < (2.0/3)): #predict: paper, play: scissors
            computer_choice = "s"
        else:
            computer_choice = "r"
    else:
        #check the computer's predicted result against the actual data
        layer2_error = output_data - layer2
        #if value is a multiple of 30,000, so two times (out of 60,000),
        #print how far off the predicted value was from the data
        if value % 30000 == 0:
            print "Error:" + str(np.mean(np.abs(layer2_error))) #average error
        #find out how much to re-adjust the weights based on how far off and how confident the estimate is
        layer2_change = layer2_error * nonlin(layer2, deriv = True)
        #find out how layer1 led to the error in layer2, to attack the root of the problem
        layer1_error = layer2_change.dot(secondLayer_weights.T)
        #^^sends the error on layer2 backwards across the weights to find the original error: BACKPROPAGATION
        #same idea as the layer2 change: change based on accuracy and confidence
        layer1_change = layer1_error * nonlin(layer1, deriv=True)
        #modify the weights based on the error between the two layers; the multiply by 5 is to accelerate the search for the global minimum
        secondLayer_weights += (5 * (layer1.T.dot(layer2_change))) #and to get out of local minima
        firstLayer_weights += (5 * (layer0.T.dot(layer1_change)))
#COMPUTER NEURAL NETWORK END
#human input selection
human_choice = raw_input("Human Choice: ")
while human_choice not in move_possibilities: #player picked an option other than "r", "p", "s"
    human_choice = raw_input("Please select type 'r', 'p', or 's':")
#add human choice as new information to output data
if human_choice == "r":
    output_data = np.append(output_data, ([0]))
elif human_choice == "p":
    output_data = np.append(output_data, ([0.5]))
elif human_choice == "s":
    output_data = np.append(output_data, ([1]))
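In case it is related, here is a small standalone check (separate from the program above) of how the two data arrays grow after one game; I am not certain this is the cause, but the appended output_data comes back one-dimensional:

import numpy as np

input_data = np.array([[0, 0, 0]])
output_data = np.array([[0]]).T
#new turn appended the way the program does it (with axis = 0)
input_data = np.append(input_data, np.array([[0, 0, 0.5]]), axis = 0)
#human choice appended the way the program does it (no axis argument)
output_data = np.append(output_data, ([0.5]))
print input_data.shape   #(2, 3) - still two-dimensional
print output_data.shape  #(2,) - flattened to one dimension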
The "for value in xrange" part handles the training of the neural network and, on the last loop of training, inserts the values that need to be predicted.
However, the backpropagation part in the middle of the code is what raises the error. It only involves two simple matrices that are used throughout the code, yet the second time the code runs, for the next game to be played, it raises the "matrices are not aligned" error.
This error has brought my progress to a halt. Is there any possible fix for this problem?
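For reference, this is the kind of minimal situation that raises the same error, i.e. the inner dimensions of the two matrices in the dot product no longer match (a toy example with made-up shapes, not taken from my program):

import numpy as np

a = np.zeros((2, 3))  #e.g. input data after a second row has been appended
b = np.zeros((4, 1))  #e.g. a weight matrix whose size no longer matches
np.dot(a, b)          #raises ValueError because the inner dimensions (3 and 4) differ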