I'm trying to build an XOR neural network with one hidden layer in Python, but I'm running into a dimensions problem, and I can't see why the dimensions are wrong in the first place, since the math looks correct to me.
The dimension issue starts in the backprop section and is marked with a comment. The exact error is:
File "nn.py", line 52, in <module>
d_a1_d_W1 = inp * deriv_sigmoid(z1)
File "/usr/local/lib/python3.7/site-packages/numpy/matrixlib/defmatrix.py", line 220, in __mul__
return N.dot(self, asmatrix(other))
ValueError: shapes (1,2) and (3,1) not aligned: 2 (dim 1) != 3 (dim 0)
Also, why does the deriv_sigmoid function here only work when I cast to a numpy array?
Code:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def deriv_sigmoid(x):
    fx = np.array(sigmoid(x))  # gives dimensions issues unless I cast to array
    return fx * (1 - fx)

hiddenNeurons = 3
outputNeurons = 1
inputNeurons = 2

X = np.array([[0, 1]])
elem = np.matrix(X[0])
elem_row, elem_col = elem.shape
y = np.matrix([1])

W1 = np.random.rand(hiddenNeurons, elem_col)
b1 = np.random.rand(hiddenNeurons, 1)
W2 = np.random.rand(outputNeurons, hiddenNeurons)
b2 = np.random.rand(outputNeurons, 1)
lr = .01

for inp, ytrue in zip(X, y):
    inp = np.matrix(inp)

    # feedforward
    z1 = W1 * inp.T + b1  # weight matrix 1 * inputs + bias 1
    a1 = sigmoid(z1)      # activation of hidden layer
    z2 = W2 * a1 + b2     # weight matrix 2 * activated hidden + bias 2
    a2 = sigmoid(z2)      # activated output
    ypred = a2            # call it ypred (y prediction)

    # backprop
    d_L_d_ypred = -2 * (ytrue - ypred)      # derivative of mean squared error loss
    d_ypred_d_W2 = a1 * deriv_sigmoid(z2)   # derivative of y prediction with respect to weight matrix 2
    d_ypred_d_b2 = deriv_sigmoid(z2)        # derivative of y prediction with respect to bias 2
    d_ypred_d_a1 = W2 * deriv_sigmoid(z2)   # derivative of y prediction with respect to hidden activation
    d_a1_d_W1 = inp * deriv_sigmoid(z1)     # dimensions issue starts here ––––––––––––––––––––––––––––––––
    d_a1_d_b1 = deriv_sigmoid(b1)

    W1 -= lr * d_L_d_ypred * d_ypred_d_a1 * d_a1_d_W1
    b1 -= lr * d_L_d_ypred * d_ypred_d_a1 * d_a1_d_b1
    W2 -= lr * d_L_d_ypred * d_ypred_d_W2
    b2 -= lr * d_L_d_ypred * d_ypred_d_b2
Answer 0 (score: 1)
I've never worked with neural networks, so I don't fully understand what you're trying to do. My guess is that the confusion comes from how a * b behaves when a and b are matrices rather than numpy arrays: on numpy arrays, * multiplies element-wise, while on np.matrix objects it performs matrix multiplication.
a = np.array([[1, 2], [3, 4]])
b = a - 1
print(b)
# array([[0, 1],
#        [2, 3]])

a * b   # element-wise multiplication
# array([[ 0,  2],      [[1*0, 2*1],
#        [ 6, 12]])      [3*2, 4*3]]

am = np.matrix(a)
bm = np.matrix(b)
am * bm   # matrix (dot) multiplication
# matrix([[ 4,  7],     [[1*0 + 2*2, 1*1 + 2*3],
#         [ 8, 15]])     [3*0 + 4*2, 3*1 + 4*3]]
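As a side note (not in the original answer), if you do want element-wise multiplication while keeping np.matrix objects, `np.multiply` always multiplies element-wise regardless of type; a small sketch:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = a - 1

am = np.matrix(a)
bm = np.matrix(b)

# On np.matrix, * means matrix multiplication, but np.multiply
# is element-wise for arrays and matrices alike.
elementwise = np.multiply(am, bm)
print(elementwise)
```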
In the deriv_sigmoid function (without the np.array cast), if x is a matrix, then fx is a matrix with the same shape, (3,1). When fx is a (3,1) matrix, fx * (1 - fx) raises an exception, because two (3,1) matrices cannot be matrix-multiplied.
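To make that concrete, here is a small sketch (the (3,1) shape and the sigmoid definition are taken from the question's code; the sample values are made up):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

z = np.matrix([[0.1], [0.2], [0.3]])  # a (3,1) matrix, like z1

fx = sigmoid(z)          # still a (3,1) matrix
failed = False
try:
    fx * (1 - fx)        # matrix *: tries (3,1) @ (3,1) -> ValueError
except ValueError as e:
    failed = True
    print("matrix multiply failed:", e)

fx_arr = np.asarray(sigmoid(z))      # cast to array, as the question does
result = fx_arr * (1 - fx_arr)       # element-wise, stays (3,1)
print(result.shape)
```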
The same problem applies to the "# backprop" part of your code.
d_ypred_d_a1 = W2 * deriv_sigmoid(z2)  # derivative of y prediction with respect to hidden activation
# W2 * deriv_sigmoid(z2) fails because the shapes are incompatible with matrix multiplication.
# deriv_sigmoid(z2) * W2 would run, but I guess it would return incorrect values (and shape).

d_a1_d_W1 = inp * deriv_sigmoid(z1)
# This fails for the same reason: the shapes of inp and deriv_sigmoid(z1) are incompatible.
Unless you specifically need matrix multiplication, I think using np.arrays will make the programming much easier.
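For illustration, here is a minimal sketch of the same single-sample step rewritten with np.array throughout, using `@` for matrix products and `*` for element-wise products. Layer sizes and the loss derivative are taken from the question; note the gradient expressions below also differ from the question's, since its chain-rule terms had shape problems of their own, so treat this as one plausible corrected version rather than a drop-in fix:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def deriv_sigmoid(x):
    fx = sigmoid(x)
    return fx * (1 - fx)  # element-wise on arrays, no cast needed

rng = np.random.default_rng(0)
hiddenNeurons, outputNeurons, inputNeurons = 3, 1, 2

W1 = rng.random((hiddenNeurons, inputNeurons))
b1 = rng.random((hiddenNeurons, 1))
W2 = rng.random((outputNeurons, hiddenNeurons))
b2 = rng.random((outputNeurons, 1))
lr = 0.01

x = np.array([[0], [1]])  # one sample as a column vector, shape (2, 1)
ytrue = 1.0

# feedforward
z1 = W1 @ x + b1          # (3, 1)
a1 = sigmoid(z1)
z2 = W2 @ a1 + b2         # (1, 1)
ypred = sigmoid(z2)

# backprop (chain rule with explicit shapes)
d_L_d_ypred = -2 * (ytrue - ypred)             # (1, 1)
delta2 = d_L_d_ypred * deriv_sigmoid(z2)       # (1, 1), error at output
delta1 = (W2.T @ delta2) * deriv_sigmoid(z1)   # (3, 1), error at hidden

W2 -= lr * delta2 @ a1.T  # (1, 3), matches W2
b2 -= lr * delta2
W1 -= lr * delta1 @ x.T   # (3, 2), matches W1
b1 -= lr * delta1

print(W1.shape, W2.shape)
```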