I am building my first neural network. Encouragingly, I am getting around 95-98% accuracy. However, gradient checking shows that the derivatives for the second layer's theta (parameters) are far off from the numerical gradient (maximum difference of 0.9). My input is 8x8 images of digits from sklearn's load_digits.
Below is the Python code snippet.
#forward propagation
a1 = x #(1797, 64)
a1 = np.column_stack((np.ones(m,),a1)) #(1797,65)
a2 = expit(a1.dot(theta1)) #(1797,100)
a2 = np.column_stack((np.ones(m,),a2)) #(1797,101)
a3 = expit(a2.dot(theta2)) #(1797,10)
a3[a3==1] = 0.999999 #clip to avoid log(0) in log(1-a3)
res1 = np.multiply(outputs,np.log(a3)) #(1797,10) .* (1797,10)
res2 = np.multiply(1-outputs,np.log(1-a3))
lamda = 0.5
cost = (-1/m)*(res1+res2).sum(axis=1).sum() + lamda/(2*m)*(np.square(theta1[1:,:]).sum(axis=1).sum() + np.square(theta2[1:,:]).sum(axis=1).sum())
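For reference, the expression above is intended to compute the standard regularized cross-entropy cost (assuming the usual convention of excluding the bias weights from the penalty):

$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{10}\left[ y^{(i)}_k \log a^{(3)(i)}_k + \left(1 - y^{(i)}_k\right)\log\left(1 - a^{(3)(i)}_k\right)\right] + \frac{\lambda}{2m}\sum_{l=1}^{2}\sum_{j,k}\left(\Theta^{(l)}_{j,k}\right)^2$$

where the regularization sum runs over the non-bias rows of theta1 and theta2 only.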
Back propagation code:
#Back propagation
delta3 = a3 - outputs
delta2 = np.multiply(delta3.dot(theta2.T),np.multiply(a2,1-a2)) #(1797,10) * (10,101) = (1797,101)
D1 = (a1.T.dot(delta2[:,1:])) #(65, 1797) * (1797,100) = (65,100)
D1[0,:] = 1/m * D1[0,:]
D1[1:,:] = 1/m * (D1[1:,:] + lamda*theta1[1:,:])
D2 = (a2.T.dot(delta3)) #(101,1797) * (1797, 10) = (101,10)
D2[0,:] = 1/m * D2[0,:]
D2[1:,:] = 1/m * (D2[1:,:] + lamda*theta2[1:,:]) #something wrong in D2 calculation steps...
#print(theta1.shape,theta2.shape,D1.shape,D2.shape)
#this is what is returned by cost function
return cost,np.concatenate((np.asarray(D1).flatten(),np.asarray(D2).flatten())) #last 1010 wrong values
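If the backpropagation is correct, D1 and D2 should approximate the partial derivatives of that cost with respect to theta1 and theta2. With the shape convention used here (each theta is (inputs+1, outputs)), the standard formulation for the non-bias rows is:

$$D^{(l)} = \frac{1}{m}\left(a^{(l)}\right)^{T}\delta^{(l+1)} + \frac{\lambda}{m}\,\Theta^{(l)}$$

with the $\lambda$ term dropped for the first (bias) row, which is what the D1/D2 lines above try to implement.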
As you can see, the gradients are flattened into one vector. When I run the numerical gradient check, I find that the first 6500 values, corresponding to D1, are very close (max difference = 1.0814544260334766e-07). But the last 1010 entries, corresponding to D2, have a max difference of 0.9. Below is the gradient-check code:
print("Checking gradient:")
c,grad = cost(np.concatenate((np.asarray(theta1).flatten(),np.asarray(theta2).flatten())),x_tr,y_tr,theta1.shape,theta2.shape)
grad_approx = checkGrad(x_tr,y_tr,theta1,theta2)
print("Non zero in grad",np.count_nonzero(grad),np.count_nonzero(grad_approx))
tup_grad = np.nonzero(grad)
print("Original\n",grad[tup_grad[0][0:20]])
print("Numerical\n",grad_approx[tup_grad[0][0:20]])
wrong_grads = np.abs(grad-grad_approx)>0.1
print("Max diff:",np.abs(grad-grad_approx).max(),np.count_nonzero(wrong_grads),np.abs(grad-grad_approx)[0:6500].max())
print(np.squeeze(np.asarray(grad[wrong_grads]))[0:20])
print(np.squeeze(np.asarray(grad_approx[wrong_grads]))[0:20])
where_tup = np.where(wrong_grads)
print(where_tup[0][0:5],where_tup[0][-5:])
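Since the flattened vector is just D1 followed by D2, a per-layer comparison makes it easier to see which block disagrees. A minimal sketch, reusing the variables already defined above (theta1 is (65, 100), theta2 is (101, 10)):

n1 = theta1.size                              # first 6500 entries belong to D1
grad_flat = np.squeeze(np.asarray(grad))      # guard against np.matrix shapes
d1_diff = np.abs(grad_flat[:n1] - grad_approx[:n1])
d2_diff = np.abs(grad_flat[n1:] - grad_approx[n1:])
print("D1 block max diff:", d1_diff.max())
print("D2 block max diff:", d2_diff.max())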
The checkGrad function:
def checkGrad(x,y,theta1,theta2):
    eps = 0.0000001 #0.00001
    theta = np.concatenate((np.asarray(theta1).flatten(),np.asarray(theta2).flatten()))
    gradApprox = np.zeros(len(theta))
    thetaPlus = np.copy(theta)
    thetaMinus = np.copy(theta)
    print("Total iterations to be made",len(theta))
    for i in range(len(theta)):
        if(i % 100 == 0):
            print("iteration",i)
        if(i != 0):
            # undo the perturbation applied in the previous iteration
            thetaPlus[i-1] = thetaPlus[i-1]-eps
            thetaMinus[i-1] = thetaMinus[i-1]+eps
        thetaPlus[i] = theta[i]+eps
        thetaMinus[i] = theta[i]-eps
        cost1,grad1 = cost(thetaPlus,x,y,theta1.shape,theta2.shape)
        cost2,grad2 = cost(thetaMinus,x,y,theta1.shape,theta2.shape)
        # central-difference estimate of the i-th partial derivative
        gradApprox[i] = (cost1 - cost2)/(2*eps)
    return gradApprox
I believe I am making some rookie mistake. I realize this is a lot of code to go through, but anyone with experience in this area might be able to spot what I am doing wrong.
EDIT: To clarify further: I use the checkGrad function to verify that the derivatives (D1 and D2) that I compute with backpropagation for the parameters theta1 and theta2 are correct. "lamda" is the regularization constant, set to a typical value of 0.5. expit is the sigmoid function from scipy.special.
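For completeness, expit is just the numerically stable logistic sigmoid; a quick check of the equivalence:

import numpy as np
from scipy.special import expit

z = np.array([-2.0, 0.0, 2.0])
# expit(z) is the logistic sigmoid 1 / (1 + exp(-z))
print(np.allclose(expit(z), 1.0 / (1.0 + np.exp(-z))))  # True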
Answer (score: 0)
General approach
If this is your first NN, then I suggest you compute the gradient numerically with the network itself:
Here is a simple example:
from copy import deepcopy as dc

def gradient(net, weights, dx=0.01):
    der = []
    # compute partial derivatives
    for i in range(len(weights)):
        w = dc(weights) # dc is needed
        w[i] += dx
        der.append((net(w) - net(weights)) / dx) # dy/dx
    return der
Extra tip: be sure to use deepcopy where it is needed; this took me a long time to figure out!
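For example, a toy usage sketch; `quadratic` here is just a hypothetical stand-in for a real network's scalar loss:

# Toy "net": a scalar function of the weight list, standing in for a loss.
def quadratic(w):
    return w[0] ** 2 + 3 * w[1]

weights = [1.0, 2.0]
print(gradient(quadratic, weights, dx=1e-5))  # approximately [2.0, 3.0]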