Computing the gradients of a NN in pure Python

Posted: 2021-03-14 19:59:07

Tags: python numpy machine-learning neural-network backpropagation

import numpy

# Data and parameters

X  = numpy.array([[-1.086,  0.997,  0.283, -1.506]])
T  = numpy.array([[-0.579]])
W1 = numpy.array([[-0.339, -0.047,  0.746, -0.319, -0.222, -0.217],
                  [ 1.103,  1.093,  0.502,  0.193,  0.369,  0.745],
                  [-0.468,  0.588, -0.627, -0.319,  0.454, -0.714],
                  [-0.070, -0.431, -0.128, -1.399, -0.886, -0.350]])
W2 = numpy.array([[ 0.379, -0.071,  0.001,  0.281, -0.359,  0.116],
                  [-0.329, -0.705, -0.160,  0.234,  0.138, -0.005],
                  [ 0.977,  0.169,  0.400,  0.914, -0.528, -0.424],
                  [ 0.712, -0.326,  0.012,  0.437,  0.364,  0.716],
                  [ 0.611,  0.437, -0.315,  0.325,  0.128, -0.541],
                  [ 0.579,  0.330,  0.019, -0.095, -0.489,  0.081]])
W3 = numpy.array([[ 0.191, -0.339,  0.474, -0.448, -0.867,  0.424],
                  [-0.165, -0.051, -0.342, -0.656,  0.512, -0.281],
                  [ 0.678,  0.330, -0.128, -0.443, -0.299, -0.495],
                  [ 0.852,  0.067,  0.470, -0.517,  0.074,  0.481],
                  [-0.137,  0.421, -0.443, -0.557,  0.155, -0.155],
                  [ 0.262, -0.807,  0.291,  1.061, -0.010,  0.014]])
W4 = numpy.array([[ 0.073],
                  [-0.760],
                  [ 0.174],
                  [-0.655],
                  [-0.175],
                  [ 0.507]])
B1 = numpy.array([-0.760,  0.174, -0.655, -0.175,  0.507, -0.300])
B2 = numpy.array([ 0.205,  0.413,  0.114, -0.560, -0.136,  0.800])
B3 = numpy.array([-0.827, -0.113, -0.225,  0.049,  0.305,  0.657])
B4 = numpy.array([-0.270])

# Forward pass

Z1 = X.dot(W1) + B1
A1 = numpy.maximum(0, Z1)   # ReLU
Z2 = A1.dot(W2) + B2
A2 = numpy.maximum(0, Z2)   # ReLU
Z3 = A2.dot(W3) + B3
A3 = numpy.maximum(0, Z3)   # ReLU
Y  = A3.dot(W4) + B4        # linear output layer

# Error

err = ((Y-T)**2).mean()

Given this example, I would like to implement backpropagation and obtain the gradients with respect to the weight and bias parameters. Apparently, the gradients of the last layer are as follows:

DY = 2*(Y-T)
DB4 = DY.mean(axis=0)
DW4 = A3.T.dot(DY) / len(X)
DZ3 = DY.dot(W4.T)*(Z3 > 0)

I do know that the chain rule is used to compute the different derivatives, but I don't quite understand how one arrives at this solution.

2 Answers:

Answer 0 (score: 1)

For example, DY is the derivative of err with respect to Y, so

d/dY (Y - T)**2 == 2 * (Y - T)

That is a plain old derivative, no chain rule yet.

Likewise for DB4, using the chain rule:

d/dB4 err == d/dB4 (A3 @ W4 + B4 - T)**2
          == 2 * (A3 @ W4 + B4 - T) * d/dB4 (A3 @ W4 + B4 - T)
          == 2 * (A3 @ W4 + B4 - T) * 1
          == 2 * (Y - T)
          == DY

And DW4 is:

d/dW4 err == d/dW4 (A3 @ W4 + B4 - T)**2
          == 2 * (A3 @ W4 + B4 - T) @ (d/dW4 (A3 @ W4 + B4 - T))
          == 2 * (Y - T) @ A3.T
          [must match matrix shape]
          == A3.T @ DY

The trick behind A3.T @ DY is that d/dW4 (A3 @ W4) = A3.T; see https://math.stackexchange.com/questions/1866757/not-understanding-derivative-of-a-matrix-matrix-product for the derivative of a matrix-matrix product.
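
One way to convince yourself of the transpose and the ordering is a numerical check of the analytic gradient. Below is a minimal sketch (the helper names eps and DW4_num are mine), assuming the variables A3, B4, T, W4, DY and X from the question's snippet are in scope:

import numpy

# Central-difference check of DW4: perturb each entry of W4 and re-evaluate err.
eps = 1e-6
DW4_num = numpy.zeros_like(W4)
for i in range(W4.shape[0]):
    for j in range(W4.shape[1]):
        W_plus, W_minus = W4.copy(), W4.copy()
        W_plus[i, j]  += eps
        W_minus[i, j] -= eps
        err_plus  = ((A3.dot(W_plus)  + B4 - T) ** 2).mean()
        err_minus = ((A3.dot(W_minus) + B4 - T) ** 2).mean()
        DW4_num[i, j] = (err_plus - err_minus) / (2 * eps)

print(numpy.allclose(DW4_num, A3.T.dot(DY) / len(X), atol=1e-6))  # expect True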

To differentiate through the computation of A3, i.e. to get DZ3 == d/dZ3 err, you have to take the activation function into account (TBH, I think Y = A3.dot(W4)+B4 should be Y = numpy.maximum(0, A3.dot(W4)+B4), since the final output should be the result of an activation function, but maybe your network architecture just doesn't do that), which in your case is the ReLU.
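
Once DZ3 is available, the earlier layers repeat exactly the same three-step pattern (weight gradient, bias gradient, then the pre-activation gradient through the ReLU mask). Here is a minimal sketch, assuming the variables from the question's forward pass and from the snippet above are in scope:

DW3 = A2.T.dot(DZ3) / len(X)        # gradient w.r.t. W3
DB3 = DZ3.mean(axis=0)              # gradient w.r.t. B3
DZ2 = DZ3.dot(W3.T) * (Z2 > 0)      # ReLU mask from the forward pass

DW2 = A1.T.dot(DZ2) / len(X)        # gradient w.r.t. W2
DB2 = DZ2.mean(axis=0)              # gradient w.r.t. B2
DZ1 = DZ2.dot(W2.T) * (Z1 > 0)      # ReLU mask from the forward pass

DW1 = X.T.dot(DZ1) / len(X)         # gradient w.r.t. W1
DB1 = DZ1.mean(axis=0)              # gradient w.r.t. B1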

Answer 1 (score: 1)

Let's use the chain rule for (partial) derivatives and the rules of matrix differentiation, with reference to the figure below, which shows the last hidden layer of the neural network and the backpropagation of the regression (MSE) error:

[figure: backpropagation of the MSE error through the last layer of the network]

E = err = (Y - T)**2   (averaged over the batch to compute the MSE)

DY = ∂E/∂Y = 2 * (Y - T)

∂E/∂W4 = (∂E/∂Y).(∂Y/∂W4)
       = DY.(∂/∂W4 (A3.W4 + B4)) = DY.A3.T
       = A3.T.DY   (averaged over all training examples in the batch X: sum and divide by the batch size |X|)

∂E/∂B4 = (∂E/∂Y).(∂Y/∂B4)
       = DY.(∂/∂B4 (A3.W4 + B4)) = DY.1
       = DY   (averaged over all examples in the batch)

∂E/∂Z3 = (∂E/∂Y).(∂Y/∂A3).(∂A3/∂Z3)
       = DY.(∂/∂A3 (A3.W4 + B4)).(1.𝟙{Z3 > 0} + 0.𝟙{Z3 <= 0})
       = DY.W4.T.𝟙{Z3 > 0}

where 𝟙(.) is the indicator function: by the definition of the nonlinear ReLU activation, the derivative is 1 where Z3 > 0 and 0 otherwise.
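
Translated into NumPy with the question's variable names, the three results above correspond line by line to the snippet already given in the question; a sketch, assuming the forward-pass variables are in scope:

DY  = 2 * (Y - T)                   # ∂E/∂Y
DW4 = A3.T.dot(DY) / len(X)         # ∂E/∂W4, averaged over the batch
DB4 = DY.mean(axis=0)               # ∂E/∂B4, averaged over the batch
DZ3 = DY.dot(W4.T) * (Z3 > 0)       # ∂E/∂Z3; (Z3 > 0) is the ReLU indicator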