Backpropagation for fully connected layer code

Time: 2020-06-05 10:00:09

Tags: python numpy matrix neural-network matrix-multiplication

Hi everyone, I am implementing a neural network library. I ran into a problem when coding backpropagation for the fully connected layer: when I multiply during the feedforward pass, I "lose" the input dimension. For example, if my input is a matrix of shape 10x640 and the weights have shape 10x640x1000, the weighted output has shape 10x1000 (which is expected). However, this means that when I try to compute the next layer's error (or the previous layer's, depending on direction) during backpropagation, I cannot do the multiplication without an error. Here is my code:

import numpy as np

# Feedforward
# input.shape   = 10x640      (batch of 10 samples, 640 features each)
# weights.shape = 10x640x1000 (one 640x1000 weight matrix per sample)
# z.shape       = 10x1000
dots = []
for i in range(len(input)):
    dots.append(np.dot(input[i], weights[i]))  # (640,) @ (640, 1000) -> (1000,)
z = np.add(dots, biases)  # biases broadcasts across the batch
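
For reference, the per-sample loop above is equivalent to a single batched multiplication. Here is a minimal runnable sketch using np.einsum with random stand-ins for input, weights, and biases (the bias shape of 1000 is an assumption, since it is not given in the question):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 640))        # stand-in for input
W = rng.standard_normal((10, 640, 1000))  # stand-in for weights
b = rng.standard_normal(1000)             # stand-in for biases (assumed shape)

# Per-sample loop, as in the question: x[i] @ W[i] -> (1000,)
z_loop = np.array([np.dot(x[i], W[i]) for i in range(len(x))]) + b
# Same computation in one batched call
z_vec = np.einsum('bi,bio->bo', x, W) + b

print(z_vec.shape)                 # (10, 1000)
print(np.allclose(z_loop, z_vec))  # True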

# Backpropagation
# thisLayerError.shape       = 10x1000
# activationDerivative.shape = 10x1000
# dots.shape                 = 10x640
activationDerivative = activationFunction.derivative(z)
dots = []
for i in range(len(thisLayerError)):
    dots.append(np.dot(weights[i], thisLayerError[i]))  # (640, 1000) @ (1000,) -> (640,)
nextLayerError = np.multiply(dots, activationDerivative)  # this throws: 10x640 cannot broadcast with 10x1000
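
The failure is easy to reproduce in isolation. A minimal sketch with random stand-ins (delta and act_deriv are placeholder names for thisLayerError and activationDerivative) showing the two shapes that collide:

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 640, 1000))
delta = rng.standard_normal((10, 1000))      # stand-in for thisLayerError
act_deriv = rng.standard_normal((10, 1000))  # stand-in for activationDerivative

# W[i] @ delta[i]: (640, 1000) @ (1000,) -> (640,), so the batch is 10x640
back = np.array([np.dot(W[i], delta[i]) for i in range(len(delta))])
print(back.shape)  # (10, 640)

try:
    np.multiply(back, act_deriv)  # element-wise 10x640 vs 10x1000
except ValueError as e:
    print(e)  # operands could not be broadcast together ...

Note that in the textbook backpropagation formula the propagated error (shape 10x640 here) is multiplied element-wise by the activation derivative of the *previous* layer's pre-activation, which also has shape 10x640, not by the derivative of this layer's z.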

0 Answers