How would one use PyTorch's autograd to backpropagate through multiple layers (matrices)?

Date: 2018-06-20 00:54:44

Tags: neural-network pytorch backpropagation

Just as a learning exercise, I'm trying to modify this three-layer neural network written from scratch so that everything is still done by hand, except that I want to use PyTorch's autograd to compute the deltas and the backward pass for the weights. Specifically, I just want to take the mean squared error of the prediction against y, store it as `loss`, and call `loss.backward()` on it.
I built a working linear regression model with autograd this way, but the same approach doesn't work here.
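For reference, the single-matrix pattern that worked there looks roughly like the sketch below (rewritten with plain requires_grad=True tensors rather than the V wrapper used further down; the learning rate of 0.1 is illustrative and not from the original code):

import torch

torch.manual_seed(111)

X = torch.randn(6, 4)
y = torch.randn(6, 1)
w = torch.randn(4, 1, requires_grad=True)  # leaf tensor tracked by autograd

for _ in range(100):
    pred = X @ w                        # forward pass rebuilt every iteration
    loss = ((pred - y) ** 2).mean()     # scalar MSE, so backward() needs no argument
    loss.backward()                     # populates w.grad
    with torch.no_grad():               # keep the update itself out of the graph
        w -= 0.1 * w.grad
    w.grad.zero_()                      # gradients accumulate unless cleared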

I've tried defining the layers and weights both inside and outside the for loop, and I've tried setting requires_grad=True on everything.
As it stands, this is the error I get with Python 3.6:

Traceback (most recent call last):
  File "nn_scratch_trash.py", line 96, in <module>
    layer_2.data -= layer_2.grad.data
AttributeError: 'NoneType' object has no attribute 'data'  

The code...

import numpy as np
import random
import torch
# `V` is used below, but its import isn't shown in the question; it is
# presumably torch.autograd.Variable or a similar wrapper around arrays.

np.random.seed(111)

X = np.random.randn(6, 4)
y = np.random.randn(6, 1)


def zeround(x):
    # Sigmoid, then round to 0 or 1
    x = 1 / (1 + np.exp(-x))
    return np.around(x)

# Turn the random X and y arrays into 1s and 0s with the zeround function above
y = zeround(y)
X = zeround(X)

X, y = V(X), V(y)

# Randomly initialize the weights
weight_0 = V(np.random.randn(4, 6), requires_grad=True) - 1
weight_1 = V(np.random.randn(6, 7), requires_grad=True) - 1
weight_2 = V(np.random.randn(7, 1), requires_grad=True) - 1

# Feed forward through layers 0, 1, and 2
layer_1 = V(X @ weight_0, requires_grad=True)
layer_2 = V(layer_1 @ weight_1, requires_grad=True)
layer_3 = V(layer_2 @ weight_2, requires_grad=True)

print(layer_1)
print(layer_2)
print(layer_3)
print(y)

for j in range(10000):
    # Backward pass with autograd: mean squared error of the prediction vs. y
    loss = ((layer_3 - y) ** 2).mean()
    loss.backward()

    print(weight_2.requires_grad)

    # UPDATE WEIGHTS
    weight_0.data -= weight_0.grad.data
    weight_1.data -= weight_1.grad.data
    weight_2.data -= weight_2.grad.data

    # Zero the gradients for the next iteration
    weight_0.grad.data.zero_()
    weight_1.grad.data.zero_()
    weight_2.grad.data.zero_()

    if j == 9999:
        print(loss.item())
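For comparison, a working version of the same three-layer setup might look roughly like the following sketch. It assumes PyTorch 0.4+ tensors rather than the V wrapper, rebuilds the forward pass inside the training loop so each backward() call sees a fresh graph, and uses an illustrative learning rate that is not in the code above:

import numpy as np
import torch

np.random.seed(111)
torch.manual_seed(111)

def zeround(x):
    # Sigmoid followed by rounding, as in the question
    x = 1 / (1 + np.exp(-x))
    return np.around(x)

X = torch.from_numpy(zeround(np.random.randn(6, 4))).float()
y = torch.from_numpy(zeround(np.random.randn(6, 1))).float()

# Initialize first, then flag for autograd, so the weights stay leaf
# tensors and loss.backward() populates their .grad attributes
weight_0 = (torch.randn(4, 6) - 1).requires_grad_()
weight_1 = (torch.randn(6, 7) - 1).requires_grad_()
weight_2 = (torch.randn(7, 1) - 1).requires_grad_()

lr = 0.01  # illustrative learning rate, not from the question

for j in range(10000):
    # Rebuild the forward pass each iteration: a fresh graph per backward()
    layer_1 = X @ weight_0
    layer_2 = layer_1 @ weight_1
    layer_3 = layer_2 @ weight_2

    loss = ((layer_3 - y) ** 2).mean()  # scalar MSE, so backward() needs no argument
    loss.backward()                     # fills weight_*.grad

    with torch.no_grad():               # keep the updates out of the graph
        weight_0 -= lr * weight_0.grad
        weight_1 -= lr * weight_1.grad
        weight_2 -= lr * weight_2.grad

    weight_0.grad.zero_()
    weight_1.grad.zero_()
    weight_2.grad.zero_()

    if j == 9999:
        print(loss.item())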

0 Answers:

No answers