PyTorch .step() not updating

Date: 2019-03-31 19:46:44

Tags: regression linear-regression pytorch prediction

I am trying to convert some old code to PyTorch as an experiment. Ultimately, I will be doing regression on a 10,000+ x 100 matrix, updating weights and so on.

To learn, I am gradually scaling up toy examples, and I ran into the sample code below.

import torch 
import torch.nn as nn 
import torch.nn.functional as funct  
from torch.autograd import Variable 

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 

x_data = Variable( torch.Tensor( [ [1.0, 2.0], [2.0, 3.0], [3.0, 4.0] ] ),
                   requires_grad=True )
y_data = Variable( torch.Tensor( [ [2.0], [4.0], [6.0] ] ) ) 

w = Variable( torch.randn( 2, 1, requires_grad=True ) )

b = Variable( torch.randn( 1, 1, requires_grad=True ) )


class Model(torch.nn.Module) :
    def __init__(self) :
        super( Model, self).__init__()
        self.linear = torch.nn.Linear(2,1) ## 2 features per entry. 1 output
    def forward(self, x2, w2, b2) :
        y_pred = x2 @ w2 + b2
        return y_pred


model = Model()

criterion = torch.nn.MSELoss( size_average=False )
optimizer = torch.optim.SGD( model.parameters(), lr=0.01 )

for epoch in range(10) :
    y_pred = model( x_data,w,b ) # Get prediction
    loss = criterion( y_pred, y_data ) # Calc loss
    print( epoch, loss.data.item() ) # Print loss
    optimizer.zero_grad() # Zero gradient 
    loss.backward() # Calculate gradients
    optimizer.step() # Update w, b

However, when I run this, my loss is always the same, and investigating shows that my w and b never actually change. I am a bit lost as to what is going on here.

Ultimately, I would like to be able to store the results of the "new" w and b to compare across iterations and datasets.

1 Answer:

Answer 0 (score: 2)

This looks to me like a case of cargo-cult programming.

Note that your Model class does not make use of self in forward, so it is effectively a "regular" (non-method) function, and model is entirely stateless. Meanwhile, optimizer was given model.parameters(), that is, the parameters of the unused self.linear layer, so it never sees w and b, and stepping it leaves them (and therefore the loss) unchanged. The simplest fix to your code is to make optimizer aware of w and b, by creating it as optimizer = torch.optim.SGD([w, b], lr=0.01). I also rewrite model to be a function

import torch
import torch.nn as nn
# torch.autograd.Variable is roughly equivalent to requires_grad=True
# and has been deprecated since Tensors and Variables merged in PyTorch 0.4

# your code gives no reason to have `requires_grad=True` on `x_data`
x_data = torch.tensor([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

w = torch.randn(2, 1, requires_grad=True)
b = torch.randn(1, 1, requires_grad=True)

def model(x2, w2, b2):
    return x2 @ w2 + b2

# size_average=False is deprecated; reduction='sum' is the equivalent
criterion = nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD([w, b], lr=0.01)  # the optimizer now sees w and b

for epoch in range(10):
    y_pred = model(x_data, w, b)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())  # .data is unnecessary here
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
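
As an aside, since the question mentions wanting to store the "new" w and b across iterations: with the raw-tensor version above you can snapshot detached copies each epoch. A minimal sketch (the history list is my own bookkeeping, not part of the original code):

history = []  # one (w, b) snapshot per epoch
for epoch in range(10):
    y_pred = model(x_data, w, b)
    loss = criterion(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # detach().clone() copies the current values without autograd history
    history.append((w.detach().clone(), b.detach().clone()))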

That being said, nn.Linear was created to simplify exactly this procedure. It automatically creates equivalents of w and b, called self.weight and self.bias, respectively. Also, calling the module, self.__call__(x), is equivalent to your Model's forward definition, in that it returns x @ self.weight.t() + self.bias. In other words, you can also use the alternative code

import torch
import torch.nn as nn

x_data = torch.tensor([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

# creates its own weight (shape (1, 2)) and bias (shape (1,)) parameters
model = nn.Linear(2, 1)

criterion = nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

where model.parameters() can be used to enumerate the model's parameters (equivalent to the manually created list [w, b] above). To access the parameters (for loading, saving, printing, and so on), use model.weight and model.bias.
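
For instance, here is a minimal sketch of inspecting and persisting the parameters (the file name linear.pt is an arbitrary choice of mine):

# enumerate the parameters exactly as the optimizer receives them
for name, p in model.named_parameters():
    print(name, p)

# the module call is the same affine map described above
assert torch.allclose(model(x_data), x_data @ model.weight.t() + model.bias)

# save and reload the trained parameters between runs
torch.save(model.state_dict(), 'linear.pt')
model.load_state_dict(torch.load('linear.pt'))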