I have a very strange problem. I have done linear regression with similar code before, but when I changed the code to polynomial regression in Torch, the loss becomes NaN. What am I doing wrong?
x = np.linspace(-10,10,100)
y = x**2+3.2*x+2.4+np.random.normal(scale=6,size=100)
x = torch.from_numpy(np.float32(x))
y = torch.from_numpy(np.float32(y))
a = torch.autograd.Variable(torch.FloatTensor(1), requires_grad=True)
b = torch.autograd.Variable(torch.FloatTensor(1), requires_grad=True)
c = torch.autograd.Variable(torch.FloatTensor(1), requires_grad=True)
def model(x):
    return a*x**2+b*x+c
criterion = torch.nn.MSELoss()
l_rate = 0.01
optimiser = torch.optim.SGD([a,b], lr = l_rate)
epochs = 300
for epoch in range(epochs):
    optimiser.zero_grad()
    outputs = model(x)
    loss = criterion(outputs, y)
    loss.backward()
    optimiser.step()
    if epoch%25==0:
        print('epoch {}, loss {}'.format(epoch,loss.data.numpy()))
My output:
epoch 0, loss 2646.07666015625
epoch 25, loss nan
epoch 50, loss nan
epoch 75, loss nan
epoch 100, loss nan
epoch 125, loss nan
epoch 150, loss nan
epoch 175, loss nan
epoch 200, loss nan
epoch 225, loss nan
epoch 250, loss nan
epoch 275, loss nan
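For context, two things in the code above look likely to cause the divergence: `c` is missing from the optimiser's parameter list, and with `x` spanning [-10, 10], `x**2` reaches 100, so the gradients on `a` are large enough that `lr = 0.01` makes SGD overshoot and blow up to NaN. Below is a minimal sketch of the same regression with all three parameters registered, explicit zero initialisation (instead of the uninitialised `torch.FloatTensor(1)`), and a smaller learning rate; the exact `lr` value here is an assumption, not a tuned choice:

```python
import numpy as np
import torch

# Same synthetic data as in the question.
x = np.linspace(-10, 10, 100)
y = x**2 + 3.2*x + 2.4 + np.random.normal(scale=6, size=100)
x = torch.from_numpy(x.astype(np.float32))
y = torch.from_numpy(y.astype(np.float32))

# Initialise all parameters explicitly; Variable is no longer needed
# in modern PyTorch, a tensor with requires_grad=True suffices.
a = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
c = torch.zeros(1, requires_grad=True)

def model(x):
    return a * x**2 + b * x + c

criterion = torch.nn.MSELoss()
# Register a, b AND c; use a learning rate small enough for the
# large x**2 feature (an assumed value, tune as needed).
optimiser = torch.optim.SGD([a, b, c], lr=1e-4)

for epoch in range(300):
    optimiser.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimiser.step()

print(loss.item())  # finite, decreasing loss instead of NaN
```

A common alternative to shrinking the learning rate is to normalise the inputs (e.g. scale `x` to roughly unit range) so the `x**2` feature no longer dominates the gradient magnitudes.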