I'm trying to do polynomial regression with PyTorch. First, I tried plain linear regression (b + wx).
import torch

model_1 = RegressionModel()  # instantiated but not actually used below

W = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer_1 = torch.optim.SGD([W, b], lr=0.001)

x_train = torch.FloatTensor(dataset.x_data['LSTAT'])
y_train = torch.FloatTensor(dataset.data['target'])

nb_epochs = 10000
for epoch in range(nb_epochs + 1):
    hypothesis = x_train * W + b
    cost = torch.nn.functional.mse_loss(hypothesis, y_train.float())

    optimizer_1.zero_grad()
    cost.backward()
    optimizer_1.step()

    print('Epoch {:4d}/{} W: {:.3f}, b: {:.3f}, Cost: {:.6f}'.format(
        epoch, nb_epochs, W.item(), b.item(), cost.item()))
Then I changed and added some variables to do polynomial regression (b + w1x + w2x^2):
model_2 = RegressionModel()  # again instantiated but not used below

W1 = torch.zeros(1, requires_grad=True)
W2 = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer_2 = torch.optim.SGD([W2, W1, b], lr=0.0000099)

x_train = torch.FloatTensor(dataset.x_data['LSTAT'])
y_train = torch.FloatTensor(dataset.data['target'])

nb_epochs = 10000
for epoch in range(nb_epochs + 1):
    hypothesis = b + x_train * W1 + x_train * x_train * W2
    cost = torch.nn.functional.mse_loss(hypothesis, y_train.float())

    optimizer_2.zero_grad()
    cost.backward()
    optimizer_2.step()

    print('Epoch {:4d}/{} W1: {:.3f}, W2: {:.3f}, b: {:.3f}, Cost: {:.6f}'.format(
        epoch, nb_epochs, W1.item(), W2.item(), b.item(), cost.item()))
Can I do polynomial regression this way? If not, I would really appreciate it if you could let me know. I'm really not a noob...
Answer 0 (score: 1)
Your code should work. With larger data, though, it is more efficient to do the regression in a single matrix operation. For that, you need to precompute the polynomial of your input features first:
x_train_polynomial = torch.stack([x_train, x_train ** 2], dim=1)
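If x_train is a 1-D tensor of shape (N,), this stacks the two feature columns into a tensor of shape (N, 2), one column per polynomial degree, which matches the two input features of the linear layer below.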
To save some lines, you can rewrite the projection as a linear layer:
import torch.nn as nn
projection = nn.Linear(2, 1, bias=True)
Then, inside the training loop, you call:
hypothesis = projection(x_train_polynomial)
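For completeness, here is a minimal sketch of how those pieces could be assembled into one training loop. The learning rate is a placeholder you would need to tune, and y_train is reshaped to (N, 1) so it matches the (N, 1) output of the linear layer; registering the layer's parameters with the optimizer replaces the manual [W2, W1, b] list:

import torch
import torch.nn as nn

# Precomputed polynomial features, shape (N, 2)
x_train_polynomial = torch.stack([x_train, x_train ** 2], dim=1)

# One linear layer replaces the manual W1, W2 and b parameters
projection = nn.Linear(2, 1, bias=True)
optimizer = torch.optim.SGD(projection.parameters(), lr=1e-6)  # placeholder learning rate

y = y_train.float().unsqueeze(1)  # shape (N, 1), matching the layer's output

nb_epochs = 10000
for epoch in range(nb_epochs + 1):
    hypothesis = projection(x_train_polynomial)  # shape (N, 1)
    cost = torch.nn.functional.mse_loss(hypothesis, y)

    optimizer.zero_grad()
    cost.backward()
    optimizer.step()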