Loss does not decrease when doing linear regression with PyTorch

Date: 2020-07-12 15:31:29

Tags: python machine-learning neural-network pytorch linear-regression

I am working on a linear regression problem with PyTorch, using the House Prices dataset from Kaggle. While training the model, I see that the loss does not decrease; it shows an erratic pattern. This is the loss over 100 epochs:

Epoch [10/100], Loss: 222273830912.0000
Epoch [20/100], Loss: 348813688832.0000
Epoch [30/100], Loss: 85658296320.0000
Epoch [40/100], Loss: 290305572864.0000
Epoch [50/100], Loss: 59399933952.0000
Epoch [60/100], Loss: 80360054784.0000
Epoch [70/100], Loss: 90352918528.0000
Epoch [80/100], Loss: 534457679872.0000
Epoch [90/100], Loss: 256064503808.0000
Epoch [100/100], Loss: 102400483328.0000

Here is the code:

import torch
import numpy as np
from torch.utils.data import TensorDataset
import torch.nn as nn
from torch.utils.data import DataLoader
import torch.nn.functional as F

inputs = normalized_X
targets = np.array(train_y)

# Tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
targets = targets.view(-1, 1)
train_ds = TensorDataset(inputs, targets.squeeze())
batch_size = 5
train_dl = DataLoader(train_ds, batch_size, shuffle=True)

model = nn.Linear(10, 1)
# Define Loss func
loss_fn = F.mse_loss
# Optimizer
opt = torch.optim.SGD(model.parameters(), lr = 1e-1)


num_epochs = 100
model.train()
for epoch in range(num_epochs):
    # Train with batches of data
    for xb, yb in train_dl:

        # 1. Generate predictions
        pred = model(xb.float())

        # 2. Calculate loss
        yb = yb.view(yb.size(0), -1)
        loss = loss_fn(pred, yb.float())
    
        # 3. Compute gradients
        loss.backward()

        # 4. Update parameters using gradients
        opt.step()

        # 5. Reset the gradients to zero
        opt.zero_grad()

    if (epoch+1) % 10 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch +
                                                   1, num_epochs, 
                                                   loss.item()))

2 Answers:

Answer 0 (score: 1)

I ran the code you provided and got this warning:

    p.py:38: UserWarning: Using a target size (torch.Size([50])) that is
    different to the input size (torch.Size([50, 1])). This will likely lead
    to incorrect results due to broadcasting. Please ensure they have the same size.

Your problem is caused by the mismatch in size between pred and yb.
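To see the broadcasting issue in isolation, here is a minimal sketch. The tensors are made up to match the shapes in the warning above; they are not your actual data:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
pred = torch.randn(50, 1)      # model output with shape (batch, 1)
target = torch.randn(50)       # 1-D target with shape (batch,)

# (50, 1) vs (50,) broadcasts to (50, 50): every prediction is compared
# against every target, which silently distorts the loss (and triggers
# the UserWarning quoted above).
wrong = F.mse_loss(pred, target)

# Reshaping the target to (batch, 1) restores the element-wise comparison.
right = F.mse_loss(pred, target.view(-1, 1))
print(wrong.item(), right.item())  # the two values differ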

This code shows the full fix:

import torch
import numpy as np
from torch.utils.data import TensorDataset
import torch.nn as nn
from torch.utils.data import DataLoader
import torch.nn.functional as F

inputs = np.random.rand(50, 10)
targets = np.random.randint(0, 2, 50)

# Tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
train_ds = TensorDataset(inputs, targets.squeeze())
batch_size = 100
train_dl = DataLoader(train_ds, batch_size, shuffle=True)

model = nn.Linear(10, 1)
# Define Loss func
loss_fn = F.mse_loss
# Optimizer
opt = torch.optim.SGD(model.parameters(), lr = 1e-1)


num_epochs = 100
model.train()
for epoch in range(num_epochs):
    # Train with batches of data
    for xb, yb in train_dl:

        # 1. Generate predictions
        pred = model(xb.float())

        # 2. Calculate loss
        yb = yb.view(yb.size(0), -1)
        loss = loss_fn(pred, yb.float())

        # 3. Compute gradients
        loss.backward()

        # 4. Update parameters using gradients
        opt.step()

        # 5. Reset the gradients to zero
        opt.zero_grad()

    if (epoch+1) % 10 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1,
                                                   num_epochs,
                                                   loss.item()))

This discussion goes through it in detail: https://discuss.pytorch.org/t/target-size-torch-size-10-must-be-the-same-as-input-size-torch-size-2/72354/6

Answer 1 (score: 0)

My earlier comment was not valid, so I deleted it. Your example code works as expected: you are trying to predict a random variable from independent random variables. There is no pattern to learn, which is why the loss does not converge.
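Here is a minimal sketch of that point, with hypothetical random data rather than the original Kaggle dataset: when the targets are independent of the inputs, the best a linear model can do is predict the target mean, so the MSE plateaus near the variance of the targets instead of going to zero:

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(500, 10)   # random inputs
y = torch.randn(500, 1)    # targets independent of the inputs

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Full-batch gradient descent; the loss stops improving once the model
# has learned to output (approximately) the mean of y.
for step in range(1000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(loss.item(), y.var().item())  # the two values end up close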