I'm trying out a simple multi-label classification example, but the network does not seem to train properly because the loss stagnates.
I've used multilabel_soft_margin_loss as the PyTorch docs suggest, but there isn't much more than that to go on, and I couldn't find any proper examples in the docs.
Can anyone take a look and point out what's wrong with it? Full example below (there is also a question about predictions further down).
Full example code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from sklearn.datasets import make_multilabel_classification
from torch.utils.data import TensorDataset, DataLoader
from sklearn.model_selection import train_test_split
import xgboost as xgb
from sklearn.metrics import accuracy_score
num_classes = 3
X, y = make_multilabel_classification(n_samples=1000,n_classes=num_classes)
X_tensor, y_tensor = torch.tensor(X), torch.tensor(y)
print("X Shape :{}".format(X_tensor.shape))
print("y Shape :{}".format(y_tensor.shape))
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(X.shape[1], 300)
        self.fc2 = nn.Linear(300, 10)
        self.fc3 = nn.Linear(10, num_classes)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
device = torch.device("cpu")
lr = 1
batch_size = 128
gamma = 0.9
epochs = 100
args = {'log_interval': 10, 'dry_run':False}
kwargs = {'batch_size': batch_size}
kwargs.update({'num_workers': 1,
               'pin_memory': True,
               'shuffle': True})
model = Net().to(device)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=0.1)
scheduler = StepLR(optimizer, step_size=1, gamma=gamma)
# data loader
my_dataset = TensorDataset(X_tensor,y_tensor) # create tensor dataset
train_dataset, test_dataset = train_test_split(
    my_dataset, test_size=0.2, random_state=42)
train_loader = DataLoader(train_dataset,**kwargs)
test_loader = DataLoader(test_dataset,**kwargs)
## Train step ##
for epoch in range(1, epochs + 1):
    model.train()  # set model to train
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data.float())
        loss = F.multilabel_soft_margin_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args['log_interval'] == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            if args['dry_run']:
                break
    scheduler.step()
Training loss progression
Train Epoch: 1 [0/800 (0%)] Loss: 0.694400
Train Epoch: 2 [0/800 (0%)] Loss: 0.697095
Train Epoch: 3 [0/800 (0%)] Loss: 0.705593
Train Epoch: 4 [0/800 (0%)] Loss: 0.651981
Train Epoch: 5 [0/800 (0%)] Loss: 0.704895
Train Epoch: 6 [0/800 (0%)] Loss: 0.650302
Train Epoch: 7 [0/800 (0%)] Loss: 0.658809
Train Epoch: 8 [0/800 (0%)] Loss: 0.904834
Train Epoch: 9 [0/800 (0%)] Loss: 0.655516
Train Epoch: 10 [0/800 (0%)] Loss: 0.662808
Train Epoch: 11 [0/800 (0%)] Loss: 0.664752
Train Epoch: 12 [0/800 (0%)] Loss: 0.656390
Train Epoch: 13 [0/800 (0%)] Loss: 0.664982
Train Epoch: 14 [0/800 (0%)] Loss: 0.664430
Train Epoch: 15 [0/800 (0%)] Loss: 0.664603 # stagnates
On top of that, how would I get predictions for this? Taking the argmax would not be right for a multi-label problem, would it? (Sample network output below)
Output
tensor([[ 0.2711,  0.1754, -0.3354],
        [ 0.2711,  0.1754, -0.3354],
        [ 0.2711,  0.1754, -0.3354],
        [ 0.2711,  0.1754, -0.3354],
        [ 0.2711,  0.1754, -0.3354],
        [ 0.2711,  0.1754, -0.3354],
        [ 0.2711,  0.1754, -0.3354]])
Thanks!
Answer 0 (Score: 1)
On top of that, how would I get predictions for this?
If this is a multi-label task and you are outputting raw logits (as you are), then simply do:
output = model(data.float())
labels = output > 0
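For intuition, thresholding the logits at 0 is equivalent to applying a sigmoid and thresholding the resulting probabilities at 0.5, since sigmoid(0) = 0.5. An argmax, by contrast, would force exactly one label per sample, which is not what you want when any number of classes can be active at once. A minimal sketch of the equivalence (variable names are illustrative):

# sigmoid(x) > 0.5 holds exactly when x > 0, so both rules agree.
probs = torch.sigmoid(output)   # per-class probabilities in [0, 1]
labels = probs > 0.5            # same boolean mask as output > 0
assert torch.equal(labels, output > 0)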
point out what's wrong with it?
This one is hard and opinionated; here is what I would do, in order:

- Validate your data (the multi-label set created with sklearn); make sure inputs and targets look the way you expect.
- Start simple: Adam usually works, and its defaults can be kept. Drop the weight decay for now; use weight decay if your model overfits, and it clearly is not overfitting yet.
- A learning rate of 1 is almost certainly too high; start with something like 3e-4 or 1e-3.
- Try to overfit a single small batch (say 32 samples) down to a loss of ~0.0 (see the sketch after this list). If you cannot, your neural network may not have enough capacity, or there is a bug in your code (apart from what I mentioned above, none is visible at a glance). You should manually verify that the input and output shapes are correct and inspect the returned values (it seems the network returns the same logits for every example?).
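As a concrete version of the last point, here is a minimal sketch of the single-batch overfitting test with the earlier suggestions applied (Adam at lr=3e-4, no weight decay); the step count and batch size are illustrative, not prescriptive:

# Sanity check: a healthy network should drive the loss on one fixed,
# small batch toward ~0.0 within a few hundred steps.
model = Net().to(device)
optimizer = optim.Adam(model.parameters(), lr=3e-4)  # defaults, no weight decay

data, target = next(iter(train_loader))
data, target = data[:32].to(device), target[:32].to(device)  # one small batch

for step in range(500):
    optimizer.zero_grad()
    output = model(data.float())
    loss = F.multilabel_soft_margin_loss(output, target.float())
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print('step {}: loss {:.6f}'.format(step, loss.item()))

# If the loss stagnates even here, suspect insufficient capacity or a bug
# (for example, check whether the outputs differ across examples).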
I have used multilabel_soft_margin_loss as the pytorch docs suggest
This is the same thing as using torch.nn.BCEWithLogitsLoss, which I think is more common, but that is an addendum.
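If you want to convince yourself of that equivalence, a minimal numerical check (the random tensors are purely illustrative):

# Both losses compute the mean binary cross-entropy over all class logits,
# so their values should agree up to floating-point error.
logits = torch.randn(4, num_classes)
targets = torch.randint(0, 2, (4, num_classes)).float()

a = F.multilabel_soft_margin_loss(logits, targets)
b = nn.BCEWithLogitsLoss()(logits, targets)
print(a.item(), b.item())  # expected: (nearly) identical values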