Logistic regression model with L1 regularization

Date: 2021-01-05 08:27:35

Tags: python pytorch logistic-regression regularized

I am trying to apply L1 regularization to a logistic regression model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LogisticRegression(nn.Module):

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        # flatten each image into a 784-dim vector
        x = x.reshape(-1, 784)
        output = self.linear(x)
        return output

    def training_step(self, batch):
        images, labels = batch
        output = self(images)
        loss = F.cross_entropy(output, labels)
        acc = accuracy(output, labels)
        return {'Training_loss': loss, 'Training_acc': acc}

    def training_epoch_end(self, outputs):
        batch_losses = [x['Training_loss'] for x in outputs]
        epoch_loss = torch.stack(batch_losses).mean()
        batch_accs = [x['Training_acc'] for x in outputs]
        epoch_acc = torch.stack(batch_accs).mean()
        return {'Training_loss': epoch_loss.item(), 'Training_acc': epoch_acc.item()}

    def epoch_end(self, epoch, result):
        print("Epoch [{}], Training_loss: {:.4f}, Training_acc: {:.4f}".format(
            epoch, result['Training_loss'], result['Training_acc']))

model = LogisticRegression()
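
The snippet above assumes input_size, num_classes, and an accuracy helper are defined elsewhere; a minimal sketch of those definitions, assuming the standard MNIST setup, could be:

input_size = 784   # 28x28 MNIST images, flattened
num_classes = 10   # digits 0-9

def accuracy(outputs, labels):
    # fraction of predictions that match the labels, as a tensor
    # so it can be stacked in training_epoch_end
    _, preds = torch.max(outputs, dim=1)
    return torch.tensor(torch.sum(preds == labels).item() / len(preds))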

But I think I am doing something wrong, because the accuracy is not changing.

L1 = 0.2

def evaluate(model_b, trainloader):
    outputs = [model_b.training_step(batch) for batch in trainloader]
    return model_b.training_epoch_end(outputs)

def fit(epochs, lr, model_b, trainloader, opt_func=torch.optim.SGD):
    history = []
    optimizer = opt_func(model_b.parameters(), lr)
    for epoch in range(epochs):
        ##### Training Phase
        for batch in trainloader:
            loss = model_b.training_step(batch)['Training_loss']

            loss_Lasso = loss + 0.5 * L1  # L1 reg

            loss_Lasso.backward()
            optimizer.step()
            optimizer.zero_grad()
        result = evaluate(model_b, trainloader)
        model_b.epoch_end(epoch, result)
        history.append(result)
    return history

Can anyone help me with what I am missing and how to actually apply L1 regularization? Also, is L1 regularization also called lasso?

1 Answer:

Answer 0: (score: 1)

I believe the l1-norm is a type of lasso regularization, yes, but there are others.
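
For reference, the lasso penalty adds the sum of absolute values of the weights to the training loss, scaled by a regularization strength $\lambda$:

$$\text{loss}_{\text{lasso}} = \text{loss} + \lambda \sum_i |w_i|$$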

In your snippet, L1 is set as a constant, whereas you should be measuring the l1-norm of your model's parameters. Then sum it with your network's loss, as you did. In your example there is a single layer, so you only need the parameters of self.linear. First gather all parameters, then measure the total norm with torch.norm. You could also use nn.L1Loss.

# flatten all of the layer's parameters into one vector, then take its l1-norm
params = torch.cat([x.view(-1) for x in model.linear.parameters()])
L1 = lamb * torch.norm(params, p=1)

where lamb is your lambda regularization parameter and model is initialized from the LogisticRegression class.
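
Putting it together with the fit function from the question, a minimal sketch of the corrected training loop might look like this. The key change is that the penalty is recomputed inside the loop, so it tracks the current weights; lamb is a hypothetical hyperparameter you would tune:

lamb = 0.2  # regularization strength (illustrative value, tune as needed)

def fit_lasso(epochs, lr, model_b, trainloader, opt_func=torch.optim.SGD):
    history = []
    optimizer = opt_func(model_b.parameters(), lr)
    for epoch in range(epochs):
        for batch in trainloader:
            loss = model_b.training_step(batch)['Training_loss']

            # recompute the l1-norm of the current weights at every step
            params = torch.cat([x.view(-1) for x in model_b.linear.parameters()])
            loss_Lasso = loss + lamb * torch.norm(params, p=1)

            loss_Lasso.backward()
            optimizer.step()
            optimizer.zero_grad()
        result = evaluate(model_b, trainloader)
        model_b.epoch_end(epoch, result)
        history.append(result)
    return history

The nn.L1Loss alternative mentioned above would compute the same penalty by comparing the parameter vector against zeros, e.g. nn.L1Loss(reduction='sum')(params, torch.zeros_like(params)).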