How do I add an L2 regularization term to my loss function?

Posted: 2018-05-03 07:34:54

Tags: python pytorch

I want to compare the difference between training with and without regularization, so I want to define two custom loss functions.

My loss function with the L2 norm:

(image: the cross-entropy loss plus an L2 penalty on the weights)

###NET
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.layer3 = nn.Sequential(
            nn.Conv2d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(4))
        self.fc = nn.Linear(32 * 32 * 32, 11)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out

net = CNN()

###OPTIMIZER
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr = LR, momentum = MOMENTUM)
  

1. How do I add the L2 norm to my loss function?

     

2. If I want to write the optimization step myself (without using optim.SGD) and perform gradient descent via autograd, how do I do that?

Thanks for your help!

1 Answer:

Answer 0 (score: 1)

You can compute the norm of the weights explicitly yourself and add it to the loss.

reg = 0
for param in net.parameters():  # iterate over the instance, not the CNN class
    reg += 0.5 * (param ** 2).sum()  # replace with param.abs().sum() for L1 regularization
loss = criterion(net(x), y) + reg_lambda * reg  # make the regularization part of the loss
loss.backward()  # continue as usual
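For plain L2 regularization you can also skip the manual sum entirely: PyTorch's built-in optimizers accept a `weight_decay` argument that adds an L2 penalty term to every gradient update, which for vanilla SGD is equivalent to adding 0.5 * weight_decay * ||w||^2 to the loss. A minimal sketch (the model, data, and hyperparameter values here are placeholders, not the asker's actual setup):

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 2)  # stand-in for the CNN above
criterion = nn.CrossEntropyLoss()

# weight_decay adds lambda * param to each parameter's gradient
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

This regularizes every parameter, including biases; compute the penalty manually, as above, if you want to exclude some parameters.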

For more information, see this thread.
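As for the second question: you can let autograd compute the gradients with `loss.backward()` and then apply the SGD update rule yourself, without any optimizer object. A minimal sketch under assumed placeholder data and hyperparameters (`lr`, `reg_lambda` are illustrative values):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the CNN above
criterion = nn.CrossEntropyLoss()
lr, reg_lambda = 0.1, 1e-3  # hypothetical hyperparameters

x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

for step in range(100):
    # L2-regularized loss, written out by hand
    reg = sum(0.5 * (p ** 2).sum() for p in model.parameters())
    loss = criterion(model(x), y) + reg_lambda * reg

    # autograd fills in p.grad for every parameter...
    model.zero_grad()
    loss.backward()

    # ...and we apply the vanilla SGD update ourselves,
    # outside the autograd graph
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
```

The `torch.no_grad()` block matters: without it the in-place parameter update would itself be recorded by autograd and raise an error for leaf tensors that require gradients.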