Parameter-specific learning rates in PyTorch

Date: 2019-11-24 01:28:31

Tags: deep-learning pytorch gradient-descent

How can I set the learning rate for each specific parameter (weights and biases) in a network?

In PyTorch's docs, I found this:

optim.SGD([{'params': model.base.parameters()}, 
           {'params': model.classifier.parameters(), 'lr': 1e-3}], 
           lr=1e-2, momentum=0.9)

where model.classifier.parameters(), which defines one group of parameters, gets a specific learning rate of 1e-3, while the remaining parameters use the default lr=1e-2.
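
(For reference, here is a minimal self-contained version of that pattern; ToyModel is a made-up stand-in for whatever model the docs assume, i.e. anything with base and classifier submodules:)

import torch
import torch.nn as nn
import torch.optim as optim

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.base = nn.Linear(4, 8)        # gets the default lr=1e-2
        self.classifier = nn.Linear(8, 2)  # gets its own lr=1e-3

    def forward(self, x):
        return self.classifier(torch.relu(self.base(x)))

model = ToyModel()
optimizer = optim.SGD([{'params': model.base.parameters()},
                       {'params': model.classifier.parameters(), 'lr': 1e-3}],
                      lr=1e-2, momentum=0.9)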

But how can I take this down to the level of individual parameters?

1 answer:

Answer 0 (score: 2):

You can set parameter-specific learning rates by using the parameter names.

For the following network, taken from the PyTorch forum:

import torch
import torch.nn as nn
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Two 1-in/1-out linear layers; all weights and biases are
        # initialized to 1 so the updates are easy to follow.
        self.layer1 = nn.Linear(1, 1)
        self.layer1.weight.data.fill_(1)
        self.layer1.bias.data.fill_(1)
        self.layer2 = nn.Linear(1, 1)
        self.layer2.weight.data.fill_(1)
        self.layer2.bias.data.fill_(1)

    def forward(self, x):
        x = self.layer1(x)
        return self.layer2(x)

net = Net()
for name, param in net.named_parameters():
    print(name)

The parameters are:

layer1.weight
layer1.bias
layer2.weight
layer2.bias

Then you can use the parameter names to set their specific learning rates like this:

optimizer = optim.Adam([
            {'params': net.layer1.weight},               # uses the default lr=0.1
            {'params': net.layer1.bias, 'lr': 0.01},
            {'params': net.layer2.weight, 'lr': 0.001}
        ], lr=0.1, weight_decay=0.0001)                  # note: net.layer2.bias is not passed in
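
Equivalently, you can build the parameter groups programmatically from named_parameters() instead of listing them by hand. A minimal sketch, where lr_map is a hypothetical dict from parameter name to learning rate (unlike the hand-written version above, this loop also includes net.layer2.bias, at the default learning rate):

lr_map = {'layer1.bias': 0.01, 'layer2.weight': 0.001}

param_groups = []
for name, param in net.named_parameters():
    group = {'params': [param]}
    if name in lr_map:
        group['lr'] = lr_map[name]   # parameter-specific learning rate
    param_groups.append(group)       # others fall back to the default lr

optimizer = optim.Adam(param_groups, lr=0.1, weight_decay=0.0001)

Sticking with the hand-written optimizer, run one forward/backward/step and inspect the parameters: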

out = net(torch.Tensor([[1]]))
out.backward()
optimizer.step()
print("weight", net.layer1.weight.data.numpy(), "grad", net.layer1.weight.grad.data.numpy())
print("bias", net.layer1.bias.data.numpy(), "grad", net.layer1.bias.grad.data.numpy())
print("weight", net.layer2.weight.data.numpy(), "grad", net.layer2.weight.grad.data.numpy())
print("bias", net.layer2.bias.data.numpy(), "grad", net.layer2.bias.grad.data.numpy())

Output:

weight [[0.9]] grad [[1.0001]]
bias [0.99] grad [1.0001]
weight [[0.999]] grad [[2.0001]]
bias [1.] grad [1.]
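
Note that layer2.bias still gets a gradient (backward() computes gradients for every parameter), but its value stays at 1.0 because it was never passed to the optimizer, so optimizer.step() never updates it.

Each dict passed to the optimizer becomes one entry in optimizer.param_groups, so you can also inspect or change the per-parameter learning rates after construction:

# The three groups above carry lr 0.1 (default), 0.01, and 0.001.
for i, group in enumerate(optimizer.param_groups):
    print(i, group['lr'])

# e.g. halve the learning rate of the layer1.weight group at runtime
optimizer.param_groups[0]['lr'] *= 0.5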