How can I efficiently compute per-example gradients over a large dataset in PyTorch?

Time: 2019-08-06 17:00:52

Tags: neural-network pytorch gradient

Given a trained model (M), I am interested in computing the utility of new (unseen) examples from a pool (for an active learning task). To do this, I need to compute the magnitude of the gradient that results from training M on each new example (layer below is the layer whose gradient norm I track). In code, it looks something like this:

losses, grads = [], []
for i in range(X_pool.shape[0]):
    pred = model(X_pool[i:i+1])
    loss = loss_func(pred, y_pool[i:i+1])

    model.zero_grad()
    loss.backward()

    losses.append(loss)
    grads.append(layer.weight.grad.norm())

However, this is very slow when there are many examples, especially since this will be the inner loop in my setting. Is there a more efficient way to do this in PyTorch?

1 Answer:

Answer 0 (score: 0)

Based on the code, it looks like you only care about the gradient of a single layer of the model. You can split that layer into multiple copies, each of which processes only one element of the batch. That way the gradient for that layer is computed per sample, while batching is still used everywhere else.

Below is a fairly complete example comparing your approach (method1) with the approach I am suggesting (method2). It should extend easily to more complex networks.

import torch
import torch.nn as nn
import copy

batch_size = 50
num_classes = 10

class SimpleModel(nn.Module):
    def __init__(self, num_classes):
        super(SimpleModel, self).__init__()
        # input 3x10x10
        self.conv1 = nn.Conv2d(3, 10, kernel_size=3, padding=1, bias=False)
        # 10x10x10
        self.conv2 = nn.Conv2d(10, 20, kernel_size=3, stride=2, padding=1, bias=False)
        # 20x5x5
        self.fc = nn.Linear(20*5*5, num_classes, bias=False)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.shape[0], -1)
        return self.fc(x)

def method1(model, X_pool, y_pool):
    loss_func = nn.CrossEntropyLoss()
    layer = model.conv2

    losses, grads = [], []
    for i in range(X_pool.shape[0]):
        pred = model(X_pool[i:i+1])
        loss = loss_func(pred, y_pool[i:i+1])

        model.zero_grad()
        loss.backward()

        losses.append(loss)
        grads.append(layer.weight.grad.norm())
    return losses, grads


def method2(model, X_pool, y_pool):
    class Replicated(nn.Module):
        """ Instead of running a batch through one layer, run individuals through copies of layer """
        def __init__(self, layer, batch_size):
            super(Replicated, self).__init__()
            self.batch_size = batch_size
            self.layers = [copy.deepcopy(layer) for _ in range(batch_size)]

        def forward(self, x):
            assert x.shape[0] <= self.batch_size
            # run each sample through its own copy; each slice keeps its batch dim of 1, so stack
            # yields shape (N, 1, ...), which the view() in SimpleModel.forward flattens away
            # (torch.cat along dim 0 would give the usual (N, ...) output shape instead)
            return torch.stack([self.layers[idx](x[idx:idx+1, :]) for idx in range(x.shape[0])])

    # compute individual loss functions so we can return them
    loss_func = nn.CrossEntropyLoss(reduction='none')

    # replace layer in model with replicated layer
    layer = model.conv2
    model.conv2 = Replicated(layer, batch_size)
    layers = model.conv2.layers

    # batch of predictions
    pred = model(X_pool)
    losses = loss_func(pred, y_pool)
    # reduce with sum so that the individual loss terms aren't scaled (like with mean) which would also scale the gradients
    loss = torch.sum(losses)
    model.zero_grad()
    loss.backward()
    # each copy saw exactly one sample, so its weight.grad is that sample's gradient (no rescaling needed)
    grads = [layers[idx].weight.grad.norm() for idx in range(X_pool.shape[0])]

    # convert to list of tensors to match method1 output
    losses = [l for l in losses]

    # put original layer back
    model.conv2 = layer
    return losses, grads


model = SimpleModel(num_classes)
X_pool = torch.rand(batch_size, 3, 10, 10)
y_pool = torch.randint(0, num_classes, (batch_size,))

losses2, grads2 = method2(model, X_pool, y_pool)
losses1, grads1 = method1(model, X_pool, y_pool)

print("Losses Diff:", sum([abs(l1.item()-l2.item()) for l1,l2 in zip(losses1, losses2)]))
print("Grads Diff:", sum([abs(g1.item()-g2.item()) for g1,g2 in zip(grads1, grads2)]))

The numerical differences between the two methods are just floating-point error.

Losses Diff: 3.337860107421875e-06
Grads Diff: 1.9431114196777344e-05
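
One detail worth noting about the Replicated wrapper above: the copies live in a plain Python list, so PyTorch does not register them as submodules. That is harmless in this script (the copies start with fresh gradients and everything stays on CPU), but if you want model.to(device), model.zero_grad(), or model.parameters() to reach the copies, the usual fix is to hold them in an nn.ModuleList. A minimal sketch of that change, reusing the imports above:

class Replicated(nn.Module):
    def __init__(self, layer, batch_size):
        super(Replicated, self).__init__()
        self.batch_size = batch_size
        # nn.ModuleList registers each copy, so .to(device)/zero_grad()/parameters() see them
        self.layers = nn.ModuleList([copy.deepcopy(layer) for _ in range(batch_size)])

    def forward(self, x):
        assert x.shape[0] <= self.batch_size
        return torch.stack([self.layers[idx](x[idx:idx+1, :]) for idx in range(x.shape[0])])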

I have not tested this on larger networks, but I did experiment with batch_size and ran several batches through the network, and I saw a 2-3x speedup with this simple model. It should matter even more for more complex models, since everything except the replicated layer keeps the performance benefit of batching.
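
If you want to check the speedup on your own model and hardware, a rough wall-clock comparison along these lines is enough (it reuses model, X_pool, y_pool, method1, and method2 from the script above; the exact numbers will of course vary by machine):

import time

# crude timing: run each method a few times and compare total wall-clock time
for name, fn in [("method1", method1), ("method2", method2)]:
    start = time.perf_counter()
    for _ in range(10):
        fn(model, X_pool, y_pool)
    print(name, "took", round(time.perf_counter() - start, 3), "seconds")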

Caveat: this may not work with DataParallel.
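
For the active-learning use case in the question, the pool will usually be larger than batch_size, so you would score it in chunks. A minimal sketch, assuming hypothetical tensors X_pool_full and y_pool_full hold the whole candidate pool, and reusing method2 and batch_size from above:

# X_pool_full / y_pool_full are hypothetical stand-ins for your full candidate pool
all_losses, all_grads = [], []
for start in range(0, X_pool_full.shape[0], batch_size):
    xb = X_pool_full[start:start + batch_size]
    yb = y_pool_full[start:start + batch_size]
    # the Replicated wrapper accepts chunks up to batch_size, so a smaller final chunk is fine
    chunk_losses, chunk_grads = method2(model, xb, yb)
    all_losses.extend(l.item() for l in chunk_losses)
    all_grads.extend(g.item() for g in chunk_grads)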