PyTorch RuntimeError: CUDA out of memory. Tried to allocate 14.12 GiB

Asked: 2020-07-21 08:34:46

Tags: pytorch

I am getting a CUDA out of memory error for a simple fully connected layer model. I have already tried torch.cuda.empty_cache() and gc.collect(), deleted unnecessary variables with del, and tried reducing the batch size, but the error is not resolved. Moreover, the error only appears for the SUN dataset, which is evaluated with 1440 test images; the same code runs fine for the AWA2 dataset with 7913 test images. I am using Google Colab here, and I have also tried it on an RTX 2060. Here is the code snippet where the error occurs:

def euclidean_dist(x, y):
    # x: N x D
    # y: M x D
    torch.cuda.empty_cache()
    n = x.size(0)
    m = y.size(0)
    d = x.size(1)
    assert d == y.size(1)
    x = x.unsqueeze(1).expand(n, m, d)
    y = y.unsqueeze(0).expand(n, m, d)
    del n,m,d
    return torch.pow(x - y, 2).sum(2)

def compute_accuracy(test_att, test_visual, test_id, test_label):
    global s2v
    s2v.eval()
    with torch.no_grad():
        test_att = Variable(torch.from_numpy(test_att).float().to(device))
        test_visual = Variable(torch.from_numpy(test_visual).float().to(device))
        outpre = s2v(test_att, test_visual)
        del test_att, test_visual
        outpre = torch.argmax(torch.softmax(outpre, dim=1), dim=1)
    
    outpre = test_id[outpre.cpu().data.numpy()]
    
    #compute averaged per class accuracy
    test_label = np.squeeze(np.asarray(test_label))
    test_label = test_label.astype("float32")
    unique_labels = np.unique(test_label)
    acc = 0
    for l in unique_labels:
        idx = np.nonzero(test_label == l)[0]
        acc += accuracy_score(test_label[idx], outpre[idx])
    acc = acc / unique_labels.shape[0]
    return acc   

The error is:

Traceback (most recent call last):
  File "GBU_new_v2.py", line 234, in <module>
    acc_seen_gzsl = compute_accuracy(attribute, x_test_seen, np.arange(len(attribute)), test_label_seen)
  File "GBU_new_v2.py", line 111, in compute_accuracy
    outpre = s2v(test_att, test_visual)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "GBU_new_v2.py", line 80, in forward
    a1 = euclidean_dist(feat, a1)
  File "GBU_new_v2.py", line 62, in euclidean_dist
    return torch.pow(x - y, 2).sum(2)#.sqrt() # return: N x M
RuntimeError: CUDA out of memory. Tried to allocate 14.12 GiB (GPU 0; 15.90 GiB total capacity; 14.19 GiB already allocated; 669.88 MiB free; 14.55 GiB reserved in total by PyTorch)
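
For context, the traceback points at the broadcasted subtraction inside euclidean_dist: x.unsqueeze(1).expand(n, m, d) and y.unsqueeze(0).expand(n, m, d) are only views, but x - y materializes a full n x m x d float tensor (and torch.pow allocates another one of the same size), which is where the 14.12 GiB request comes from. Below is a minimal sketch of a chunked variant that bounds that intermediate; it is an illustration only, not code from the question or the answer, and chunk_size is an arbitrary speed/memory trade-off:

import torch

def euclidean_dist_chunked(x, y, chunk_size=128):
    # x: N x D, y: M x D -> N x M matrix of squared Euclidean distances.
    # Each step materializes at most chunk_size x M x D elements instead of N x M x D.
    assert x.size(1) == y.size(1)
    out = []
    for start in range(0, x.size(0), chunk_size):
        xc = x[start:start + chunk_size]          # chunk_size x D
        diff = xc.unsqueeze(1) - y.unsqueeze(0)   # broadcast to chunk_size x M x D
        out.append(diff.pow(2).sum(2))            # chunk_size x M
    return torch.cat(out, dim=0)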

1 Answer:

Answer 0 (score: 2)

It seems you have defined batches only for training, while during testing you try to process the entire test set at once.
You should split the test set into smaller batches, evaluate one batch at a time, combine the scores over all batches, and finally report a single score for the model.
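
A minimal sketch of that batched evaluation, built on the compute_accuracy from the question. It assumes s2v, device, and the data layout are as in the question; the batch size of 64 is an arbitrary choice, and the softmax is dropped because argmax over the logits gives the same predictions. Only one batch of visual features is moved to the GPU at a time, so peak memory is bounded by the batch size rather than by the full test set.

import numpy as np
import torch
from sklearn.metrics import accuracy_score

def compute_accuracy_batched(test_att, test_visual, test_id, test_label, batch_size=64):
    global s2v
    s2v.eval()
    preds = []
    with torch.no_grad():
        # the class attributes are small and stay on the GPU for every batch
        test_att = torch.from_numpy(test_att).float().to(device)
        for start in range(0, test_visual.shape[0], batch_size):
            # move only one batch of visual features to the GPU at a time
            chunk = torch.from_numpy(test_visual[start:start + batch_size]).float().to(device)
            out = s2v(test_att, chunk)
            # argmax over the logits equals argmax over the softmax probabilities
            preds.append(torch.argmax(out, dim=1).cpu())
            del chunk, out
    outpre = test_id[torch.cat(preds).numpy()]

    # averaged per-class accuracy, unchanged from the question
    test_label = np.squeeze(np.asarray(test_label)).astype("float32")
    unique_labels = np.unique(test_label)
    acc = 0
    for l in unique_labels:
        idx = np.nonzero(test_label == l)[0]
        acc += accuracy_score(test_label[idx], outpre[idx])
    return acc / unique_labels.shape[0]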