RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 10.92 GiB total capacity; ... 9.06 GiB reserved in total by PyTorch)

Asked: 2020-04-16 07:13:39

Tags: pytorch gpu nvidia


If I run the same script on a smaller GPU, it shows 7.80 GiB total capacity and 6.20 GiB reserved in total by PyTorch. How does reservation work in PyTorch, and why does the reserved memory change with the size of the GPU?

To resolve the error message RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 10.92 GiB total capacity; 9.02 GiB already allocated; 1.29 GiB free; 9.06 GiB reserved in total by PyTorch), I tried reducing the batch size from 10 to 5 to 3. I tried deleting unused tensors with del x_train1, and I also called torch.cuda.empty_cache(). I also used with torch.no_grad() both when applying the pretrained model and when training and validating the new model. But none of these worked.
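The batch-splitting part of what I tried can be sketched like this (a pure-Python outline; the helper name chunks is mine, and the data here is a stand-in for my actual training examples):

```python
def chunks(seq, size):
    """Yield successive fixed-size slices of seq; the last slice may be shorter."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

# Example: splitting 10 training examples into batches of 3 instead of
# feeding everything to bert_model at once.
batches = list(chunks(list(range(10)), 3))
# → [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

Each batch would then be passed to the model separately, with the results moved to CPU before the next batch.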

Here is the traceback; the failure occurs at the line

x_train1 = bert_model(train_indices)[2]

and the nvidia-smi output follows it:

cuda:0
    x_train1 = bert_model(train_indices)[2]  # Models outputs are tuples
  File "/home/kosimadukwe/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/kosimadukwe/miniconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 783, in forward
    input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
  File "/home/kosimadukwe/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/kosimadukwe/miniconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 177, in forward
    embeddings = inputs_embeds + position_embeddings + token_type_embeddings
RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 10.92 GiB total capacity; 9.02 GiB already allocated; 1.29 GiB free; 9.06 GiB reserved in total by PyTorch)
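For context on where a 2.86 GiB allocation can come from: bert_model(...)[2] returns the hidden states of all 13 layers of bert-base, each of shape batch × seq_len × 768. This is my own back-of-envelope estimate, assuming float32 and a sequence length of 512; the batch size of 150 is a guess that happens to reproduce the reported number:

```python
def hidden_state_bytes(batch, seq_len, hidden=768, layers=13, bytes_per_float=4):
    """Rough float32 size of the full hidden-state stack from bert-base."""
    return batch * seq_len * hidden * layers * bytes_per_float

gib = hidden_state_bytes(batch=150, seq_len=512) / 2**30
# ≈ 2.86 GiB -- the same ballpark as the allocation in the error message
```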

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.36       Driver Version: 440.36       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:3B:00.0 Off |                  N/A |
| 54%   79C    P2   233W / 250W |   8613MiB / 11178MiB |    100%   E. Process |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:AF:00.0 Off |                  N/A |
| 58%   79C    P2   247W / 250W |   4545MiB / 11178MiB |      0%   E. Process |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:D8:00.0 Off |                  N/A |
| 23%   29C    P0    56W / 250W |      0MiB / 11178MiB |      2%   E. Process |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0   1025219      C   /usr/pkg/bin/python3.8                      8601MiB |
|    1   1024440      C   /usr/pkg/bin/python3.8                      4535MiB |
+-----------------------------------------------------------------------------+
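Since GPU 2 in the table above is completely idle (0 MiB used), one thing I am also considering is pinning the script to it before CUDA is initialized (a sketch; this has to run before the first CUDA call, e.g. before any torch code touches the GPU):

```python
import os

# Restrict the process to physical GPU 2; torch will then see it as cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
```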

0 Answers:

No answers yet.