PyTorch: CUDA out of memory during evaluation

Asked: 2019-12-29 18:55:55

Tags: deep-learning pytorch


I am trying to extract VGG19 features at the last convolutional layer (shape: (frames, 49, 512)).
Here is my code:

    deep_net = models.vgg19(pretrained=True).to('cuda:0,1,2,3')
    deep_net.eval()
    for idx_time, f1 in tqdm(enumerate(dics)):
        input_image = torch.zeros(num_sample, 3, 224, 224)
        output_feat = np.zeros(shape=[num_sample, 49, 512])

        with torch.no_grad():
            for i, idx in enumerate(indices):
                im = default_loader(os.path.join(root_folder, f1, 'frame' + str(idx) + '.jpg'))
                im = transform(im)
                input_image[i, :, :] = im

            input_image = input_image.to('cuda:0,1,2,3')
            output_feat = deep_net.features(input_image).view(num_sample, 512, 49).transpose(1, 2)
        np.save(save_file_sample_path, output_feat.cpu())


But I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 6.10 GiB (GPU 0; 10.92 GiB total capacity; 6.92 GiB already allocated; 587.81 MiB free; 2.85 GiB cached)


at the following line:

output_feat = deep_net.features(input_image).view(num_sample, 512, 49).transpose(1, 2)
     


Any idea why this happens even though I set the model to evaluation mode and use no_grad()?
Thanks
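
For context, the usual way to keep the forward pass within GPU memory here is to feed the frames through the network in smaller chunks rather than as one batch of num_sample images, since even under no_grad() the intermediate activations of the whole batch still have to fit on the device. A minimal sketch of that idea is below; the chunk_size value and the single cuda:0 device are illustrative assumptions rather than part of my original code, while input_image and save_file_sample_path are the same names as above:

    import numpy as np
    import torch
    from torchvision import models

    device = torch.device('cuda:0')   # assumption: a single GPU for this sketch
    deep_net = models.vgg19(pretrained=True).to(device)
    deep_net.eval()

    chunk_size = 32                   # assumed chunk size; tune to fit GPU memory

    with torch.no_grad():
        feats = []
        # input_image is the (num_sample, 3, 224, 224) tensor built as above
        for start in range(0, input_image.size(0), chunk_size):
            batch = input_image[start:start + chunk_size].to(device)
            out = deep_net.features(batch)                         # (b, 512, 7, 7)
            out = out.view(out.size(0), 512, 49).transpose(1, 2)   # (b, 49, 512)
            feats.append(out.cpu())           # move each chunk off the GPU right away
        output_feat = torch.cat(feats, dim=0)                      # (num_sample, 49, 512)

    np.save(save_file_sample_path, output_feat.numpy())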

0 Answers:

No answers yet.