PyTorch "CUDA out of memory" after only a single training pass

Asked: 2019-05-09 03:58:06

Tags: python deep-learning pytorch

I am currently trying to implement a network similar to U-Net, and I am running into some strange behaviour when loading data onto the GPU. Specifically, when I move the model to the GPU with
model = CNN(NUM_LAYERS)
model.to(DEVICE)

the memory usage on the GPU rises to 1063 MiB. This tells me that my model takes roughly 1 GB of memory. So far, so good. Then, with

inst = next(iter(dataloader))
input_tensor = inst[0].to(DEVICE, dtype=torch.float)

I load one batch of training data onto the GPU. The total GPU memory usage rises to 1093 MiB, which seems about right and tells me that this batch of 8 training images is roughly 30 MB. Now I run

output = model(input_tensor)

and the memory usage jumps to 21123 MiB! If I change the batch size to something larger, e.g. 16, I get CUDA out of memory (GPU 1; 31.72 GiB total capacity; 29.91 GiB already allocated; 359.75 MiB free; 402.84 MiB cached).
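
For reference, the raw size of one batch can also be computed directly from the tensor itself (a quick sketch on top of the input_tensor from above):

# total bytes occupied by the batch tensor, converted to MiB
batch_mib = input_tensor.element_size() * input_tensor.nelement() / 2**20
print("input batch: %.1f MiB" % batch_mib)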

I have tried to figure out what exactly happens when the tensor is fed into the model, but I cannot see why the GPU memory would suddenly increase like this. Am I right in thinking that at this point there should be exactly three things sitting on the GPU: the model, the input tensor and the output tensor?
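
In case it helps, this is roughly how I would double-check the individual contributions from inside the script (a minimal sketch, not part of my actual training code; it reuses the model, input_tensor and DEVICE from above):

import torch

# size of the parameters and buffers held by the model (should roughly match the ~1 GB above)
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
buffer_bytes = sum(b.numel() * b.element_size() for b in model.buffers())
print("model: %.1f MiB" % ((param_bytes + buffer_bytes) / 2**20))

# memory PyTorch has actually allocated before and after the forward pass
before = torch.cuda.memory_allocated(DEVICE)
output = model(input_tensor)
after = torch.cuda.memory_allocated(DEVICE)
print("forward pass allocated: %.1f MiB" % ((after - before) / 2**20))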

The data loader looks like this:

import cv2 as cv
import numpy as np
import torch
from torch.utils.data import Dataset
from torchvision import transforms

class AsyncCNNData(Dataset):
    def __init__(self, list_paths, resize_dims=None, transform=None, device='cpu'):
        'Initialization'
        super().__init__()

        self.list_paths = list_paths
        self.transform = transform
        self.to_tensor = transforms.ToTensor()
        self.device = device
        self.resize_dims = resize_dims

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.list_paths)

    def __getitem__(self, index):
        'Generates one sample of data'
        # Select sample
        sample_path = self.list_paths[index]

        # Load data and get label
        fs = cv.FileStorage(sample_path, cv.FILE_STORAGE_READ)

        cti_list = []
        for i in range(NUM_LAYERS):
            img_name = "layer_{0:03d}".format(i)
            img = (fs.getNode(img_name)).mat()
            diff = np.subtract(self.resize_dims, img.shape)/2
            img = cv.copyMakeBorder(img, int(diff[0]), int(diff[0]), int(diff[1]), int(diff[1]), cv.BORDER_CONSTANT, None, 1)
            cti_list.append(img)

        # stack the NUM_LAYERS images and add a leading channel dimension -> (1, NUM_LAYERS, H, W)
        input_tensor = np.stack(cti_list)
        input_tensor = input_tensor[np.newaxis, :, :, :]

        img = (fs.getNode("frame")).mat()
        diff = np.subtract(self.resize_dims, img.shape)/2
        img = cv.copyMakeBorder(img, int(diff[0]), int(diff[0]), int(diff[1]), int(diff[1]), cv.BORDER_CONSTANT, None, 1)
        img = img[:, :, np.newaxis]
        output_tensor = self.to_tensor(img)

        if self.transform is not None:
            input_tensor = self.transform(input_tensor)
        else:
            input_tensor = torch.from_numpy(input_tensor)

        return input_tensor, output_tensor, sample_path

import pandas as pd
from torch.utils.data import DataLoader, random_split

yaml_files = pd.read_csv(DATA_PATH).values.flatten().tolist()
data = AsyncCNNData(yaml_files, device=DEVICE, resize_dims=RESIZE)

train_size = int(len(data)*TRAIN_RATIO)
data_train, data_test = random_split(data, [train_size, len(data)-train_size])
dataloader = DataLoader(data_train, batch_size=BATCH_SIZE, shuffle=True)
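
For completeness, this is roughly what the shapes coming out of that loader look like (a quick sketch; NUM_LAYERS, RESIZE and BATCH_SIZE are constants from my config):

inp, target, paths = next(iter(dataloader))
# inp: (BATCH_SIZE, 1, NUM_LAYERS, H, W) because of the extra np.newaxis in __getitem__
# target: (BATCH_SIZE, 1, H, W) from ToTensor on the single-channel frame
print(inp.shape, inp.dtype)
print(target.shape, target.dtype)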

Thanks a lot!

0 Answers:

There are no answers yet.