PyTorch not using the GPU even though it is detected

Date: 2018-10-29 10:39:49

Tags: python gpu pytorch

I'm running a Jupyter notebook as a server on a Windows 10 machine and have been doing some training runs on it.

I have CUDA 9.0 and cuDNN properly installed, and Python detects the GPU. This is what I get at the Anaconda prompt:

>>> torch.cuda.get_device_name(0)
'GeForce GTX 1070'
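A few more read-only checks can confirm that the installed PyTorch is actually a CUDA build and that the device is visible; this is a minimal sketch, assuming nothing beyond import torch:

import torch

print(torch.__version__)          # make sure this is a CUDA build, not CPU-only
print(torch.version.cuda)         # CUDA version PyTorch was compiled against (None on CPU builds)
print(torch.cuda.is_available())  # should be True
print(torch.cuda.device_count())  # should be >= 1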

I have also placed both the model and the tensors on CUDA via .cuda():

import torch
from torch.utils.data import DataLoader
from tqdm import tqdm_notebook
from sklearn.metrics import mean_squared_error

model = LogPPredictor(1, 58, 64, 128, 1, 'gsc')

if torch.cuda.is_available():
    torch.set_default_tensor_type(torch.cuda.DoubleTensor)
    model.cuda()
else:
    torch.set_default_tensor_type(torch.FloatTensor)

list_train_loss = list()
list_val_loss = list()
acc = 0
mse = 0

optimizer = args.optim(model.parameters(),
                       lr=args.lr,
                       weight_decay=args.l2_coef)

data_train = DataLoader(args.dict_partition['train'], 
                        batch_size=args.batch_size,
                        pin_memory=True,
                        shuffle=args.shuffle)

data_val = DataLoader(args.dict_partition['val'],
                     batch_size=args.batch_size,
                     pin_memory=True,
                     shuffle=args.shuffle)
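# pin_memory=True keeps each batch in page-locked host memory, which speeds up
# the host-to-device copies done by .cuda() below (and would allow asynchronous
# copies via .cuda(non_blocking=True))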

for epoch in tqdm_notebook(range(args.epoch), desc='Epoch'):
    model.train()
    epoch_train_loss = 0
    for i, batch in enumerate(data_train):
        list_feature = torch.tensor(batch[0]).cuda()
        list_adj = torch.tensor(batch[1]).cuda()
        list_logP = torch.tensor(batch[2]).cuda()
        list_logP = list_logP.view(-1,1)

        optimizer.zero_grad()
        list_pred_logP = model(list_feature, list_adj)
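        # note: 'require_grad' (missing the trailing 's') is not a real tensor
        # attribute, so the next line silently has no effect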
        list_pred_logP.require_grad = False
        train_loss = args.criterion(list_pred_logP, list_logP)
        epoch_train_loss += train_loss.item()
        train_loss.backward()
        optimizer.step()

    list_train_loss.append(epoch_train_loss/len(data_train))

    model.eval()
    epoch_val_loss = 0
    with torch.no_grad():
        for i, batch in enumerate(data_val):
            list_feature = torch.tensor(batch[0]).cuda()
            list_adj = torch.tensor(batch[1]).cuda()
            list_logP = torch.tensor(batch[2]).cuda()
            list_logP = list_logP.view(-1,1)


            list_pred_logP = model(list_feature, list_adj)
            val_loss = args.criterion(list_pred_logP, list_logP)
            epoch_val_loss += val_loss.item()

    list_val_loss.append(epoch_val_loss/len(data_val))

data_test = DataLoader(args.dict_partition['test'],
                   batch_size=args.batch_size,
                   pin_memory=True,
                   shuffle=args.shuffle)

model.eval()
with torch.no_grad():
    logP_total = list()
    pred_logP_total = list()
    for i, batch in enumerate(data_test):
        list_feature = torch.tensor(batch[0]).cuda()
        list_adj = torch.tensor(batch[1]).cuda()
        list_logP = torch.tensor(batch[2]).cuda()
        logP_total += list_logP.tolist()
        list_logP = list_logP.view(-1,1)

        # predict inside the loop so every batch contributes to the MSE
        list_pred_logP = model(list_feature, list_adj)
        pred_logP_total += list_pred_logP.tolist()

mse = mean_squared_error(logP_total, pred_logP_total)

But in the Windows Task Manager, whenever I start training, only the CPU usage goes up to about 25% while the GPU usage stays at 0%. How can I fix this?
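Note that Task Manager's default GPU graphs track the graphics/3D engines, so CUDA compute may show as 0% there; nvidia-smi reports CUDA utilization directly. Independent of either tool, a quick runtime check that the work really lands on the GPU is to inspect the devices; a minimal sketch, where model and list_feature refer to the objects above:

print(next(model.parameters()).is_cuda)   # True once model.cuda() has run
print(list_feature.device)                # should print cuda:0 inside the loop
print(torch.cuda.memory_allocated(0))     # bytes currently allocated on GPU 0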

1 Answer:

Answer (score: 0):

I had a similar kind of issue using PyTorch on CUDA. After searching for possible solutions, I found the following post by Soumith himself, which I found very helpful.

https://discuss.pytorch.org/t/gpu-supposed-to-be-used-but-isnt/2883

The bottom line is that, at least in my case, I could not put enough load on the GPU; there was a bottleneck elsewhere in my application. Try another example, or increase the batch size, and it should be fine.
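For what it's worth, one way to tell whether the GPU itself is the limiting factor is to run a purely synthetic load and watch utilization in nvidia-smi; a minimal sketch (the matrix size and iteration count are arbitrary):

import torch

# the values are irrelevant; repeated large matmuls just keep the GPU busy
x = torch.randn(4096, 4096, device='cuda')
for _ in range(200):
    x = x @ x
    x = x / x.norm()              # keep the numbers from overflowing
torch.cuda.synchronize()          # wait for all queued kernels to finish

If this drives utilization up while the training loop does not, the training loop is likely bottlenecked on the CPU side (for example, data loading and per-batch tensor construction) rather than on the GPU.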