I am trying out the code from this notebook:
https://github.com/Dannynis/xvector_pytorch/blob/master/xvector%20-%20gpu.ipynb
In that code, I changed the DataLoader to:
trainloader = torch.utils.data.DataLoader(list(zip(X,y)), shuffle=True, pin_memory=True, batch_size=256, num_workers=8)
and each training step takes a very long time (~23 s):

loss: 1.8956436561879668 step took 23.29119372367859

I suspect it is related to the code below — is copying a batch of 256 vectors to CUDA really taking that long?
for i, data in enumerate(trainloader, 0):
    a = time.time()
    # get the inputs; data is a list of [inputs, labels]
    inputs, labels = data
    inputs, labels = inputs.cuda(), labels.cuda()
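One thing I am not sure about: CUDA calls are asynchronous, so timing with `time.time()` right after `.cuda()` may not measure the copy itself. A sketch of how I could time the host-to-device copy in isolation (the batch shape here is a placeholder, not the notebook's actual feature size):

```python
import time
import torch

def timed_copy(x):
    """Time a host-to-device copy; falls back to CPU if no GPU is present."""
    if not torch.cuda.is_available():
        return x, 0.0
    torch.cuda.synchronize()              # drain any pending GPU work first
    t0 = time.time()
    x_gpu = x.cuda(non_blocking=True)
    torch.cuda.synchronize()              # wait for the async copy to finish
    return x_gpu, time.time() - t0

batch = torch.randn(256, 512)             # placeholder batch shape
moved, elapsed = timed_copy(batch)
print(f"copy took {elapsed:.4f}s")
```

If this number is small, the time is presumably going into the DataLoader itself (collation or worker startup) rather than the CUDA transfer.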
I would like to know: is there anything I can do to speed up the DataLoader?
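One change I am considering: `list(zip(X, y))` makes the DataLoader convert each numpy sample to a tensor and collate it in Python on every batch. Building a `TensorDataset` from tensors converted once up front might avoid that. A sketch with placeholder data (`X_np`/`y_np` are assumptions standing in for the notebook's `X` and `y`):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data; in the notebook X is a numpy feature array and y labels.
X_np = np.random.randn(1024, 40).astype(np.float32)
y_np = np.random.randint(0, 8, size=1024)

# Convert once, up front, instead of per sample inside the loader.
dataset = TensorDataset(torch.from_numpy(X_np), torch.from_numpy(y_np))

trainloader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    pin_memory=True,   # pinned host memory enables async host-to-device copies
    num_workers=0,     # in-memory tensors may not need 8 workers at all
)

device = "cuda" if torch.cuda.is_available() else "cpu"
for inputs, labels in trainloader:
    # non_blocking only overlaps the copy when the source is pinned,
    # but it is harmless otherwise
    inputs = inputs.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
```

Would this kind of change be the right direction, or is the bottleneck likely elsewhere?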