I'm just getting started with PyTorch ...
Currently, I'm using the following approach to send training data to the GPU. Source: https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel
```python
# Loop over epochs
for epoch in range(max_epochs):
    # Training
    for local_batch, local_labels in training_generator:
        # Transfer to GPU
        local_batch, local_labels = local_batch.to(device), local_labels.to(device)

        # Model computations
        [...]
```
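For context, the loop assumes a `device` and a `training_generator` set up along these lines. This is a minimal sketch roughly following the linked post; the `TensorDataset` of random data is a stand-in for the post's custom `Dataset` class:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")

max_epochs = 100

# Stand-in data; the linked post wraps real data in a custom Dataset class.
training_set = TensorDataset(torch.randn(1000, 20),
                             torch.randint(0, 2, (1000,)))

# num_workers > 0 prepares batches in background worker processes;
# pin_memory=True allocates batches in page-locked memory, which makes
# the per-batch CPU-to-GPU copy faster.
training_generator = DataLoader(training_set, batch_size=64, shuffle=True,
                                num_workers=4, pin_memory=True)
```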
Question: this seems less efficient than simply loading the entire dataset onto the GPU once. Doesn't copying data to the GPU on every batch introduce latency? Am I missing something?
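To make the comparison concrete, here's a sketch of the alternative I have in mind: copy the full dataset to GPU memory once up front, then slice mini-batches directly on the device. Tensor shapes and names here are made up for illustration, and this of course only works if the whole dataset fits in GPU memory:

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
max_epochs, batch_size = 100, 64

# One-time transfer: the whole dataset lives on the GPU from here on.
all_inputs = torch.randn(1000, 20).to(device)
all_labels = torch.randint(0, 2, (1000,)).to(device)

for epoch in range(max_epochs):
    # Shuffle indices on the device each epoch.
    perm = torch.randperm(all_inputs.size(0), device=device)
    for i in range(0, all_inputs.size(0), batch_size):
        idx = perm[i:i + batch_size]
        local_batch, local_labels = all_inputs[idx], all_labels[idx]
        # Model computations (as in the loop above)
```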