Two cycled torch DataLoaders behave differently in the last step when zipped with a plain torch DataLoader

Asked: 2019-05-09 13:20:21

Tags: python pytorch

from itertools import cycle

unlabelledloader2 = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=unlabelled_sampler2, num_workers=workers, pin_memory=True)
unlabelledloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=unlabelled_sampler, num_workers=workers, pin_memory=True)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=train_sampler, num_workers=workers, pin_memory=True)

for (input, target), (u, _), (u2, _) in zip(cycle(trainloader), unlabelledloader, cycle(unlabelledloader2)):

trainloader and unlabelledloader2 each have 1000 samples, while unlabelledloader has 2037. All of them are set to yield 100 samples per step.

In the last step, I expected unlabelledloader to yield 37 samples, and trainloader and unlabelledloader2 to yield the same number as each other (either 37 or 100). But something strange happened: trainloader yielded 37 while unlabelledloader2 yielded 100.
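For reference, the truncation behavior of `zip` with `itertools.cycle` can be sketched with plain lists standing in for the DataLoaders (this is a minimal sketch, not the actual loaders; each list element represents a batch size, mirroring the counts above: 10 batches of 100 for the two cycled loaders, 20 batches of 100 plus one of 37 for the plain one):

```python
from itertools import cycle

# Stand-ins for the loaders: each element is one batch's size.
# trainloader / unlabelledloader2: 1000 samples -> 10 batches of 100
cycled_a = [100] * 10
cycled_b = [100] * 10
# unlabelledloader: 2037 samples -> 20 batches of 100 plus one of 37
plain = [100] * 20 + [37]

# zip stops when the non-cycled iterator is exhausted, after 21 steps.
steps = list(zip(cycle(cycled_a), plain, cycle(cycled_b)))
print(len(steps))   # 21
print(steps[-1])    # (100, 37, 100)
```

With plain lists, both cycled iterators yield a full batch of 100 at step 21, so the asymmetry observed above does not come from `zip`/`cycle` truncation itself and presumably involves how DataLoader iteration interacts with `cycle`.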

Can someone explain this? Thanks a lot!

0 Answers:

No answers yet