PyTorch: feeding DataLoader batches with collate_fn() for a custom dataset and model does not work

Date: 2019-09-16 14:29:51

Tags: pytorch torchvision

Hi, I have created a custom dataset using a dataset class in which both the input and the target are images, as in semantic segmentation or pix2pix. I load the data with ImageFolder and a custom collate function, and I am trying to use a DataLoader to feed my custom dataset to a neural network for training, but an error occurs saying the input should be a tensor, not a list:

My collate function:

def my_collate(batch):
    data = [item[0] for item in batch]
    target = [item[1] for item in batch]
    return [data, target]
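For reference, a quick check with toy data (a hypothetical batch of random tensors, not the ImageFolder samples used below) shows that this collate hands back plain Python lists rather than stacked tensors:

import torch

# Hypothetical toy batch: four (input, target) pairs of small random image tensors.
toy_batch = [(torch.rand(3, 8, 8), torch.rand(3, 8, 8)) for _ in range(4)]

data, target = my_collate(toy_batch)
print(type(data))     # <class 'list'>
print(type(data[0]))  # <class 'torch.Tensor'>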

The dataset class:

class bsds_dataset(Dataset):
    def __init__(self, ds_main, ds_energy):
        self.dataset1 = ds_main
        self.dataset2 = ds_energy

    def __getitem__(self, index):
        x1 = self.dataset1[index]
        x2 = self.dataset2[index]

        return x1, x2

    def __len__(self):
        return len(self.dataset1)

Loading the dataset:

generic_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.ToPILImage(),
    #transforms.CenterCrop(size=128),
    #transforms.Lambda(lambda x: myimresize(x, (128, 128))),
    transforms.ToTensor(),
    #transforms.Normalize((0., 0., 0.), (6, 6, 6))
])

original_imagefolder = './images/whole'
target_imagefolder = './results/whole'

original_ds = ImageFolder(original_imagefolder, transform=generic_transform)
energy_ds = ImageFolder(target_imagefolder, transform=generic_transform)

dataset = bsds_dataset(original_ds, energy_ds)
loader = DataLoader(dataset, batch_size=16, collate_fn=my_collate)

epochs = 2
model = UNet(1, depth=5, merge_mode='concat')
model.cuda()
loss = torch.nn.MSELoss()
criterion_pixelwise = torch.nn.L1Loss()

loss.cuda()
criterion_pixelwise.cuda()

optimizer = optim.SGD(model.parameters(), lr=0.001)

Tensor = torch.cuda.FloatTensor
# main loop
for epoch in range(epochs):
    for i, batch in enumerate(loader):
        original, target = batch
        out = model(original)

This error occurs:

  TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list

I am trying to forward each batch through the model. The batch produced by the iteration is a list, but it should be a tensor, and I don't know how to convert it to a tensor or how to feed each instance of the batch to the model. Please help. Thank you very much.
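For what it's worth, torch.stack can turn a list of equally sized image tensors into a single batch tensor, but only when the list really does contain tensors; a minimal sketch with hypothetical tensors (not the ImageFolder samples above):

import torch

# Stacking a list of equally shaped tensors gives one (B, C, H, W) batch tensor.
imgs = [torch.rand(3, 128, 128) for _ in range(16)]
batch = torch.stack(imgs)
print(batch.shape)  # torch.Size([16, 3, 128, 128])

# Stacking fails, however, if the list holds tuples or nested lists, which is
# what my_collate() collects here, since ImageFolder items are (image, class_index) pairs.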

Full traceback:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-147-d1dea1bc00f8> in <module>
     15     for i, batch in enumerate(loader):
     16         original, target = batch
---> 17         out = model(original)

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

<ipython-input-7-5f743c3455c4> in forward(self, x)
     89         # encoder pathway, save outputs for merging
     90         for i, module in enumerate(self.down_convs):
---> 91             x, before_pool = module(x)
     92             encoder_outs.append(before_pool)
     93 

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

<ipython-input-5-26a0f7e21ea6> in forward(self, x)
     14 
     15     def forward(self, x):
---> 16         x = F.relu(self.conv1(x))
     17         x = F.relu(self.conv2(x))
     18         before_pool = x

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
    336                             _pair(0), self.dilation, self.groups)
    337         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338                         self.padding, self.dilation, self.groups)
    339 
    340 

TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list

1 Answer:

Answer 0 (score: 0):

Your collate function is the problem. Even though the individual dataset samples are tensors, the batch it returns is a list containing two list elements.

At the very least, both data and target need to be tensors. See here for more details.

Maybe something like this would work:

def my_collate(batch):
    data = [item[0] for item in batch]
    target = [item[1] for item in batch]
    return torch.Tensor(data), torch.Tensor(target)
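One caveat (my reading of the code in the question, not part of the original answer): each of the two underlying datasets is an ImageFolder, so every item is itself an (image, class_index) pair, and torch.Tensor() cannot be built directly from a list of image tensors, so the suggestion above may still fail on this data. A sketch that stacks just the images instead might look like this:

def my_collate(batch):
    # Each element of `batch` is (dataset1[i], dataset2[i]); each of those is an
    # (image_tensor, class_index) pair because ImageFolder attaches a class label.
    data = torch.stack([item[0][0] for item in batch])    # (B, C, H, W) inputs
    target = torch.stack([item[1][0] for item in batch])  # (B, C, H, W) targets
    return data, target

With a collate along these lines, `original, target = batch` in the training loop yields 4-D tensors; they would presumably still need to be moved to the GPU (e.g. original.cuda()) before the forward pass, since the model itself was moved with model.cuda().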