Problem with a custom Dataset class in PyTorch

Date: 2020-06-30 08:25:48

Tags: pytorch

First, I created a custom Dataset to load images from a dataframe (which contains image file paths and the corresponding integer labels):

import torch
from skimage import io  # assuming skimage.io is the io module used for imread


class Dataset(torch.utils.data.Dataset):

    def __init__(self, dataframe, transform=None):
        self.frame = dataframe
        self.transform = transform

    def __len__(self):
        return len(self.frame)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()

        filename = self.frame.iloc[idx, 0]  # column 0: image file path
        image = torch.from_numpy(io.imread(filename).transpose((2, 0, 1))).float()  # HWC -> CHW
        label = self.frame.iloc[idx, 1]  # column 1: integer class label
        sample = {'image': image, 'label': label}
        if self.transform:
            sample = self.transform(sample)
        return sample

Then, I use an existing model architecture, as follows:

import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.densenet161()
num_ftrs = model.classifier.in_features
model.classifier = nn.Linear(num_ftrs, 10)  # where 10 is my number of classes

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

Finally, to train the model, I do the following:

model.train()  # switch to train mode

for epoch in range(5):
    for i, sample in enumerate(train_set):  # where train_set is an instance of my Dataset class
        optimizer.zero_grad()
        image, label = sample['image'].unsqueeze(0), torch.Tensor(sample['label']).long()
        output = model(image)

        loss = criterion(output, label)
        loss.backward()
        optimizer.step()

However, I get an error at loss = criterion(output, label). It tells me ValueError: Expected input batch_size (1) to match target batch_size (2). Can someone show me how to use a custom Dataset correctly, in particular how to load data in batches? Also, why am I getting this ValueError? Thanks!

1 Answer:

Answer 0 (score: 0)

Please check the following lines:

label = self.frame.iloc[idx, 1] in the dataset definition: print it to double-check whether it is returning two integers instead of a single one.
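To make that check concrete, here is a minimal sketch you can run (the value 2 is just a hypothetical label). Note that torch.Tensor(n) with a plain Python int treats n as a size and returns an uninitialized tensor of length n, which would explain why the target batch size is reported as 2:

import torch

label = 2  # hypothetical integer label read from the dataframe
print(torch.Tensor(label).shape)    # torch.Size([2]) - the int is treated as a size, values are uninitialized
print(torch.tensor(label).shape)    # torch.Size([])  - 0-dim tensor holding the value 2
print(torch.tensor([label]).shape)  # torch.Size([1]) - what CrossEntropyLoss expects as a target for a batch of 1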

image, label = sample['image'].unsqueeze(0), torch.Tensor(sample['label']).long() in the training code: print the shapes of the resulting tensors and check them against what the loss expects.
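Building on those checks, below is a minimal sketch of a corrected training loop (an assumption, not the original code): it wraps the Dataset in a torch.utils.data.DataLoader so batching is handled automatically, and relies on the default collate function to turn the integer labels into a LongTensor of shape [batch_size]. This assumes all images have the same spatial size so they can be stacked into one batch; the batch_size value is illustrative.

from torch.utils.data import DataLoader

train_loader = DataLoader(train_set, batch_size=4, shuffle=True)

model.train()
for epoch in range(5):
    for batch in train_loader:
        optimizer.zero_grad()
        images = batch['image']         # shape [B, 3, H, W], already batched by the DataLoader
        labels = batch['label'].long()  # shape [B], integer class indices for CrossEntropyLoss
        output = model(images)          # shape [B, 10]
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()

If you prefer to keep iterating over train_set one sample at a time, the single-sample equivalent is label = torch.tensor([sample['label']], dtype=torch.long) (lower-case torch.tensor with a list), which has shape [1] and matches the input batch of 1.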