I am trying to classify an image dataset of cats and dogs using PyTorch. In my code so far I download the data into a directory that contains two folders, named "cats" and "dogs". I then try to load this data into a DataLoader and iterate over it in batches, but it gives me an error at the iteration step that I don't understand.
Since this is in Google Colab, I have included the code for downloading the data and installing the libraries. Any other suggestions about my code so far would also be appreciated.
!pip install torch
!pip install torchvision
from __future__ import print_function, division
import os
import torch
import pandas as pd
import numpy as np
# For showing and formatting images
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# For importing datasets into pytorch
import torchvision.datasets as dataset
# Used for dataloaders
import torch.utils.data as data
# For pretrained resnet34 model
import torchvision.models as models
# For optimisation function
import torch.nn as nn
import torch.optim as optim
!wget http://files.fast.ai/data/dogscats.zip
!unzip dogscats.zip
batch_size = 256
train_raw = dataset.ImageFolder(PATH+"train", transform=transforms.ToTensor())
train_loader = data.DataLoader(train_raw, batch_size=batch_size, shuffle=True)
for batch_idx, (data, target) in enumerate(train_loader):
    print("Data: ", batch_idx)
The error occurs on the last line and looks like this:
RuntimeErrorTraceback (most recent call last)
<ipython-input-66-c32dd0c1b880> in <module>()
----> 1 for batch_idx, (data, target) in enumerate(train_loader):
2 print("Data: ", batch_idx)
3
/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.pyc in __next__(self)
257 if self.num_workers == 0: # same-process loading
258 indices = next(self.sample_iter) # may raise StopIteration
--> 259 batch = self.collate_fn([self.dataset[i] for i in indices])
260 if self.pin_memory:
261 batch = pin_memory_batch(batch)
/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.pyc in default_collate(batch)
133 elif isinstance(batch[0], collections.Sequence):
134 transposed = zip(*batch)
--> 135 return [default_collate(samples) for samples in transposed]
136
137 raise TypeError((error_msg.format(type(batch[0]))))
/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.pyc in default_collate(batch)
110 storage = batch[0].storage()._new_shared(numel)
111 out = batch[0].new(storage)
--> 112 return torch.stack(batch, 0, out=out)
113 elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
114 and elem_type.__name__ != 'string_':
/usr/local/lib/python2.7/dist-packages/torch/functional.pyc in stack(sequence, dim, out)
62 inputs = [t.unsqueeze(dim) for t in sequence]
63 if out is None:
---> 64 return torch.cat(inputs, dim)
65 else:
66 return torch.cat(inputs, dim, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 400 and 487 in dimension 2 at /pytorch/torch/lib/TH/generic/THTensorMath.c:2897
Thanks
Answer 0 (score: 1)
I see two problems in your code. First, you import torch.utils.data as data and then overwrite data again as the loop variable in the DataLoader iteration. Please keep imported module names and variable names in separate namespaces. I also think this error could be caused by the data returned by the dataloader (the images) and the labels having different sizes. As you can see, the concatenation fails because the first dimensions, i.e. the label size and the number of images in the folder, do not match. Hope this helps.
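For example, a minimal sketch of one way to keep the namespaces separate is to alias the module and rename the loop variables (the names torch_data, images and labels below are just illustrative):
import torch.utils.data as torch_data  # module alias no longer clashes with a variable called "data"

train_loader = torch_data.DataLoader(train_raw, batch_size=batch_size, shuffle=True)
for batch_idx, (images, labels) in enumerate(train_loader):  # loop variables renamed away from "data"
    print("Data: ", batch_idx)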
Answer 1 (score: 1)
I think the main problem is that the images have different sizes. I may be misunderstanding ImageFolder, but as long as the directory structure is what PyTorch expects, I don't think you need image labels; PyTorch works the labels out for you from the folder names. I would also add more to your transforms so that every image in the folder is automatically resized, for example:
normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225]
)
transform = transforms.Compose(
    [transforms.Resize((224,224)), transforms.ToTensor(),
     normalize])
You can also use other tricks to make your DataLoader faster, such as setting a batch_size and a number of CPU workers, for example:
testloader = DataLoader(testset, batch_size=16,
                        shuffle=False, num_workers=4)
I think this will make your pipeline faster.
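Putting both suggestions together, a minimal sketch of the loading pipeline could look like the following (it assumes the same dogscats folder layout and PATH variable as in the question):
import torchvision.datasets as dataset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# resize every image to the same shape before turning it into a tensor and normalising it
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_raw = dataset.ImageFolder(PATH + "train", transform=transform)
train_loader = DataLoader(train_raw, batch_size=256, shuffle=True, num_workers=4)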
Answer 2 (score: 0)
I think my comment to Manoj Acharya was wrong and the real problem was putting the batch_size into the DataLoader. I read the source below and it seems you cannot batch images of different sizes:
https://medium.com/@yvanscher/pytorch-tip-yielding-image-sizes-6a776eb4115b
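This matches the traceback above: default_collate calls torch.stack on every image tensor in the batch, and torch.stack requires all tensors to have the same shape. A quick sketch of the failure (shapes chosen just for illustration):
import torch

a = torch.zeros(3, 224, 224)  # one RGB image tensor
b = torch.zeros(3, 224, 400)  # an image with a different width
print(torch.stack([a, a], 0).shape)  # works: torch.Size([2, 3, 224, 224])
torch.stack([a, b], 0)               # fails: sizes of tensors must match except in dimension 0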
So after changing the data variable in my code as Manoj pointed out, I changed the batch_size to 1 and the program stopped failing. I still wanted to process the data in batches, so I added a further transform, CenterCrop(), to resize all images to the same size. Below is my new code:
!pip install torch
!pip install torchvision
from __future__ import print_function, division
import os
import torch
import pandas as pd
import numpy as np
# For showing and formatting images
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# For importing datasets into pytorch
import torchvision.datasets as dataset
# Used for dataloaders
from torch.utils.data import DataLoader
# For pretrained resnet34 model
import torchvision.models as models
# For optimisation function
import torch.nn as nn
import torch.optim as optim
# For turning data into tensors
import torchvision.transforms as transforms
!wget http://files.fast.ai/data/dogscats.zip
!unzip dogscats.zip
batch_size = 256
sz = 224
train_raw = dataset.ImageFolder(PATH+"train", transform=transforms.Compose([transforms.CenterCrop(sz),transforms.ToTensor()]))
train_loader = DataLoader(train_raw,batch_size=batch_size, shuffle=True)
for batch_idx, (data, target) in enumerate(train_loader):
    print("Data: ", batch_idx)
Thanks