Input size mismatch error when using a pre-trained inceptionV3 model for image classification

Asked: 2018-01-21 14:15:22

Tags: computer-vision deep-learning pytorch

I am having trouble training a model on my own image dataset using a pre-trained inceptionV3 network.

I am loading the images with a data.Dataset loader and using 'transforms' for the image transformations.

Here is my inceptionV3 model:

import torch.nn as nn
import torchvision

inceptionV3 = torchvision.models.inception_v3(pretrained=True)
pretrained_model = nn.Sequential(*list(inceptionV3.children()))
# Drop the final fc layer
pretrained_model = nn.Sequential(*list(pretrained_model.children())[:-1])
# Freeze the pretrained weights
for param in pretrained_model.parameters():
    param.requires_grad = False
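
For reference, a minimal sketch of an alternative way to strip the classifier head (an assumption on my part, not code from my project): slicing children() into nn.Sequential bypasses Inception3's own forward(), including the flatten it performs before fc, so one can instead keep the model intact and swap the fc layer for an identity.

import torch.nn as nn
import torchvision

# Sketch of an alternative head-stripping approach (assumes torchvision's
# Inception3 layout, where the classifier layer is named `fc`, and
# PyTorch >= 1.1 for nn.Identity). aux_logits=False drops the auxiliary
# classifier branch.
model = torchvision.models.inception_v3(pretrained=True, aux_logits=False)
model.fc = nn.Identity()  # forward() now returns the 2048-d pooled features
for param in model.parameters():
    param.requires_grad = False
model.eval()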

Here is my transform code:

from torchvision import transforms

data_transforms = transforms.Compose([
    transforms.Scale((299, 299)),  # transforms.Scale was later renamed transforms.Resize
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
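
For context, here is a hedged sketch of how these transforms are typically wired into a loader; the "data/train" path and loader settings are illustrative assumptions, not taken from my actual code, though batch_size=256 matches the input shape printed further below.

from torch.utils.data import DataLoader
from torchvision import datasets

# Illustrative pipeline (the "data/train" path is a placeholder assumption);
# ImageFolder expects one subfolder per class.
train_dataset = datasets.ImageFolder("data/train", transform=data_transforms)
train_loader = DataLoader(train_dataset, batch_size=256, shuffle=True, num_workers=4)
inputs, labels = next(iter(train_loader))  # inputs: [256, 3, 299, 299]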

I get this error at the line 'out_features = pretrained_model(inputs)':
Traceback (most recent call last):
  File "build_models.py", line 42, in <module>
    use_gpu=use_gpu)
  File "/home/ubuntu/apparel-styles/ml_src/classifiers.py", line 488, in create_attributes_model
    batch_size, num_workers, num_epochs, use_gpu=use_gpu)
  File "/home/ubuntu/apparel-styles/ml_src/classifiers.py", line 276, in train_model
    flatten_pretrained_out=flatten_pretrained_out)
  File "/home/ubuntu/apparel-styles/ml_src/classifiers.py", line 181, in train_attribute_model
    out_features = pretrained_model(inputs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 68, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 78, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
    raise output
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 42, in _worker
    output = module(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torchvision/models/inception.py", line 312, in forward
    x = self.fc(x)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 835, in linear
    return torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch at /pytorch/torch/lib/THC/generic/THCTensorMathBlas.cu:243

After getting this error, I tried printing the input size. Its value is:

print(inputs.shape)

torch.Size([256, 3, 299, 299])
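
A minimal debugging sketch (my own assumption, using the pretrained_model built above and a recent PyTorch for torch.no_grad): push a small dummy 299x299 batch through the frozen extractor to reproduce the failure without the data pipeline or training loop.

import torch

# Dummy batch matching the printed input shape (smaller N to keep it cheap).
dummy = torch.randn(2, 3, 299, 299)
pretrained_model.eval()
with torch.no_grad():
    try:
        out = pretrained_model(dummy)
        print(out.shape)
    except RuntimeError as e:
        print("forward failed:", e)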
