Suppose I have 77 samples to train my CNN, and my batch size is 10. Then the last batch has a batch size of 7 instead of 10. When I pass it to, for example, nn.MSELoss(), it gives me the error:

RuntimeError: The size of tensor a (10) must match the size of tensor b (7) at non-singleton dimension 1

So does PyTorch not support batches of different sizes? Here is my code:
import numpy as np
import torch
from torch import nn
import torchvision
import torch.nn.functional as F
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, (5, 4))
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(64, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, x.shape[1] * x.shape[2] * x.shape[3])
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

model = Net()
batch_size = 10

# Generating artificial data
x_train = torch.randn((77, 1, 20, 20))
y_train = torch.randint(0, 10, size=(77,), dtype=torch.float)
trainset = torch.utils.data.TensorDataset(x_train, y_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=0)
# testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=0)

criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

for epoch in range(20):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 10 == 0:
            print('epoch {}, step {}, loss: {}'.format(epoch + 1, i + 1, running_loss))
            # print("frac post = {}".format(frac_post))
            running_loss = 0.0
Answer 0 (score: 2)
The problem is not due to the batch size, but to a failure to broadcast correctly between the 10 outputs of your CNN and the single label provided for each example. If you look at the model output and label tensor shapes during the batch that throws the error,

print(outputs.shape, labels.shape)
#out: torch.Size([7, 10]) torch.Size([7])

you will see that the labels are stored in a singleton tensor. According to the PyTorch broadcasting rules, two tensors must be compatible in all trailing dimensions to be broadcastable. In this case, the trailing dimension of the model output (10) is incompatible with that of the labels (7).
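For instance, the rule can be checked in isolation (a minimal sketch, independent of the training loop):

import torch

out = torch.randn(7, 10)        # model output: batch of 7, 10 values each
labels = torch.randn(7)         # trailing dims 10 vs 7 -> not broadcastable
labels_col = torch.randn(7, 1)  # trailing dim 1 broadcasts against 10

print((out - labels_col).shape)  # torch.Size([7, 10])
# (out - labels) would raise the same RuntimeError as above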
To fix this, either add a dummy dimension to the labels (assuming you actually want to broadcast the labels to match your ten network outputs), or define a network with a scalar output. For example, generating the labels with an explicit trailing dimension,

y_train = torch.randint(0, 10, size=(77, 1), dtype=torch.float)

produces

print(outputs.shape, labels.shape)
#out: torch.Size([7, 10]) torch.Size([7, 1])

which is broadcastable.
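The scalar-output alternative could look like this (a rough sketch of the second option, reusing the Net class from the question; the name ScalarNet is illustrative): shrink the last layer to one unit and squeeze the trailing dimension so the output shape matches the labels directly.

class ScalarNet(Net):
    def __init__(self):
        super(ScalarNet, self).__init__()
        self.fc3 = nn.Linear(84, 1)  # one output per example instead of 10

    def forward(self, x):
        x = super(ScalarNet, self).forward(x)  # shape (N, 1)
        return x.squeeze(1)                    # shape (N,), matches the labels

model = ScalarNet()
# criterion(model(inputs), labels) now compares two tensors of shape (N,),
# so the original y_train of size (77,) works unchanged.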