I am trying to train a dataset with the AlexNet model. The task is multi-class classification (15 classes). I would like to know why my accuracy is so low. I tried different learning rates, but it did not improve.
Here is an excerpt of the training code.
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# optimizer = optim.Adam(model.parameters(), lr=1e-2)  # also tried 1e-3, 1e-8

def train_valid_model():
    num_epochs = 5
    since = time.time()
    out_loss = open("history_loss_AlexNet_exp1.txt", "w")
    out_acc = open("history_acc_AlexNet_exp1.txt", "w")
    losses = []
    ACCes = []

    for epoch in range(num_epochs):  # loop over the dataset multiple times
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 50)

        if epoch % 10 == 9:  # save a checkpoint every 10 epochs
            torch.save({
                'epoch': epoch + 1,
                'model_state_dict': model.state_dict(),
                'optimizer_state_dict': optimizer.state_dict(),
                'loss': loss
            }, 'AlexNet_exp1_epoch{}.pth'.format(epoch + 1))

        for phase in ['train', 'valid', 'test']:
            if phase == 'train':
                model.train()
            else:
                model.eval()

            train_loss = 0.0
            total_train = 0
            correct_train = 0

            for t_image, target, image_path in dataLoaders[phase]:
                t_image = t_image.to(device)
                target = target.to(device)

                optimizer.zero_grad()

                # track gradients only in the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(t_image)
                    outputs = F.softmax(outputs, dim=1)
                    loss = criterion(outputs, target)

                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                _, predicted = torch.max(outputs.data, 1)
                train_loss += loss.item() * t_image.size(0)
                correct_train += (predicted == target).sum().item()

            epoch_loss = train_loss / len(dataLoaders[phase].dataset)
            losses.append(epoch_loss)

            epoch_acc = 100 * correct_train / len(dataLoaders[phase].dataset)
            ACCes.append(epoch_acc)

            print('{} Loss: {:.4f} {} Acc: {:.4f}'.format(phase, epoch_loss, phase, epoch_acc))
Here is the output for two epochs:
train Loss: 2.7026 train Acc: 17.2509  valid Loss: 2.6936 valid Acc: 28.7632  test Loss: 2.6936 test Acc: 28.7632
train Loss: 2.6425 train Acc: 17.8019  valid Loss: 2.6357 valid Acc: 28.7632  test Loss: 2.6355 test Acc: 28.7632
Answer (score: 0)
Just a basic tip that might help you get started:
import torchvision.models as models
alexnet = models.alexnet(pretrained=True)
When using AlexNet you can start from a pretrained model, which I don't see in your code. If you only need 15 classes, make sure to remove the fully connected layer at the end and add a new fc layer with 15 outputs.
Your AlexNet looks like this:
AlexNet(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
    (1): ReLU(inplace)
    (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (4): ReLU(inplace)
    (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (7): ReLU(inplace)
    (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (9): ReLU(inplace)
    (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace)
    (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
  (classifier): Sequential(
    (0): Dropout(p=0.5)
    (1): Linear(in_features=9216, out_features=4096, bias=True)
    (2): ReLU(inplace)
    (3): Dropout(p=0.5)
    (4): Linear(in_features=4096, out_features=4096, bias=True)
    (5): ReLU(inplace)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
So you only need to replace the classifier (6) layer. I think here answers how to remove fc6.
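As a minimal sketch of that replacement, assuming the torchvision AlexNet printed above and the 15 classes from the question (the freezing step and the variable names are illustrative, not part of the original code):

import torch.nn as nn
import torchvision.models as models

alexnet = models.alexnet(pretrained=True)

# optionally freeze the pretrained feature extractor so only the new head is trained
for param in alexnet.features.parameters():
    param.requires_grad = False

# swap the last fc layer (classifier[6]) from 1000 ImageNet classes to 15 outputs
num_classes = 15
alexnet.classifier[6] = nn.Linear(alexnet.classifier[6].in_features, num_classes)

If you do freeze the features, remember to pass only the parameters that still require gradients to the optimizer, for example optim.SGD(filter(lambda p: p.requires_grad, alexnet.parameters()), lr=1e-3, momentum=0.9).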
For multi-label classification, the last layer of the model should use a sigmoid function for the label predictions, and training should use a binary cross-entropy loss such as nn.BCELoss.
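A rough sketch of that multi-label setup follows; the shapes and names are illustrative only, and note that the question itself is single-label with 15 mutually exclusive classes, where CrossEntropyLoss on the raw logits is the usual choice.

import torch
import torch.nn as nn

num_classes = 15   # taken from the question
batch_size = 4     # illustrative only

logits = torch.randn(batch_size, num_classes)                     # raw model outputs
targets = torch.randint(0, 2, (batch_size, num_classes)).float()  # multi-hot labels

# sigmoid turns each logit into an independent per-label probability
probs = torch.sigmoid(logits)
loss = nn.BCELoss()(probs, targets)

# equivalent but numerically more stable: feed the raw logits to BCEWithLogitsLoss
loss_alt = nn.BCEWithLogitsLoss()(logits, targets)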