I'm trying to classify flowers with a pretrained network, but for some reason it won't train

Asked: 2018-07-16 16:42:33

Tags: python python-3.x deep-learning classification pytorch

I am currently trying to classify the flowers in my dataset using PyTorch.

First, I started by transforming my data for the training, validation, and test sets:

from collections import OrderedDict

import torch
from torch import nn, optim
from torchvision import datasets, transforms, models

data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406], 
                                                            [0.229, 0.224, 0.225])])

test_transforms = transforms.Compose([transforms.Resize(224),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406], 
                                                           [0.229, 0.224, 0.225])])

Then I loaded the data with ImageFolder:

trainset = datasets.ImageFolder(train_dir, transform=train_transforms)
testset = datasets.ImageFolder(test_dir, transform=test_transforms)
validationset = datasets.ImageFolder(valid_dir, transform=test_transforms)

Then I defined my DataLoaders:

trainloader = torch.utils.data.DataLoader(trainset, batch_size = 64, shuffle = True)
testloader = torch.utils.data.DataLoader(testset, batch_size = 32)
validationloader = torch.utils.data.DataLoader(validationset, batch_size = 32)

I chose VGG as my pretrained model:

model = models.vgg16(pretrained = True)

and defined a new classifier:

for param in model.parameters():
    param.requires_grad = False

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 4096)),
    ('relu1', nn.ReLU()),
    ('fc2', nn.Linear(4096, 4096)),
    ('relu2', nn.ReLU()),
    ('fc3', nn.Linear(4096, 102)),
    ('output', nn.Softmax(dim = 1))
]))

model.classifier = classifier 

Here is the code that actually trains my NN (on the GPU):

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr = 0.005)

epochs = 9
print_every = 10
steps = 0

model.to('cuda')

for e in range(epochs):
    running_loss = 0

    for ii, (inputs, labels) in enumerate(trainloader):
        steps += 1

        inputs, labels = inputs.to('cuda'), labels.to('cuda')

        optimizer.zero_grad()

        # Forward and backward
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        if steps % print_every == 0:
            print("Epoch: {}/{}... ".format(e+1, epochs),
                  "Loss: {:.4f}".format(running_loss/print_every))

            running_loss = 0
But when I run the model, the loss jumps around randomly and I'm not sure why.

Thanks in advance for any help, and greetings from Germany!

1 Answer:

Answer 0 (score: 1)

Here are a few tips that I think will help:

  1. Try some hyperparameter optimization. (i.e., try around 10 learning rates in a domain like 1e-2 to 1e-6; a sketch of such a sweep follows this list.) More details on this here: http://cs231n.github.io/neural-networks-3/#hyper
  2. Write code that prints an accuracy metric whenever you print the loss, because you might be surprised how accurate a pretrained model can get.
  3. Try switching to model = models.vgg16_bn(pretrained = True), as well as larger networks such as vgg19 or resnet34.
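
For tip 1, here is a minimal sketch of such a sweep. It reuses the model, criterion, optim, and trainloader from the question; the one-epoch-per-rate budget and the state-dict reset are my own assumptions, not something from the original post:

import copy
import numpy as np

# Remember the freshly initialized classifier weights so every
# candidate learning rate starts from the same point.
init_state = copy.deepcopy(model.classifier.state_dict())

for lr in np.logspace(-2, -6, num=10):
    model.classifier.load_state_dict(init_state)  # reset between runs
    optimizer = optim.Adam(model.classifier.parameters(), lr=lr)

    running_loss = 0
    for inputs, labels in trainloader:
        inputs, labels = inputs.to('cuda'), labels.to('cuda')
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

    print("lr = {:.0e}: mean loss = {:.4f}".format(lr, running_loss / len(trainloader)))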

Could you include your accuracy and loss for each epoch?
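
To get those numbers, a quick validation pass like the sketch below would do. It reuses the validationloader and criterion from the question; the validate helper is just an illustrative name:

def validate(model, loader, criterion):
    model.eval()  # disable dropout etc. for evaluation
    correct, total, loss_sum = 0, 0, 0.0
    with torch.no_grad():
        for inputs, labels in loader:
            inputs, labels = inputs.to('cuda'), labels.to('cuda')
            outputs = model(inputs)
            loss_sum += criterion(outputs, labels).item()
            # Count how many predicted classes match the labels
            correct += (outputs.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    model.train()
    return loss_sum / len(loader), correct / total

Call it at the end of each epoch and print the result next to the training loss, for example:

val_loss, val_acc = validate(model, validationloader, criterion)
print("Val loss: {:.4f}, Val accuracy: {:.2%}".format(val_loss, val_acc))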

Let me know if these tips help!

(Hello from the USA!)