Validation loss is lower than training loss, but accuracy is worse

Asked: 2019-09-29 17:06:41

Tags: python machine-learning neural-network deep-learning pytorch

I am training an FFNN for classification, and I am wondering why my validation loss always seems to be lower than my training loss, even though the validation accuracy is also worse than the training accuracy.

I have found some similar questions where the cause was that the validation set itself happened to be easier than the training set, so the model performed better on it, but none covering the case where the loss is lower while the accuracy is also lower.
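One common explanation for this pattern (a toy illustration of the general effect, not a diagnosis of this specific model): the training loss reported for an epoch is an average over all batches, including the early high-loss batches, while the validation loss is computed once at the end of the epoch with the improved weights. The numbers below are made up purely to show the arithmetic:

```python
# Hypothetical per-batch training losses that fall over the course of one epoch.
batch_losses = [1.0, 0.8, 0.6, 0.5, 0.4]

# The reported training loss is the running average over the whole epoch,
# so it is dragged up by the early, high-loss batches.
train_loss = sum(batch_losses) / len(batch_losses)

# The validation loss is measured only after the epoch, with the latest weights;
# even a model that generalizes slightly worse can score below the epoch average.
val_loss = 0.55

print(train_loss)            # epoch-average training loss
print(train_loss > val_loss) # validation loss ends up looking lower
```

This does not by itself explain worse validation *accuracy*; that part usually comes down to the loss and the accuracy measuring different things (confidence of the logits vs. argmax correctness).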

[plot of the training and validation loss/accuracy curves per epoch]

Here is the code I use to train my PyTorch NN, including the loss computation:

optimizer = optim.Adam(model_pyt.parameters(), lr=learning_rate, betas=(0.9, 0.999))
criterion = nn.CrossEntropyLoss()

for epoch in range(epochs):
    start_time = time.time()
    train_running_loss = 0
    train_acc = 0
    model_pyt.train()
    for i, data_pack in enumerate(training_generator):
        x_data, y_data = data_pack
        optimizer.zero_grad()
        outputs = model_pyt(x_data)
        loss = criterion(outputs, y_data)
        loss.backward()
        optimizer.step()
        train_running_loss += loss.detach().item()
        train_acc += get_accuracy(outputs, y_data, batch_size)

    # Evaluate on the test set in eval mode and without tracking gradients.
    model_pyt.eval()
    with torch.no_grad():
        test_labels = torch.tensor(labels_test).long()
        test_inputs = torch.tensor(np.array(data_bal_test)).float()
        test_outputs = model_pyt(test_inputs)
        test_loss = criterion(test_outputs, test_labels).item()
        test_acc = get_accuracy(test_outputs, test_labels, len(test_labels))

    print('Epoch:  %d | Loss: %.4f | Acc %.4f | Test-Loss: %.4f | Test-Acc %.4f | Time Elapsed: %s'
          % (epoch + 1, train_running_loss / (i + 1), train_acc / (i + 1), test_loss, test_acc, time_since(start_time)))
    print('=====================================================================================================')
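The snippet calls `get_accuracy` but does not define it. A minimal sketch of what such a helper might look like, given how it is called above; the name and signature come from the question, the implementation is an assumption:

```python
import torch

def get_accuracy(outputs, labels, batch_size):
    # outputs: raw logits of shape (batch_size, num_classes)
    # labels:  class indices of shape (batch_size,)
    # The predicted class is the index of the largest logit in each row.
    preds = torch.argmax(outputs, dim=1)
    correct = (preds == labels).sum().item()
    return correct / batch_size
```

Returning a per-batch fraction matches the `train_acc / (i + 1)` averaging in the print statement above.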

0 Answers

There are no answers yet.