Validation loss constant while training loss decreases

Date: 2020-06-29 16:19:02

Tags: deep-learning pytorch trainingloss

I trained my model and got the plot below. The data is audio (about 70K clips of roughly 5–10 seconds each), with no augmentation. I have tried the following to avoid overfitting:

  • Reducing the model's complexity by decreasing the number of GRU units and the hidden dimension.
  • Adding dropout to every layer.
  • Using a larger dataset.
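For reference, the first two mitigations above could be sketched roughly like this (a minimal illustration, not the actual model; the class name, feature sizes, and class count are assumptions):

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a smaller bidirectional GRU with dropout between
# the recurrent layers and before the classifier.
class SpeechGRU(nn.Module):
    def __init__(self, n_mels=80, hidden=128, n_layers=2, n_classes=29, p_drop=0.3):
        super().__init__()
        # dropout= applies between stacked GRU layers (needs n_layers > 1)
        self.gru = nn.GRU(n_mels, hidden, num_layers=n_layers,
                          batch_first=True, dropout=p_drop, bidirectional=True)
        self.drop = nn.Dropout(p_drop)
        self.fc = nn.Linear(hidden * 2, n_classes)  # *2 for bidirectional

    def forward(self, x):            # x: (batch, time, n_mels)
        out, _ = self.gru(x)         # (batch, time, hidden * 2)
        return self.fc(self.drop(out))  # (batch, time, n_classes)
```

Shrinking `hidden`/`n_layers` and raising `p_drop` are the usual knobs when the validation curve stays flat while training loss keeps falling.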

I'm not sure whether my training and validation losses are computed correctly. Here is the code. I'm using drop_last=True, and the criterion is CTC loss.
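As an aside on the CTC criterion: a minimal sketch of the input shapes `torch.nn.CTCLoss` expects (all values here are random and illustrative):

```python
import torch
import torch.nn as nn

# nn.CTCLoss wants log-probabilities of shape (T, N, C), integer targets,
# and per-sample input/label lengths.
T, N, C, S = 50, 4, 29, 10                        # time steps, batch, classes, max target len
log_probs = torch.randn(T, N, C).log_softmax(2)   # (T, N, C)
targets = torch.randint(1, C, (N, S), dtype=torch.long)  # class indices, 0 reserved for blank
input_lengths = torch.full((N,), T, dtype=torch.long)
label_lengths = torch.randint(5, S + 1, (N,), dtype=torch.long)

criterion = nn.CTCLoss(blank=0)                   # reduction='mean' by default
loss = criterion(log_probs, targets, input_lengths, label_lengths)
```

Note that `nn.CTCLoss` expects integer targets, so the `labels.float()` call in the loop below may be worth double-checking if that criterion is used directly.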

train_data_len = len(train_loader.dataset)
valid_data_len = len(valid_loader.dataset)
train_losses = []
valid_losses = []

model.train()
for e in range(n_epochs):
    t0 = time.time()
    # batch loop
    running_loss = 0.0
    for batch_idx, _data in enumerate(train_loader, 1):
        optimizer.zero_grad()  # reset gradients so they don't accumulate across batches
        # Calculate output ...
        # bla bla
        loss = criterion(output, labels.float(), input_lengths, label_lengths)
        loss.backward()
        optimizer.step()
        scheduler.step()
        # loss stats
        running_loss += loss.item() * specs.size(0)

    t_t = time.time() - t0

    ######################
    # validate the model #
    ######################
    model.eval()
    with torch.no_grad():
        tv = time.time()
        running_val_loss = 0.0
        for batch_idx_v, _data in enumerate(valid_loader, 1):
            # bla, bla
            val_loss = criterion(output, labels.float(), input_lengths, label_lengths)
            running_val_loss += val_loss.item() * specs.size(0)

        print("Epoch {}: Training took {:.2f} [s]\tValidation took: {:.2f} [s]\n".format(e+1, t_t, time.time() - tv))

    epoch_train_loss = running_loss / train_data_len
    epoch_val_loss = running_val_loss / valid_data_len
    train_losses.append(epoch_train_loss)
    valid_losses.append(epoch_val_loss)
    print('Epoch: {} Losses\tTraining Loss: {:.6f}\tValidation Loss: {:.6f}'.format(
        e+1, epoch_train_loss, epoch_val_loss))
    model.train()
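One detail worth noting about the per-epoch averages above: with drop_last=True the loader skips the final incomplete batch, so dividing the running loss by len(dataset) divides by slightly more samples than were actually seen. A small arithmetic sketch (the batch size of 32 and the per-sample loss value are assumptions for illustration):

```python
# Effect of drop_last=True on the per-sample average.
dataset_len = 70_000
batch_size = 32
n_batches = dataset_len // batch_size   # drop_last=True keeps only full batches
n_seen = n_batches * batch_size         # samples actually trained on this epoch

running_loss = 1.23 * n_seen            # pretend every sample had loss 1.23
biased = running_loss / dataset_len     # divides by 70_000: slightly too low
unbiased = running_loss / n_seen        # exact per-sample average
```

Counting the samples actually processed (or dividing by `n_batches * batch_size`) removes that small bias; with 70K clips the effect is tiny, so it would not explain a flat validation curve on its own.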


0 Answers