How to use multiple GPUs with BERT

Date: 2020-10-23 14:05:52

Tags: python gpu bert-language-model

I am using the following BERT code to run my analysis on multiple GPUs.

import torch
from transformers import BertForSequenceClassification

# df is my labelled training DataFrame
model = BertForSequenceClassification.from_pretrained(
    "beomi/kcbert-large",
    num_labels = len(df['label'].unique()),
    output_attentions = False,
    output_hidden_states = False,
)

model = torch.nn.DataParallel(model)
model.cuda()
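
For reference, the device used in the training loop further down is set up in the usual way (a sketch; these exact lines are not in the snippet above, but it is essentially this):

import torch

# Use the GPU if available; device_count() shows how many GPUs
# torch.nn.DataParallel will spread each batch across.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Number of GPUs visible:", torch.cuda.device_count())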

When I run the analysis on a single GPU (that is, without model = torch.nn.DataParallel(model)), it works without any problems.

But after adding

model = torch.nn.DataParallel(model)

I get an error. This is the training code:

import random
import time

import numpy as np
import torch

# This training code is based on the `run_glue.py` script here:
# https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128

# Set the seed value all over the place to make this reproducible.
seed_val = 42

random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

training_stats = []
total_t0 = time.time()
for epoch_i in range(0, epochs):
    
    # ========================================
    #               Training
    # ========================================
    
    # Perform one full pass over the training set.

    t0 = time.time()

    total_train_loss = 0
    total_train_accuracy = 0


    for step, batch in enumerate(train_dataloader):

        if step % 40 == 0 and not step == 0:
            elapsed = format_time(time.time() - t0)
            print('  Batch {:>5,}  of  {:>5,}.    Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))


        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)


        model.zero_grad()        


        loss, logits = model(b_input_ids, 
                             token_type_ids=None, 
                             attention_mask=b_input_mask, 
                             labels=b_labels)


        total_train_loss += loss.item()
        
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()
        
        total_train_accuracy += flat_accuracy(logits, label_ids)

        loss.backward()

        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)

        optimizer.step()
        scheduler.step()
    avg_train_loss = total_train_loss / len(train_dataloader)            
    
    training_time = format_time(time.time() - t0)
    avg_train_accuracy = total_train_accuracy / len(train_dataloader)
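
For completeness, flat_accuracy and format_time used above are the helpers from the tutorial this code is based on; they are roughly the following (a sketch, not copied verbatim from my script):

import datetime
import numpy as np

# Fraction of predictions that match the labels for one batch.
def flat_accuracy(preds, labels):
    pred_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return np.sum(pred_flat == labels_flat) / len(labels_flat)

# Elapsed seconds formatted as hh:mm:ss.
def format_time(elapsed):
    return str(datetime.timedelta(seconds=int(round(elapsed))))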
    

I run into the following problem: ValueError: only one element tensors can be converted to Python scalars

-> total_train_loss += loss.item()
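
I can reproduce the same ValueError with a small toy module (a minimal sketch, assuming at least two GPUs are visible; ToyLossModel is only a stand-in that, like the BERT model, returns a scalar loss from each replica):

import torch
import torch.nn as nn

class ToyLossModel(nn.Module):
    def forward(self, x):
        # Each DataParallel replica returns a 0-dim (scalar) tensor.
        return (x ** 2).mean()

toy_model = nn.DataParallel(ToyLossModel())
toy_model.cuda()

loss = toy_model(torch.randn(8, 4).cuda())

print(loss.shape)  # torch.Size([2]) with two GPUs: one element per replica
loss.item()        # raises: only one element tensors can be converted to Python scalars
# loss.mean().item() runs without error on this toy example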

I do not understand what is causing this error.

Please help. Thank you.

0 Answers:

There are no answers yet.