Multi-class text classification with BERT in Google Colab

Posted: 2019-06-23 15:42:44

Tags: python pytorch data-science google-colaboratory bert-language-model

I am working with a set of social media comments (including YouTube links) as input features, and Myers-Briggs personality profiles as the target labels:

    type    posts
0   INFJ    'http://www.youtube.com/watch?v=qsXHcwe3krw|||...
1   ENTP    'I'm finding the lack of me in these posts ver...
2   INTP    'Good one _____ https://www.youtube.com/wat...
3   INTJ    'Dear INTP, I enjoyed our conversation the o...
4   ENTJ    'You're fired.|||That's another silly misconce...

But from what I have found, BERT expects the DataFrame to be formatted as follows (a rough conversion sketch is included after the table):

a   label   posts
0   a   8   'http://www.youtube.com/watch?v=qsXHcwe3krw|||...
1   a   3   'I'm finding the lack of me in these posts ver...
2   a   11  'Good one _____ https://www.youtube.com/wat...
3   a   10  'Dear INTP, I enjoyed our conversation the o...
4   a   2   'You're fired.|||That's another silly misconce...
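For what it's worth, a minimal pandas sketch of that conversion, assuming the original frame is df, that the 16 types map to integer ids in sorted order (which would match the labels shown above, e.g. ENTJ=2, ENTP=3, INFJ=8), and that the first column is just a placeholder:

import pandas as pd

# Assumed sketch: integer ids assigned in sorted order of the 16 types;
# 'a' is a dummy placeholder column.
label_ids = {t: i for i, t in enumerate(sorted(df['type'].unique()))}
bert_df = pd.DataFrame({
    'a': 'a',
    'label': df['type'].map(label_ids),
    'posts': df['posts'],
})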

The resulting output must be predictions on the test set of comments, split into four columns, one per personality axis, where e.g. 'Mind' = 1 is the label for extroverted. Essentially, a type like INFJ is split into 'Mind', 'Energy', 'Nature' and 'Tactics', for example (a sketch of the split follows the table below):

    type    post    Mind    Energy  Nature  Tactics
0   INFJ    'url-web    0   1   0   1
1   INFJ    url-web 0   1   0   1
2   INFJ    enfp and intj moments url-web sportscenter n... 0   1   0   1
3   INFJ    What has been the most life-changing experienc...   0   1   0   1
4   INFJ    url-web url-web On repeat for most of today.    0   1   0   1
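A minimal pandas sketch of that split, with the 1/0 convention read off the rows above (i.e. 1 stands for E, N, T and J respectively, so INFJ maps to 0, 1, 0, 1):

import pandas as pd

# Assumed convention: 1 = E (Mind), N (Energy), T (Nature), J (Tactics)
def split_type(mbti):
    return pd.Series({
        'Mind':    int(mbti[0] == 'E'),
        'Energy':  int(mbti[1] == 'N'),
        'Nature':  int(mbti[2] == 'T'),
        'Tactics': int(mbti[3] == 'J'),
    })

df = df.join(df['type'].apply(split_type))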

I installed pytorch-pretrained-bert with the following command:

!pip install pytorch-pretrained-bert

I imported the model and tried to tokenize the 'posts' column as follows:

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM

# Load the pre-trained lowercase WordPiece vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# train here is the full DataFrame, not a single string
tokenized_train = tokenizer.tokenize(train)

but received this error:

TypeError: ord() expected a character, but string of length 5 found

I attempted this based on the pytorch-pretrained-bert GitHub repo and a YouTube video.
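From what I can tell, tokenize() expects a single string, while train above is the whole DataFrame, which is the likely cause of the TypeError; applying the tokenizer per post should avoid it:

# A guess at the fix: tokenize each post string individually rather
# than passing the DataFrame itself
tokenized_train = train['posts'].apply(tokenizer.tokenize)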

I am a data science intern with no deep learning experience at all. I simply want to experiment with the BERT model in the simplest way possible to predict multi-class outputs, so I can compare the results against the simpler text classification models we are currently working on. I am working in Google Colab, and the resulting output should be a .csv file.

I know this is a complex model, and all the documentation and examples around it are complex (fine-tuning layers, etc.), but any help with a simple implementation (if such a thing actually exists) would be greatly appreciated by a beginner data scientist with minimal software engineering experience.

2 answers:

Answer 0: (score: 2)

Simple is a subjective term. Assuming you can use TensorFlow and keras-bert, you can do multi-class text classification with BERT as follows:

import keras
from keras_bert import load_trained_model_from_checkpoint
from keras_radam import RAdam

# config_path, checkpoint_path, SEQ_LEN, LR, EPOCHS and BATCH_SIZE must
# be set for your BERT checkpoint and task
n_classes = 20

# Load pre-trained BERT with the training head so that the pooled
# 'NSP-Dense' layer is available
model = load_trained_model_from_checkpoint(
    config_path,
    checkpoint_path,
    training=True,
    trainable=True,
    seq_len=SEQ_LEN,
)

# Add a dense softmax layer on top of the pooled output for classification
inputs = model.inputs[:2]          # token ids and segment ids
dense = model.get_layer('NSP-Dense').output
outputs = keras.layers.Dense(units=n_classes, activation='softmax')(dense)
model = keras.models.Model(inputs, outputs)

model.compile(
    RAdam(lr=LR),
    loss='sparse_categorical_crossentropy',    # labels are integer class ids
    metrics=['sparse_categorical_accuracy'],
)

history = model.fit(
    train_x,
    train_y,
    epochs=EPOCHS,
    batch_size=BATCH_SIZE,
    validation_split=0.20,
    shuffle=True,
)
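A note on the inputs: keras-bert expects two arrays per example, token ids and segment ids, each padded to SEQ_LEN. A minimal preprocessing sketch, assuming vocab_path points to the checkpoint's vocab.txt and that texts/labels hold your posts and integer classes:

import numpy as np
from keras_bert import Tokenizer, load_vocabulary

# Build a tokenizer from the checkpoint's vocabulary
token_dict = load_vocabulary(vocab_path)
tokenizer = Tokenizer(token_dict)

ids, segments = [], []
for text in texts:
    # encode() returns token ids and segment ids padded to max_len
    token_ids, segment_ids = tokenizer.encode(text, max_len=SEQ_LEN)
    ids.append(token_ids)
    segments.append(segment_ids)

train_x = [np.array(ids), np.array(segments)]
train_y = np.array(labels)    # integer class ids in [0, n_classes)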

Here is a link to a full tutorial, with a Google Colab GPU implementation: Multi-class text classification using BERT on the 20 Newsgroup Dataset with Fine Tuning.

Check it out! https://pysnacks.com/machine-learning/bert-text-classification-with-fine-tuning/#multi-class-text-classification-using-bert

Answer 1: (score: 1)

I suggest you start with a simple BERT classification task, for example by following this excellent tutorial: https://mccormickml.com/2019/07/22/BERT-fine-tuning/

Then you can move on to multi-label classification with this: https://medium.com/huggingface/multi-label-text-classification-using-bert-the-mighty-transformer-69714fa3fb3d

Only then would I recommend attempting your task on your own dataset.