I'm trying to train a spaCy NER model. I have about 2,600 paragraphs of data, each between 200 and 800 words long, and I need to add two new entity labels, PRODUCT and SPECIFICATION. Is the approach below a good way to train this, or is there a better alternative? If it is suitable, could someone suggest appropriate values for the compounding rate and batch size, and what range the loss value should fall in during training? Right now my loss values are in the range of 400 to 5.
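For context, the training-data variable ret_data used in the code below is assumed to hold (text, annotations) tuples with character offsets, the format spaCy's update loop expects. A hypothetical sketch of what it might look like for these two labels (the texts and offsets are invented for illustration):

import spacy  # spaCy v2 API assumed throughout

# Hypothetical examples of the format ret_data is assumed to hold:
# (text, {'entities': [(start_char, end_char, label)]})
ret_data = [
    ("The XC-200 drill delivers 500 W of power.",
     {'entities': [(4, 10, 'PRODUCT'), (26, 31, 'SPECIFICATION')]}),
    ("Order the AquaPump 3 with a 2 m hose.",
     {'entities': [(10, 20, 'PRODUCT'), (28, 31, 'SPECIFICATION')]}),
]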
import random
from pathlib import Path

import plac
import spacy
from spacy.util import minibatch, compounding


def main(model=None, new_model_name='product_details_parser',
         output_dir=Path('/xyz_path/'), n_iter=20):
    """Set up the pipeline and entity recognizer, and train the new
    entities."""
    if model is not None:
        nlp = spacy.load(model)  # load existing spaCy model
        print("Loaded model '%s'" % model)
    else:
        nlp = spacy.blank('en')  # create blank Language class
        print("Created blank 'en' model")
    # Add the entity recognizer to the model if it's not in the pipeline.
    # nlp.create_pipe works for built-ins that are registered with spaCy.
    if 'ner' not in nlp.pipe_names:
        ner = nlp.create_pipe('ner')
        nlp.add_pipe(ner)
    # Otherwise, get it, so we can add labels to it.
    else:
        ner = nlp.get_pipe('ner')
    for label in ('PRODUCT', 'SPECIFICATION'):
        ner.add_label(label)  # add the new entity labels to the recognizer
    if model is None:
        optimizer = nlp.begin_training()
    else:
        # Note that 'begin_training' initializes the models, so it'll zero
        # out existing entity types.
        optimizer = nlp.entity.create_optimizer()
    # Get the names of the other pipes so we can disable them during training.
    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
    with nlp.disable_pipes(*other_pipes):  # only train NER
        for itn in range(n_iter):
            random.shuffle(ret_data)  # the annotated examples (format above)
            losses = {}
            # Batch up the examples using spaCy's minibatch, with a batch
            # size that compounds from 1 up to 32.
            batches = minibatch(ret_data, size=compounding(1., 32., 1.001))
            for batch in batches:
                texts, annotations = zip(*batch)
                nlp.update(texts, annotations, sgd=optimizer, drop=0.35,
                           losses=losses)
            print('Losses', losses)


if __name__ == '__main__':
    plac.call(main)
Answer 0 (score: 0)
Instead of this style of training, you could start with the simple training style (https://spacy.io/usage/training#training-simple-style). Compared with your approach this simple method may take more time, but it tends to give better results.
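A minimal sketch of that simple training style from the linked page, adapted to the two labels in the question (spaCy v2 API; the training example, iteration count, and output path are placeholders): it updates on one example at a time instead of using minibatch and compounding.

import random
import spacy

# Placeholder standing in for the 2,600 annotated paragraphs.
TRAIN_DATA = [
    ("The XC-200 drill delivers 500 W of power.",
     {'entities': [(4, 10, 'PRODUCT'), (26, 31, 'SPECIFICATION')]}),
]

nlp = spacy.blank('en')
ner = nlp.create_pipe('ner')
nlp.add_pipe(ner)
for label in ('PRODUCT', 'SPECIFICATION'):
    ner.add_label(label)

optimizer = nlp.begin_training()
for itn in range(20):
    random.shuffle(TRAIN_DATA)
    for text, annotations in TRAIN_DATA:
        # Update on a single example per step, no minibatching.
        nlp.update([text], [annotations], sgd=optimizer)
nlp.to_disk('/output_path')  # placeholder path

Since each update sees only one example, training is slower per epoch than the compounding-minibatch loop above, which is the trade-off the answer refers to.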