OSError when running the Stanford NER tagger

Date: 2018-08-07 11:29:22

Tags: python stanford-nlp

Hi, I'm running a simple program to get the tags for a sentence using the Stanford NER tagger, and I get OSError: [Errno 12] Cannot allocate memory. Any idea why? I know the text is in English and the model is Spanish; I'm only testing to see whether the code works.

This is the code I currently have:

import nltk
from nltk.tag.stanford import StanfordNERTagger

sentence = "Nicolaus Saliba of Zabbar sells, with right of repurchase, to Nicolaus Cassar, alias Caylun, an inhabitant of the Castrum Maris, a field and its cistern called Tal-Blat situated in Birgu, the suburb of the same Castrum, for ten uncie of Maltese money which Cassar promises to pay on the following Christmas."

jar = 'stanford-ner.jar'
model = 'stanford-spanish-corenlp-2018-02-27-models.jar'
# Prepare NER tagger with the Spanish model
ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')
# Tokenize: Split sentence into words
words = nltk.word_tokenize(sentence)

# Run NER tagger on words
print(ner_tagger.tag(words))

0 Answers:

There are no answers