Stanford NER Tagger and NLTK - not working [OSError: Java command failed]

Date: 2018-05-31 11:07:28

Tags: nltk stanford-nlp ner

I am trying to run the Stanford NER Tagger with NLTK from a Jupyter notebook, but I keep getting

OSError: Java command failed

I have already tried the hack at https://gist.github.com/alvations/e1df0ba227e542955a8a and the thread Stanford Parser and NLTK.

I am using:

NLTK==3.3
Ubuntu==16.04LTS 

Here is my Python code:

from nltk import sent_tokenize, word_tokenize
from nltk.tag.stanford import StanfordNERTagger

Sample_text = "Google, headquartered in Mountain View, unveiled the new Android phone"

sentences = sent_tokenize(Sample_text)
tokenized_sentences = [word_tokenize(sentence) for sentence in sentences]

# Paths to the serialized classifier model and the Stanford NER jar
PATH_TO_GZ = '/home/root/english.all.3class.caseless.distsim.crf.ser.gz'
PATH_TO_JAR = '/home/root/stanford-ner.jar'

sn_3class = StanfordNERTagger(PATH_TO_GZ,
                              path_to_jar=PATH_TO_JAR,
                              encoding='utf-8')

annotations = [sn_3class.tag(sent) for sent in tokenized_sentences]

I obtained these files with the following commands:

wget http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-parser-full-2015-04-20.zip
# Extract the zip file.
unzip stanford-ner-2015-04-20.zip 
unzip stanford-parser-full-2015-04-20.zip 
unzip stanford-postagger-full-2015-04-20.zip

I get the following error:

CRFClassifier invoked on Thu May 31 15:56:19 IST 2018 with arguments:
   -loadClassifier /home/root/english.all.3class.caseless.distsim.crf.ser.gz -textFile /tmp/tmpMDEpL3 -outputFormat slashTags -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerOptions "tokenizeNLs=false" -encoding utf-8
tokenizerFactory=edu.stanford.nlp.process.WhitespaceTokenizer
Unknown property: |tokenizerFactory|
tokenizerOptions="tokenizeNLs=false"
Unknown property: |tokenizerOptions|
loadClassifier=/home/root/english.all.3class.caseless.distsim.crf.ser.gz
encoding=utf-8
Unknown property: |encoding|
textFile=/tmp/tmpMDEpL3
outputFormat=slashTags
Loading classifier from /home/root/english.all.3class.caseless.distsim.crf.ser.gz ... Error deserializing /home/root/english.all.3class.caseless.distsim.crf.ser.gz
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1380)
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1331)
    at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:2315)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
    at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2164)
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1249)
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1366)
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1377)
    ... 2 more

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-15-5621d0f8177d> in <module>()
----> 1 ne_annot_sent_3c = [sn_3class.tag(sent) for sent in tokenized_sentences]

/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag(self, tokens)
     79     def tag(self, tokens):
     80         # This function should return list of tuple rather than list of list
---> 81         return sum(self.tag_sents([tokens]), [])
     82 
     83     def tag_sents(self, sentences):

/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag_sents(self, sentences)
    102         # Run the tagger and get the output
    103         stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
--> 104                                        stdout=PIPE, stderr=PIPE)
    105         stanpos_output = stanpos_output.decode(encoding)
    106 

/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/__init__.pyc in java(cmd, classpath, stdin, stdout, stderr, blocking)
    134     if p.returncode != 0:
    135         print(_decode_stdoutdata(stderr))
--> 136         raise OSError('Java command failed : ' + str(cmd))
    137 
    138     return (stdout, stderr)

OSError: Java command failed : [u'/usr/bin/java', '-mx1000m', '-cp', '/home/root/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/root/english.all.3class.caseless.distsim.crf.ser.gz', '-textFile', '/tmp/tmpMDEpL3', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf-8']

1 Answer:

Answer 0 (score: 0)

Download Stanford Named Entity Recognizer version 3.9.1: see the "Download" section of The Stanford NLP website.

Unzip it and move the two files "stanford-ner.jar" and "english.all.3class.distsim.crf.ser.gz" into your working folder, for example with the commands sketched below.
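
A minimal command-line sketch of those two steps (the 2018-02-27 zip is assumed to correspond to release 3.9.1, and the extracted folder name is an assumption; check the download page before relying on either):

wget https://nlp.stanford.edu/software/stanford-ner-2018-02-27.zip
unzip stanford-ner-2018-02-27.zip
# Copy the jar and the cased 3-class model next to the notebook
cp stanford-ner-2018-02-27/stanford-ner.jar .
cp stanford-ner-2018-02-27/classifiers/english.all.3class.distsim.crf.ser.gz .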

Open a Jupyter notebook or an IPython prompt in that folder and run the following Python code:

import nltk
from nltk.tag.stanford import StanfordNERTagger

sentence = u"Twenty miles east of Reno, Nev., " \
    "where packs of wild mustangs roam free through " \
    "the parched landscape, Tesla Gigafactory 1 " \
    "sprawls near Interstate 80."

jar = './stanford-ner.jar'

model = './english.all.3class.distsim.crf.ser.gz'

ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')

words = nltk.word_tokenize(sentence)

# Run NER tagger on words
print(ner_tagger.tag(words))

I tested this on NLTK==3.3 and Ubuntu==16.04 LTS.
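
If you then want to collapse the (token, tag) pairs returned by tag() into entity spans, here is a small follow-up sketch using only the standard library (the variable names continue the example above):

from itertools import groupby

tagged = ner_tagger.tag(words)  # list of (token, tag) pairs

# Merge each run of consecutive tokens sharing the same non-'O' tag into one entity
entities = [(' '.join(tok for tok, _ in group), tag)
            for tag, group in groupby(tagged, key=lambda pair: pair[1])
            if tag != 'O']

print(entities)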