The NLTK documentation on this integration is rather poor. The steps I followed were:
Download http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip to /home/me/stanford
Download http://nlp.stanford.edu/software/stanford-spanish-corenlp-2015-01-08-models.jar to /home/me/stanford
Then, in the ipython console:
In [11]: import nltk
In [12]: nltk.__version__
Out[12]: '3.1'
In [13]: from nltk.tag import StanfordNERTagger
Then:
st = StanfordNERTagger('/home/me/stanford/stanford-postagger-full-2015-04-20.zip', '/home/me/stanford/stanford-spanish-corenlp-2015-01-08-models.jar')
But when I try to run it:
st.tag('Adolfo se la pasa corriendo'.split())
Error: no se ha encontrado o cargado la clase principal edu.stanford.nlp.ie.crf.CRFClassifier
(i.e. "could not find or load main class edu.stanford.nlp.ie.crf.CRFClassifier")
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-14-0c1a96b480a6> in <module>()
----> 1 st.tag('Adolfo se la pasa corriendo'.split())
/home/nanounanue/.pyenv/versions/3.4.3/lib/python3.4/site-packages/nltk/tag/stanford.py in tag(self, tokens)
64 def tag(self, tokens):
65 # This function should return list of tuple rather than list of list
---> 66 return sum(self.tag_sents([tokens]), [])
67
68 def tag_sents(self, sentences):
/home/nanounanue/.pyenv/versions/3.4.3/lib/python3.4/site-packages/nltk/tag/stanford.py in tag_sents(self, sentences)
87 # Run the tagger and get the output
88 stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
---> 89 stdout=PIPE, stderr=PIPE)
90 stanpos_output = stanpos_output.decode(encoding)
91
/home/nanounanue/.pyenv/versions/3.4.3/lib/python3.4/site-packages/nltk/__init__.py in java(cmd, classpath, stdin, stdout, stderr, blocking)
132 if p.returncode != 0:
133 print(_decode_stdoutdata(stderr))
--> 134 raise OSError('Java command failed : ' + str(cmd))
135
136 return (stdout, stderr)
OSError: Java command failed : ['/usr/bin/java', '-mx1000m', '-cp', '/home/nanounanue/Descargas/stanford-spanish-corenlp-2015-01-08-models.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/nanounanue/Descargas/stanford-postagger-full-2015-04-20.zip', '-textFile', '/tmp/tmp6y169div', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf8']
StanfordPOSTagger
Note: I need this to be the Spanish version.
Note: I am on Python 3.4.3.
Answer 0 (score: 10)
Try:
# StanfordPOSTagger
from nltk.tag.stanford import StanfordPOSTagger
stanford_dir = '/home/me/stanford/stanford-postagger-full-2015-04-20/'
modelfile = stanford_dir + 'models/english-bidirectional-distsim.tagger'
jarfile = stanford_dir + 'stanford-postagger.jar'
st = StanfordPOSTagger(model_filename=modelfile, path_to_jar=jarfile)
# NERTagger
from nltk.tag.stanford import StanfordNERTagger
stanford_dir = '/home/me/stanford/stanford-ner-2015-04-20/'
jarfile = stanford_dir + 'stanford-ner.jar'
modelfile = stanford_dir + 'classifiers/english.all.3class.distsim.crf.ser.gz'
st = StanfordNERTagger(model_filename=modelfile, path_to_jar=jarfile)
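A minimal usage sketch follows, assuming the install locations above. Note that the snippet above reuses the name st for both taggers, so in practice you may want separate variables; the example sentence here is arbitrary.
# Sketch only: paths assume the install locations from the answer above;
# separate variable names keep both taggers usable at the same time.
from nltk.tag.stanford import StanfordPOSTagger, StanfordNERTagger

pos_dir = '/home/me/stanford/stanford-postagger-full-2015-04-20/'
ner_dir = '/home/me/stanford/stanford-ner-2015-04-20/'

pos_tagger = StanfordPOSTagger(
    model_filename=pos_dir + 'models/english-bidirectional-distsim.tagger',
    path_to_jar=pos_dir + 'stanford-postagger.jar')
ner_tagger = StanfordNERTagger(
    model_filename=ner_dir + 'classifiers/english.all.3class.distsim.crf.ser.gz',
    path_to_jar=ner_dir + 'stanford-ner.jar')

tokens = 'Stanford University is in California .'.split()
print(pos_tagger.tag(tokens))   # list of (token, POS tag) tuples
print(ner_tagger.tag(tokens))   # list of (token, NE label) tuples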
For details on the NLTK API for the Stanford tools, see: https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software#stanford-tagger-ner-tokenizer-and-parser
Note: the NLTK API above works with the individual Stanford tools. If you are using Stanford CoreNLP instead, it is better to follow @dimazest's instructions at http://www.eecs.qmul.ac.uk/~dm303/stanford-dependency-parser-nltk-and-anaconda.html
As for Spanish NER tagging, I strongly recommend using Stanford CoreNLP (http://nlp.stanford.edu/software/corenlp.shtml) rather than the Stanford NER package (http://nlp.stanford.edu/software/CRF-NER.shtml), and following @dimazest's solution for reading the JSON output.
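For illustration only (this is not @dimazest's exact recipe): newer NLTK versions (3.2.5+) ship a CoreNLPParser client that can talk to a locally running CoreNLP server loaded with the Spanish models. A rough sketch, assuming the server is listening on port 9000:
# Sketch, assuming a CoreNLP server started with the Spanish models, e.g.:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
#       -serverProperties StanfordCoreNLP-spanish.properties -port 9000
from nltk.parse.corenlp import CoreNLPParser

ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
print(list(ner_tagger.tag('Adolfo se la pasa corriendo'.split())))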
Alternatively, if you must use the NER package, you can try following the instructions at https://github.com/alvations/nltk_cli (disclaimer: that repo is not officially affiliated with NLTK). On the Unix command line:
cd $HOME
wget http://nlp.stanford.edu/software/stanford-spanish-corenlp-2015-01-08-models.jar
unzip stanford-spanish-corenlp-2015-01-08-models.jar -d stanford-spanish
cp stanford-spanish/edu/stanford/nlp/models/ner/* /home/me/stanford/stanford-ner-2015-04-20/classifiers/
Then in Python:
# NERTagger
from nltk.tag.stanford import StanfordNERTagger
stanford_dir = '/home/me/stanford/stanford-ner-2015-04-20/'
jarfile = stanford_dir + 'stanford-ner.jar'
modelfile = stanford_dir + 'classifiers/spanish.ancora.distsim.s512.crf.ser.gz'
st = StanfordNERTagger(model_filename=modelfile, path_to_jar=jarfile)
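With that in place, a quick check on the sentence from the question; the output below is only the expected shape (the ancora model uses labels such as PERS, LUG, ORG, OTROS and O), not a guaranteed result.
print(st.tag('Adolfo se la pasa corriendo'.split()))
# Expected shape, something like:
# [('Adolfo', 'PERS'), ('se', 'O'), ('la', 'O'), ('pasa', 'O'), ('corriendo', 'O')]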
Answer 1 (score: 3)
The error is in the arguments passed to StanfordNERTagger. The first argument should be the model file, i.e. the classifier you want to use; you can find that file inside the Stanford zip. For example:
st = StanfordNERTagger('/home/me/stanford/stanford-postagger-full-2015-04-20/classifier/tagger.ser.gz', '/home/me/stanford/stanford-spanish-corenlp-2015-01-08-models.jar')
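For the Spanish setup in the question, that means pointing the first argument at a Spanish classifier rather than at the POS-tagger zip. A sketch, assuming the spanish.ancora classifier has already been extracted from the models jar as described in answer 0 (the paths below are hypothetical):
from nltk.tag import StanfordNERTagger

# Hypothetical paths: the classifier is assumed to have been extracted from
# stanford-spanish-corenlp-2015-01-08-models.jar into the NER classifiers folder.
st = StanfordNERTagger(
    '/home/me/stanford/stanford-ner-2015-04-20/classifiers/spanish.ancora.distsim.s512.crf.ser.gz',
    '/home/me/stanford/stanford-ner-2015-04-20/stanford-ner.jar')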
Answer 2 (score: 0)
In this example I downloaded the tagger into the /content folder:
cd /content
wget https://nlp.stanford.edu/software/stanford-tagger-4.1.0.zip
unzip stanford-tagger-4.1.0.zip
After unzipping, I have a folder stanford-postagger-full-2020-08-06 inside /content, so I can load the tagger with:
from nltk.tag.stanford import StanfordPOSTagger
stanford_dir = '/content/stanford-postagger-full-2020-08-06'
modelfile = f'{stanford_dir}/models/spanish-ud.tagger'
jarfile = f'{stanford_dir}/stanford-postagger.jar'
st = StanfordPOSTagger(model_filename=modelfile, path_to_jar=jarfile)
To check that everything works, we can do:
>st.tag(["Juan","Medina","es","un","ingeniero"])
>[('Juan', 'PROPN'),
('Medina', 'PROPN'),
('es', 'AUX'),
('un', 'DET'),
('ingeniero', 'NOUN')]
In this case, the NER core and the Spanish models must be downloaded separately:
cd /content
#download NER core
wget https://nlp.stanford.edu/software/stanford-ner-4.0.0.zip
unzip stanford-ner-4.0.0.zip
#download spanish models
wget http://nlp.stanford.edu/software/stanford-spanish-corenlp-2018-02-27-models.jar
unzip stanford-spanish-corenlp-2018-02-27-models.jar -d stanford-spanish
#copy only the necessary files
cp stanford-spanish/edu/stanford/nlp/models/ner/* stanford-ner-4.0.0/classifiers/
rm -rf stanford-spanish stanford-ner-4.0.0.zip stanford-spanish-corenlp-2018-02-27-models.jar
To use it in Python:
from nltk.tag.stanford import StanfordNERTagger
stanford_dir = '/content/stanford-ner-4.0.0/'
jarfile = f'{stanford_dir}/stanford-ner.jar'
modelfile = f'{stanford_dir}/classifiers/spanish.ancora.distsim.s512.crf.ser.gz'
st = StanfordNERTagger(model_filename=modelfile, path_to_jar=jarfile)
To check that everything works, we can do:
>st.tag(["Juan","Medina","es","un","ingeniero"])
>[('Juan', 'PERS'),
('Medina', 'PERS'),
('es', 'O'),
('un', 'O'),
('ingeniero', 'O')]
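When tagging many sentences, tag_sents (visible in the traceback above) is preferable to calling tag in a loop, since it starts the Java process only once. A short sketch with arbitrary example sentences:
sentences = [
    ["Juan", "Medina", "es", "un", "ingeniero"],
    ["Stanford", "está", "en", "California"],
]
# One Java invocation for the whole batch instead of one per sentence.
print(st.tag_sents(sentences))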