I am trying to use the Stanford Neural Dependency Parser provided by NLTK. The problem I am running into is that when I call st = nltk.parse.stanford.StanfordNeuralDependencyParser(), I get the following error:
>>> st = nltk.parse.stanford.StanfordNeuralDependencyParser()
Traceback (most recent call last):
  File "C:\Users\<user>\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-5-ca2dec4f3c1f>", line 1, in <module>
    st = nltk.parse.stanford.StanfordNeuralDependencyParser()
  File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 378, in __init__
    super(StanfordNeuralDependencyParser, self).__init__(*args, **kwargs)
  File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 51, in __init__
    key=lambda model_name: re.match(self._JAR, model_name)
  File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\internals.py", line 714, in find_jar_iter
    raise LookupError('\n\n%s\n%s\n%s' % (div, msg, div))
LookupError:
===========================================================================
NLTK was unable to find stanford-corenlp-(\d+)(\.(\d+))+\.jar! Set
the CLASSPATH environment variable.
For more information, on stanford-corenlp-(\d+)(\.(\d+))+\.jar, see:
<http://nlp.stanford.edu/software/lex-parser.shtml>
===========================================================================
However, when I run os.environ.get('CLASSPATH'), I get the result
`C:\nltk_data\;C:\nltk_data\stanford\;C:\nltk_data\stanford\stanford-ner\`
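(For reference, this is roughly how I would expect to be able to extend that CLASSPATH from inside Python before constructing the parser; it is only a sketch, and the jar name in the comment is my assumption based on the pattern in the error message, not a file I am sure NLTK wants:)

import os
import nltk

# Sketch only: make sure the directory that should contain a jar matching
# stanford-corenlp-(\d+)(\.(\d+))+\.jar (e.g. stanford-corenlp-3.6.0.jar, an assumed name)
# is on the CLASSPATH before the parser is constructed.
corenlp_dir = r'C:\nltk_data\stanford'
os.environ['CLASSPATH'] = corenlp_dir + os.pathsep + os.environ.get('CLASSPATH', '')

st = nltk.parse.stanford.StanfordNeuralDependencyParser()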
I know I have the corenlp jar files in C:\nltk_data\stanford\, so I ran the following and ended up with a slightly different error.
>>> st = nltk.parse.stanford.StanfordNeuralDependencyParser('C:\\nltk_data\\stanford\\')
Traceback (most recent call last):
  File "C:\Users\<user>\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-22-28d797d702d9>", line 1, in <module>
    st = StanfordNeuralDependencyParser('C:\\nltk_data\\stanford\\')
  File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 378, in __init__
    super(StanfordNeuralDependencyParser, self).__init__(*args, **kwargs)
  File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 51, in __init__
    key=lambda model_name: re.match(self._JAR, model_name)
  File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\internals.py", line 635, in find_jar_iter
    (name_pattern, path_to_jar))
LookupError: Could not find stanford-corenlp-(\d+)(\.(\d+))+\.jar jar file at C:\nltk_data\stanford\
I downloaded the jar stanford-english-corenlp-2016-01-10-models.jar from the Stanford NLP website and renamed it to stanford-corenlp-2016-01-10.jar to try to match the pattern, but I still get the same error. I also downloaded version 3.6.0 of the Stanford Parser, but it does not contain any corenlp files.
Is there a way to get this working, or am I misunderstanding something?
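(In case the constructor arguments matter, this is the explicit form I would expect to work, assuming the path_to_jar / path_to_models_jar keyword arguments from the base class in nltk.parse.stanford, and assuming the file names below are the right ones to have on disk:)

import nltk

# Sketch only: point NLTK at the jars explicitly instead of relying on CLASSPATH.
# Both file names are assumptions for illustration; path_to_jar should at least
# match the stanford-corenlp-(\d+)(\.(\d+))+\.jar pattern from the error above.
st = nltk.parse.stanford.StanfordNeuralDependencyParser(
    path_to_jar=r'C:\nltk_data\stanford\stanford-corenlp-3.6.0.jar',
    path_to_models_jar=r'C:\nltk_data\stanford\stanford-english-corenlp-2016-01-10-models.jar',
)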