Dependency parsing in Python

Date: 2018-08-25 20:33:59

Tags: nlp stanford-nlp

I am new to dependency parsing, so this error may be easy to fix. I am trying to run a dependency parse on a sentence in order to find the focus word. I found the following link:

How do I do dependency parsing in NLTK?

If I try the approach from the link above, I get:

Multiple representations of the same entity [com.tdk.backend.persistence.domain.backend.Role#1] are being merged. Detached: [com.tdk.backend.persistence.domain.backend.Role@5295d3de]; Detached: [com.tdk.backend.persistence.domain.backend.Role@2b3d9d32]

The second link I tried is:

https://github.com/Lynten/stanford-corenlp

Here is my implementation:

from pycorenlp import StanfordCoreNLP

nlp = StanfordCoreNLP('http://localhost:9000')
sentence = 'I shot an elephant in my sleep'
print 'Dependency Parsing:', nlp.dependency_parse(sentence)
nlp.close()

But I get:

LookupError: 

===========================================================================
NLTK was unable to find the java file!
Use software specific configuration paramaters or set the JAVAHOME environment variable.
===========================================================================
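One possible remedy for the LookupError quoted above: the message itself says NLTK looks for Java via the JAVAHOME environment variable, so it can be set from Python before constructing the parser. A minimal sketch; the JVM path below is a hypothetical example, not taken from the question:

```python
import os

# Point NLTK at the Java installation before building the parser.
# The path is a hypothetical example -- substitute your actual JVM
# directory (e.g. the value of `java.home` on your system).
os.environ['JAVAHOME'] = '/usr/lib/jvm/java-8-openjdk-amd64'

print(os.environ['JAVAHOME'])
```

Setting the variable in the shell profile instead (e.g. `export JAVAHOME=...`) achieves the same thing without touching the script.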

If there are other, easier ways to do this, I would welcome those as well. Thanks.
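On the stated goal of finding the focus of a sentence: once a dependency parse succeeds, the focus is commonly taken to be the root word. A minimal, self-contained sketch, assuming the parser returns (relation, head, dependent) triples with 1-based token indices and head 0 marking the root (the format stanfordcorenlp's dependency_parse is documented to use); the token list and triples below are hand-written for illustration, not real parser output:

```python
def root_word(tokens, triples):
    """Return the token governed directly by ROOT (head index 0)."""
    for relation, head, dependent in triples:
        if head == 0:  # the root token has no governor
            return tokens[dependent - 1]  # token indices are 1-based
    return None

# Hand-written example parse of the sentence from the question.
tokens = ['I', 'shot', 'an', 'elephant', 'in', 'my', 'sleep']
triples = [
    ('ROOT', 0, 2),       # 'shot' is the sentence root
    ('nsubj', 2, 1),      # I <- shot
    ('det', 4, 3),        # an <- elephant
    ('dobj', 2, 4),       # elephant <- shot
    ('case', 7, 5),       # in <- sleep
    ('nmod:poss', 7, 6),  # my <- sleep
    ('nmod', 2, 7),       # sleep <- shot
]

print(root_word(tokens, triples))  # -> shot
```

If a parser emits a different structure (e.g. CoreNLP's JSON `basicDependencies` edges), the same idea applies: select the edge whose governor is the artificial ROOT node.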

0 Answers:

No answers yet