About the Stanford Word Segmenter

Date: 2017-08-13 17:38:21

Tags: python-3.x nltk stanford-nlp

I recently tried to use the Stanford word segmenter in Python to process Chinese data, but I ran into problems when I ran the segmenter. This is the code I entered in Python:

from nltk.tokenize.stanford_segmenter import StanfordSegmenter

segmenter = StanfordSegmenter(path_to_jar='/Applications/Python 3.6/stanford-segmenter/stanford-segmenter.jar',
                              path_to_slf4j='/Applications/Python 3.6/stanford-segmenter/slf4j-api-1.7.25.jar',
                              path_to_sihan_corpora_dict='/Applications/Python 3.6/stanford-segmenter/data',
                              path_to_model='/Applications/Python 3.6/stanford-segmenter/data/pku.gz',
                              path_to_dict='/Applications/Python 3.6/stanford-segmenter/data/dict-chris6.ser.gz')

Loading seemed to go fine, since I did not get any warnings. But when I tried to segment the Chinese words in a sentence, the segmenter did not work:

sentence = u'这是斯坦福中文分词器测试'
segmenter.segment(sentence)

Exception in thread "main" java.lang.UnsupportedClassVersionError: edu/stanford/nlp/ie/crf/CRFClassifier : Unsupported major.minor version 52.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClassCond(ClassLoader.java:637)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

Traceback (most recent call last):
  File "<pyshell#21>", line 1, in <module>
    segmenter.segment(sentence)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nltk/tokenize/stanford_segmenter.py", line 96, in segment
    return self.segment_sents([tokens])
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nltk/tokenize/stanford_segmenter.py", line 123, in segment_sents
    stdout = self._execute(cmd)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nltk/tokenize/stanford_segmenter.py", line 143, in _execute
    cmd, classpath=self._stanford_jar, stdout=PIPE, stderr=PIPE)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nltk/internals.py", line 134, in java
    raise OSError('Java command failed : ' + str(cmd))
OSError: Java command failed : ['/usr/bin/java', '-mx2g', '-cp', '/Applications/Python 3.6/stanford-segmenter/stanford-segmenter.jar:/Applications/Python 3.6/stanford-segmenter/slf4j-api-1.7.25.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-sighanCorporaDict', '/Applications/Python 3.6/stanford-segmenter/data', '-textFile', '/var/folders/j3/52_wq50j75jfk5ybg6krlw_w0000gn/T/tmpz6dqv1yf', '-sighanPostProcessing', 'true', '-keepAllWhitespaces', 'false', '-loadClassifier', '/Applications/Python 3.6/stanford-segmenter/data/pku.gz', '-serDictionary', '/Applications/Python 3.6/stanford-segmenter/data/dict-chris6.ser.gz', '-inputEncoding', 'UTF-8']

I am using Python 3.6.2 on macOS. I wonder whether I missed any necessary steps. Could anyone share their experience with solving this problem? Thank you very much.

1 Answer:

Answer 0 (score: 0):

TL;DR

Hang on for a while and wait for NLTK v3.2.5, which will have a much simpler interface to the Stanford tokenizers, standardized across different languages.

The StanfordSegmenter and StanfordTokenizer classes will be deprecated in v3.2.5.
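As an aside, the UnsupportedClassVersionError in your output is informative: major.minor version 52.0 corresponds to Java 8, so the segmenter jar was compiled for a newer Java than the one at /usr/bin/java. The CoreNLP server below also requires Java 8, so first check what you are running:

java -version

If this reports 1.7 or lower, install a Java 8 (or newer) runtime before continuing.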

First, upgrade your nltk version:

pip install -U nltk
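
To confirm the upgrade took effect, print the installed version:

python -c "import nltk; print(nltk.__version__)"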

Download and start the Stanford CoreNLP server:

wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
wget http://nlp.stanford.edu/software/stanford-chinese-corenlp-2016-10-31-models.jar
wget https://raw.githubusercontent.com/stanfordnlp/CoreNLP/master/src/edu/stanford/nlp/pipeline/StanfordCoreNLP-chinese.properties 

java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-serverProperties StanfordCoreNLP-chinese.properties \
-preload tokenize,ssplit,pos,lemma,ner,parse \
-status_port 9001  -port 9001 -timeout 15000
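
Before returning to Python, you can verify that the server is responding by posting a test sentence directly (the annotators here mirror the ones used below; JSON output is assumed):

wget --post-data '这是测试。' \
  'http://localhost:9001/?properties={"annotators":"tokenize,ssplit","outputFormat":"json"}' -O -

A JSON document listing the segmented tokens means the server is ready.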

Then, in NLTK v3.2.5:

>>> from nltk.tokenize.stanford import CoreNLPTokenizer
>>> sttok = CoreNLPTokenizer('http://localhost:9001')
>>> sttok.tokenize(u'我家没有电脑。')
['我家', '没有', '电脑', '。']

In the meantime, if your NLTK version is v3.2.4, you can try this:

from nltk.parse.corenlp import CoreNLPParser

# Talk to the CoreNLP server started above and request only tokenization
# and sentence splitting.
corenlp_parser = CoreNLPParser('http://localhost:9001', encoding='utf8')
text = u'我家没有电脑。'
result = corenlp_parser.api_call(text, {'annotators': 'tokenize,ssplit'})
tokens = [token['originalText'] or token['word'] for sentence in result['sentences'] for token in sentence['tokens']]
print(tokens)

[OUT]:

['我家', '没有', '电脑', '。']
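
If you need this in more than one place, the workaround folds naturally into a small helper. This is just a sketch (the name segment_chinese is mine), assuming the server from above is still listening on port 9001:

from nltk.parse.corenlp import CoreNLPParser

def segment_chinese(text, url='http://localhost:9001'):
    # Ask the CoreNLP server to tokenize and sentence-split the text,
    # then flatten the per-sentence token lists into one list of words.
    parser = CoreNLPParser(url, encoding='utf8')
    result = parser.api_call(text, {'annotators': 'tokenize,ssplit'})
    return [token['originalText'] or token['word']
            for sentence in result['sentences']
            for token in sentence['tokens']]

print(segment_chinese(u'这是斯坦福中文分词器测试'))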