I am trying to use the Stanford dependency parser edu/stanford/nlp/models/parser/nndep/CTB_CoNLL_params.txt.gz
to parse Chinese data into CoNLL format, but I seem to be running into some encoding trouble.
My input file is UTF-8 and has already been segmented into words; one sentence looks like this: 那时的坎纳里鲁夫,有着西海岸最大的工业化罐头工厂。
The command I use to run the model is:
java -mx2200m -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLP \
-language Chinese \
-encoding utf-8 \
-props StanfordCoreNLP-chinese.properties \
-annotators tokenize,ssplit,pos,depparse \
-file ./ChineseCorpus/ChineseTestSegmented.txt \
-outputFormat conll
Everything seems to run fine except that the Chinese characters are not encoded correctly. This is the output I get:
1 ?? _ NT _ 2 DEP
2 ? _ DEG _ 4 NMOD
3 ??? _ NR _ 4 NMOD
4 ?? _ NR _ 6 SUB
5 ? _ PU _ 6 P
6 ?? _ VE _ 0 ROOT
7 ??? _ NN _ 12 NMOD
8 ?? _ JJ _ 9 DEP
9 ? _ DEG _ 12 NMOD
10 ??? _ NN _ 12 NMOD
11 ?? _ NN _ 12 NMOD
12 ?? _ NN _ 6 OBJ
13 ? _ PU _ 6 P
According to the Stanford parser FAQ, the standard encoding for Chinese is GB18030, but it also says "However, the parser is able to parse text in any encoding, as long as the correct encoding option is passed on the command line," which I did.
I have looked at this question: How to use Stanford LexParser for Chinese text? but their solution using iconv does not work for me; I get the error cannot convert,
and I have been trying several possible combinations of encodings.
Does anyone have a suggestion as to what is going wrong?
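(As a first diagnostic, a minimal sketch of my own, not from the original post: since iconv reported "cannot convert", it is worth checking that the input file really is valid UTF-8 before blaming the parser. The file name below matches the command above; the helper function is hypothetical.)

```python
# Check whether a file's bytes decode cleanly as UTF-8.
# A UnicodeDecodeError here would explain iconv's "cannot convert" error.
def is_valid_utf8(path):
    try:
        with open(path, "rb") as f:
            f.read().decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Example: write the segmented test sentence out as UTF-8 and verify it.
with open("ChineseTestSegmented.txt", "wb") as f:
    f.write("那时 的 坎纳里鲁夫 , 有着 西海岸 最大 的 工业化 罐头 工厂 。".encode("utf-8"))

print(is_valid_utf8("ChineseTestSegmented.txt"))  # True
```

If this prints False, the file is not UTF-8 to begin with, and the -encoding flag cannot fix that.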
Answer 0 (score: 0)
Try something like:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLP \
-language Chinese -props StanfordCoreNLP-chinese.properties \
-annotators segment,ssplit,pos,parse -file chinese-in.txt -outputFormat conll
E.g.:
alvas@ubi:~/stanford-corenlp-full-2015-12-09$ cat chinese-in.txt
那时的坎纳里鲁夫,有着西海岸最大的工业化罐头工厂。
alvas@ubi:~/jose-stanford/stanford-corenlp-full-2015-12-09$ \
> java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLP \
> -language Chinese -props StanfordCoreNLP-chinese.properties \
> -annotators segment,ssplit,pos,parse -file chinese-in.txt -outputFormat conll
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Registering annotator segment with class edu.stanford.nlp.pipeline.ChineseSegmenterAnnotator
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator segment
Loading classifier from edu/stanford/nlp/models/segmenter/chinese/ctb.gz ... [main] INFO edu.stanford.nlp.wordseg.ChineseDictionary - Loading Chinese dictionaries from 1 file:
[main] INFO edu.stanford.nlp.wordseg.ChineseDictionary - edu/stanford/nlp/models/segmenter/chinese/dict-chris6.ser.gz
[main] INFO edu.stanford.nlp.wordseg.ChineseDictionary - Done. Unique words in ChineseDictionary is: 423200.
done [14.4 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/chinese-distsim/chinese-distsim.tagger ... done [1.4 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[main] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/chineseFactored.ser.gz ...
done [5.2 sec].
Processing file /home/alvas/jose-stanford/stanford-corenlp-full-2015-12-09/chinese-in.txt ... writing to /home/alvas/jose-stanford/stanford-corenlp-full-2015-12-09/chinese-in.txt.conll
Annotating file /home/alvas/jose-stanford/stanford-corenlp-full-2015-12-09/chinese-in.txt
[main] INFO edu.stanford.nlp.wordseg.TagAffixDetector - INFO: TagAffixDetector: useChPos=false | useCTBChar2=true | usePKChar2=false
[main] INFO edu.stanford.nlp.wordseg.TagAffixDetector - INFO: TagAffixDetector: building TagAffixDetector from edu/stanford/nlp/models/segmenter/chinese/dict/character_list and edu/stanford/nlp/models/segmenter/chinese/dict/in.ctb
[main] INFO edu.stanford.nlp.wordseg.CorpusChar - Loading character dictionary file from edu/stanford/nlp/models/segmenter/chinese/dict/character_list
[main] INFO edu.stanford.nlp.wordseg.affDict - Loading affix dictionary from edu/stanford/nlp/models/segmenter/chinese/dict/in.ctb
done.
Annotation pipeline timing information:
ChineseSegmenterAnnotator: 0.2 sec.
WordsToSentencesAnnotator: 0.0 sec.
POSTaggerAnnotator: 0.0 sec.
ParserAnnotator: 0.9 sec.
TOTAL: 1.2 sec. for 13 tokens at 11.0 tokens/sec.
Pipeline setup: 21.1 sec.
Total time for StanfordCoreNLP pipeline: 22.3 sec.
[out]:
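(An aside of my own, not part of the original answer: once the encoding issue is resolved, the CoNLL output follows the 7-column layout shown in the question — index, form, lemma, POS, features, head, relation. A few lines of Python can read it back; the column names and helper are my assumptions.)

```python
# Parse CoNLL lines in the 7-column layout seen in the question:
# INDEX FORM LEMMA POS FEATS HEAD DEPREL
def parse_conll(text):
    rows = []
    for line in text.strip().splitlines():
        idx, form, lemma, pos, feats, head, rel = line.split()
        rows.append({"index": int(idx), "form": form, "pos": pos,
                     "head": int(head), "rel": rel})
    return rows

sample = """1 那时 _ NT _ 2 DEP
2 的 _ DEG _ 4 NMOD"""
rows = parse_conll(sample)
print(rows[0]["rel"])  # DEP
```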