I have been trying out the CoreNLP coreference resolution system. It works as described in the tutorial. Here is the code:
import java.util.Properties;

// coref classes live in edu.stanford.nlp.coref in recent CoreNLP releases
import edu.stanford.nlp.coref.CorefCoreAnnotations;
import edu.stanford.nlp.coref.data.CorefChain;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public static void main(String[] args) throws Exception {
    Annotation document = new Annotation("Barack Obama was born in Hawaii. He is the president. Obama was elected in 2008.");
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,parse,mention,coref");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    pipeline.annotate(document);
    System.out.println("---");
    System.out.println("coref chains");
    for (CorefChain cc : document.get(CorefCoreAnnotations.CorefChainAnnotation.class).values()) {
        System.out.println("\t" + cc);
    }
}
Output:
CHAIN3-["Barack Obama" in sentence 1, "He" in sentence 1]
What I would like instead is a map showing:

Key | Value
He : Barack Obama
Obama : Barack Obama

Is there a built-in way to get this, or do I have to post-process the output myself (it does not have to be exactly a map)?
Answer (score: 1)
There is no real built-in for this at the moment. Here is a snippet that will print out each mention's text, its position information, and the representative (canonical) mention of its chain:
for (CorefChain cc : document.get(CorefCoreAnnotations.CorefChainAnnotation.class).values()) {
    // the representative mention is the "canonical" mention of the chain, e.g. "Barack Obama"
    CorefChain.CorefMention representativeMention = cc.getRepresentativeMention();
    for (CorefChain.CorefMention cm : cc.getMentionsInTextualOrder()) {
        String position = "sentence num: " + cm.sentNum + " position: " + cm.startIndex;
        System.out.println(cm.mentionSpan + "\t" + position + "\t" + representativeMention.mentionSpan);
    }
}
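If you do want the Key/Value map from the question, a short post-processing pass over the chains is enough. The sketch below is only illustrative (the name mentionToRepresentative and the choice of a LinkedHashMap are mine, not part of the CoreNLP API); it assumes the same annotated document as above and needs java.util.Map and java.util.LinkedHashMap imported:

// Map each mention's text to the text of its chain's representative mention.
Map<String, String> mentionToRepresentative = new LinkedHashMap<>();
for (CorefChain cc : document.get(CorefCoreAnnotations.CorefChainAnnotation.class).values()) {
    String representative = cc.getRepresentativeMention().mentionSpan;
    for (CorefChain.CorefMention cm : cc.getMentionsInTextualOrder()) {
        if (!cm.mentionSpan.equals(representative)) {
            mentionToRepresentative.put(cm.mentionSpan, representative); // e.g. "He" -> "Barack Obama"
        }
    }
}
mentionToRepresentative.forEach((k, v) -> System.out.println(k + " : " + v));

Note that keying the map on the surface string collapses repeated occurrences of the same word; if you need one entry per occurrence, key it on the position fields (cm.sentNum, cm.startIndex) instead.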