How do I get a GrammaticalStructure object for a German sentence with the Stanford Parser?

Asked: 2015-08-12 21:41:12

Tags: java stanford-nlp

I am using the Stanford Parser (version 3.5.2) in an NLP application that combines dependency-parse information with information from other sources. So far I have used it for English, like this:

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import edu.stanford.nlp.ling.HasWord;
import edu.stanford.nlp.ling.TaggedWord;
import edu.stanford.nlp.process.Tokenizer;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.trees.GrammaticalStructure;
import edu.stanford.nlp.trees.GrammaticalStructureFactory;
import edu.stanford.nlp.trees.TreebankLanguagePack;
import edu.stanford.nlp.trees.TypedDependency;


/**
* Stanford Parser Wrapper (for Stanford Parser Version 3.5.2).
* 
*/

public class StanfordParserWrapper {

    public static void parse(String en, String align, String out) {

        // set up the Stanford parser
        String grammar = "edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz";
        String[] options = { "-outputFormat", "wordsAndTags, typedDependencies" };
        LexicalizedParser lp = LexicalizedParser.loadModel(grammar, options);
        TreebankLanguagePack tlp = lp.getOp().langpack();
        GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();

        // read the document
        Iterable<List<? extends HasWord>> sentences;
        Reader r = new Reader(en);
        String line = null;
        List<List<? extends HasWord>> tmp = new ArrayList<List<? extends HasWord>>();
        while ((line = r.getNext()) != null) {
            Tokenizer<? extends HasWord> token = tlp.getTokenizerFactory()
                .getTokenizer(new StringReader(line));
            List<? extends HasWord> sentence = token.tokenize();
            tmp.add(sentence);
        }
        sentences = tmp;

        Reader alignment = new Reader(align);
        Writer treeWriter = new Writer(out);

        // parse
        long start = System.currentTimeMillis();
        // System.err.print("Parsing sentences ");
        int sentID = 0;
        for (List<? extends HasWord> sentence : sentences) {
            Tree t = new Tree();
            t.setSentID(++sentID);
            System.out.println("parse Sentence " + t.getSentID() + " "
                + sentence + "...");
            // System.err.print(".");

            edu.stanford.nlp.trees.Tree parse = lp.parse(sentence);

            // ROOT node
            Node root = new Node(true, true);
            t.setNode(root);

            // tagging
            int counter = 0;
            for (TaggedWord tw : parse.taggedYield()) {
                Node n = new Node();
                n.setNodeID(++counter);
                n.setSurface(tw.value());
                n.setTag(tw.tag());
                t.setNode(n);
            }

            t.setSentLength(t.getNodes().size() - 1);

            // labeling
            GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
            List<TypedDependency> tdl = gs.typedDependenciesCCprocessed();
            for (TypedDependency td : tdl) {
                Node dep = t.getNodes().get(td.dep().index());
                Node gov = t.getNodes().get(td.gov().index());
                dep.setLabel(td.reln().toString());
                gov.setChild(dep);
                dep.setParent(gov);
            }

            // combine with alignment
            t.initialize(alignment.readNextAlign());
            treeWriter.write(t);
        }
        long stop = System.currentTimeMillis();
        System.err.println("...done! [" + (stop - start) / 1000 + " sec].");

        treeWriter.close();
    }

    public static void main(String[] args) {
        if (args.length == 3) {
            parse(args[0], args[1], args[2]);
        } else {
            System.out.println("Usage: StanfordParserWrapper <input> <alignment> <output>");
        }
    }
}

"Node" and "Tree" are my own classes, not Stanford Parser classes.
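To make the index bookkeeping in the labeling loop concrete: the indices returned by td.gov().index() and td.dep().index() start at 1, with 0 reserved for the ROOT, which is why a ROOT node is added to the tree before the tagged words. A minimal self-contained sketch of that convention, using a stripped-down hypothetical stand-in for the Node class (not the actual classes above):

```java
import java.util.ArrayList;
import java.util.List;

public class DepIndexDemo {

    // Hypothetical, stripped-down stand-in for the asker's own Node class.
    static class Node {
        final String surface;
        Node parent;
        String label;
        Node(String surface) { this.surface = surface; }
    }

    // Build a node list where position 0 is ROOT and word i sits at index i,
    // then attach each dependent to its governor via (gov, dep, relation) triples.
    static List<Node> attach(String[] words, Object[][] deps) {
        List<Node> nodes = new ArrayList<>();
        nodes.add(new Node("ROOT"));                   // index 0 = ROOT
        for (String w : words) nodes.add(new Node(w)); // words start at index 1
        for (Object[] d : deps) {
            Node gov = nodes.get((Integer) d[0]);
            Node dep = nodes.get((Integer) d[1]);
            dep.label = (String) d[2];
            dep.parent = gov;
        }
        return nodes;
    }

    public static void main(String[] args) {
        // Made-up dependencies for illustration, not actual parser output.
        String[] words = { "The", "house", "is", "big" };
        Object[][] deps = { { 0, 3, "root" }, { 3, 2, "nsubj" },
                            { 2, 1, "det" }, { 3, 4, "acomp" } };
        for (Node n : attach(words, deps)) {
            System.out.println(n.surface + " <-" + (n.label == null ? "" : n.label)
                + "- " + (n.parent == null ? "-" : n.parent.surface));
        }
    }
}
```

If the ROOT node were not stored at position 0, every t.getNodes().get(...) lookup in the labeling loop would be off by one.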

My question is: how can I do the same for German? When I replace the English grammar model with "edu/stanford/nlp/models/lexparser/germanPCFG.ser.gz", I get the following exception:

Exception in thread "main" java.lang.UnsupportedOperationException: No GrammaticalStructureFactory defined for edu.stanford.nlp.trees.international.negra.NegraPennLanguagePack
at edu.stanford.nlp.trees.AbstractTreebankLanguagePack.grammaticalStructureFactory(AbstractTreebankLanguagePack.java:591)
at StanfordParserWrapper.parse(StanfordParserWrapper.java:46)
at StanfordParserWrapper.main(StanfordParserWrapper.java:117)

The same happens with the "germanFactored" model. Apparently I need to do something different here, because the German models do not support GrammaticalStructureFactory. Is there a way to obtain a GrammaticalStructure from German text, or do I have to write the German parsing code in a completely different way? If so, I would appreciate some pointers; I have searched a lot for this information but could not find what I was looking for.

This seems related: How to parse languages other than English with Stanford Parser? in java, not command lines. However, it only tells me that the Chinese models support GrammaticalStructureFactory, not what I need to do for German parsing.

Many thanks,

J

1 answer:

Answer 0 (score: 2)

You don't. The Stanford Parser does not support dependency analysis for German (which is what you get from the GrammaticalStructureFactory).

You could try another dependency parser. While Stanford uses a rule-based conversion of constituency trees into dependency trees, the alternatives are typically probabilistic.

  • mate-tools offers dependency parsing and comes with a German model
  • You can roll your own with MaltParser (I believe there is a version of the Tüba-D/Z corpus that is compatible with MaltParser)
  • Or you can look at ParZu (but be aware that it is written in Prolog)
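If you go the MaltParser route, the Java side mainly consists of feeding it CoNLL-formatted, pre-tagged tokens. Below is a sketch of building that input; the helper class and the model file name are hypothetical, and the actual parse call (shown only in comments) would require maltparser.jar and a trained German model:

```java
import java.util.ArrayList;
import java.util.List;

public class MaltInputBuilder {

    // Build CoNLL-style token lines (ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS)
    // from (form, POS tag) pairs; unknown columns are left as "_".
    static String[] toConllTokens(List<String[]> taggedWords) {
        String[] tokens = new String[taggedWords.size()];
        for (int i = 0; i < taggedWords.size(); i++) {
            String form = taggedWords.get(i)[0];
            String pos = taggedWords.get(i)[1];
            tokens[i] = (i + 1) + "\t" + form + "\t_\t" + pos + "\t" + pos + "\t_";
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<String[]> sentence = new ArrayList<>();
        sentence.add(new String[] { "Das", "ART" });
        sentence.add(new String[] { "Haus", "NN" });
        sentence.add(new String[] { "ist", "VAFIN" });
        sentence.add(new String[] { "groß", "ADJD" });
        for (String line : toConllTokens(sentence)) {
            System.out.println(line);
        }
        // With maltparser.jar on the classpath and a trained German .mco model
        // (hypothetical file name), parsing would then look roughly like:
        //   ConcurrentMaltParserModel model = ConcurrentMaltParserService
        //       .initializeParserModel(new File("german.mco").toURI().toURL());
        //   String[] parsed = model.parseTokens(toConllTokens(sentence));
    }
}
```

Each output line of the parse carries the HEAD and DEPREL columns, from which you could fill your Node parent/label fields much as the GrammaticalStructure loop does for English.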