I have a text file that contains a set of document IDs and document contents, separated by "::". Here is an example:
139::This is a sentence in document 139. This is another sentence.
140::This is a sentence in document 140. This is another sentence.
I want to run some named entity recognition over these sentences with StanfordCoreNLP. This has always worked fine as a traditional Java program. Now I want to do the same thing with MapReduce. I try to load the StanfordCoreNLP classifiers in my mapper's setup() method, and the map() method does the named entity tagging, like this:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class NerMapper extends Mapper<LongWritable, Text, Text, Text> {

    private StanfordCoreNLP pipeline;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        super.setup(context);
        // build the CoreNLP pipeline once per mapper so the models are only loaded once
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref, relation");
        pipeline = new StanfordCoreNLP(props);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // each input line looks like <documentID>::<document text>
        String[] input = value.toString().split("::");
        List<DataTuple> dataTuples = new ArrayList<DataTuple>();  // DataTuple is a small helper class of my own
        Annotation annotation = new Annotation(input[1]);
        pipeline.annotate(annotation);
        List<CoreMap> sentences = annotation.get(SentencesAnnotation.class);
        for (CoreMap sentence : sentences) {
            // extract named entities
            // write <documentID>::<the named entity itself>::<the named entity tag>
        }
    }
}
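The commented part of the loop is nothing special; it is roughly the sketch below (simplified: it treats every token whose NER tag is not "O" as a named entity, does not group multi-token entities, and needs additional imports for CoreLabel and the TokensAnnotation, TextAnnotation and NamedEntityTagAnnotation keys from edu.stanford.nlp.ling):

for (CoreMap sentence : sentences) {
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        String word = token.get(TextAnnotation.class);
        String nerTag = token.get(NamedEntityTagAnnotation.class);
        if (nerTag != null && !"O".equals(nerTag)) {
            // emit <documentID> -> <the named entity itself>::<the named entity tag>
            context.write(new Text(input[0]), new Text(word + "::" + nerTag));
        }
    }
}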
When running the job, it fails with a "GC overhead limit exceeded" error. I tried different heap sizes via export HADOOP_OPTS="-Xmx892m" before running the job, and I included the StanfordCoreNLP dependencies using the -libjars option of the hadoop jar command. The input documents usually contain only 4-5 sentences of normal size. I know the problem lies in the initialization of the classifiers in the setup() method, but I cannot figure out what exactly is going wrong. I would really appreciate any help here!
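In case the job setup matters: the driver is a plain ToolRunner-based driver, roughly along the lines of the sketch below (the class name NerDriver and the argument handling are simplified placeholders, not my exact code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class NerDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already holds whatever -libjars / -D options were passed on the command line
        Job job = Job.getInstance(getConf(), "corenlp ner");
        job.setJarByClass(NerDriver.class);
        job.setMapperClass(NerMapper.class);
        job.setNumReduceTasks(0);            // map-only job
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new NerDriver(), args));
    }
}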
I am using Hadoop 2.6.0, Stanford CoreNLP 3.4.1 and Java 1.7.