How do I train an Italian-language model with OpenNLP on Hadoop?

Asked: 2015-05-29 17:42:30

Tags: java hadoop nlp opennlp linguistics

I want to implement natural language processing algorithms for Italian on Hadoop.

I have two questions:

  1. How do I find a stemming algorithm for Italian?
  2. How do I integrate this with Hadoop?

Here is my code so far:

    // Paths to the POS and chunker training data (elided).
    String pathSent = ...tagged sentences...;
    String pathChunk = ....chunked train path....;
    File fileSent = new File(pathSent);
    File fileChunk = new File(pathChunk);
    InputStream inSent = null;
    InputStream inChunk = null;

    inSent = new FileInputStream(fileSent);
    inChunk = new FileInputStream(fileChunk);

    // Train an Italian POS tagger from sentences of word_TAG tokens.
    POSModel posModel = POSTaggerME.train("it",
            new WordTagSampleStream(new InputStreamReader(inSent)),
            ModelType.MAXENT, null, null, 3, 3);

    // Train an Italian chunker from chunking training data.
    ObjectStream<String> stringStream =
            new PlainTextByLineStream(new InputStreamReader(inChunk));
    ObjectStream<ChunkSample> chunkStream = new ChunkSampleStream(stringStream);
    ChunkerModel chunkModel = ChunkerME.train("it", chunkStream, 1, 1);

    this.tagger = new POSTaggerME(posModel);
    this.chunker = new ChunkerME(chunkModel);

    inSent.close();
    inChunk.close();
    

1 Answer:

Answer 0 (score: 0):

You need grammatically tagged sentences, for example:

"io voglio andare a casa"

io, sostantivo
volere, verbo
andare, verbo
a, preposizione semplice
casa, oggetto

Once you have tagged sentences like this, you can train OpenNLP on them.
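For reference, the WordTagSampleStream used in the question's code reads one sentence per line, with each token written as word_TAG. Below is a minimal sketch of parsing one such line, assuming OpenNLP 1.5.x; the tag names are only illustrative placeholders, not a specific Italian tagset.

import java.io.StringReader;

import opennlp.tools.postag.POSSample;
import opennlp.tools.postag.WordTagSampleStream;
import opennlp.tools.util.ObjectStream;

public class WordTagFormatDemo {
  public static void main(String[] args) throws Exception {
    // One sentence per line; each token is written as word_TAG.
    String line = "io_PRON voglio_VERB andare_VERB a_PREP casa_NOUN";
    ObjectStream<POSSample> samples =
        new WordTagSampleStream(new StringReader(line));
    POSSample sample = samples.read();
    System.out.println(sample); // prints the parsed word/tag pairs
    samples.close();
  }
}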

Create a custom Mapper on Hadoop:

public class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {

    //your code here
  }
}
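The map() body above is where OpenNLP would actually run. As a rough sketch that is not part of the original answer: load the trained POS model once in setup() and emit one count per tag for each tagged input line. The class name PosTagMapper and the model file name it-pos-maxent.bin are placeholders, and the model file is assumed to already be present in the task's working directory (for example via the distributed cache, see the note after the driver below).

import java.io.FileInputStream;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import opennlp.tools.postag.POSModel;
import opennlp.tools.postag.POSTaggerME;
import opennlp.tools.tokenize.WhitespaceTokenizer;

public class PosTagMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private POSTaggerME tagger;

  @Override
  protected void setup(Context context) throws IOException {
    // Load the trained Italian POS model once per task, not once per record.
    // "it-pos-maxent.bin" is a placeholder file name.
    try (FileInputStream in = new FileInputStream("it-pos-maxent.bin")) {
      tagger = new POSTaggerME(new POSModel(in));
    }
  }

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // One plain-text Italian sentence per input line; whitespace
    // tokenization is a simplification.
    String[] tokens = WhitespaceTokenizer.INSTANCE.tokenize(value.toString());
    String[] tags = tagger.tag(tokens);
    for (String tag : tags) {
      context.write(new Text(tag), one); // emit (tag, 1) pairs
    }
  }
}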

Create a custom Reducer on Hadoop:

public class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {

    // your reduce here
  }
}
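If the mapper emits (tag, 1) pairs as in the sketch above, the reduce step is the usual summation. Again this is only a sketch under that assumption; the class name TagCountReducer is a placeholder.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class TagCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable value : values) {
      sum += value.get();
    }
    context.write(key, new IntWritable(sum)); // total occurrences of this tag
  }
}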

Configure both in the job driver:

public static void main(String[] args)
                      throws Exception {
  Configuration conf = new Configuration();

  Job job = new Job(conf, "opennlp");
  job.setJarByClass(CustomOpenNLP.class);

  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);

  job.setMapperClass(Map.class);
  job.setReducerClass(Reduce.class);

  job.setInputFormatClass(TextInputFormat.class);
  job.setOutputFormatClass(TextOutputFormat.class);

  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));

  job.waitForCompletion(true);
}
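One detail the skeleton leaves open is how each map task gets the trained model file. This is not covered in the original answer, but a common approach is to put the serialized model on HDFS and ship it through the job's distributed cache (Hadoop 2.x API); the helper name, HDFS path, and file name below are hypothetical placeholders.

import java.net.URI;

import org.apache.hadoop.mapreduce.Job;

public class ModelDistribution {
  // Ships the trained model from HDFS to every task's working directory.
  // Call this in main() before job.waitForCompletion(true).
  static void addItalianPosModel(Job job) throws Exception {
    // The "#it-pos-maxent.bin" fragment creates a symlink with that name in the
    // task's working directory, so a mapper's setup() can simply open
    // new FileInputStream("it-pos-maxent.bin").
    job.addCacheFile(new URI("/models/it-pos-maxent.bin#it-pos-maxent.bin"));
  }
}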