How to count the occurrences of a specific word in a file using Hadoop MapReduce?

Asked: 2013-08-22 13:37:36

Tags: java hadoop mapreduce

I am trying to count the occurrences of a specific word in a file using Hadoop MapReduce in Java. Both the file and the word should be user input, so I am trying to pass the word to search for as a third command-line argument alongside the input and output paths (In, Out, Word). However, I cannot find a way to pass this word to the map function. I tried the following approach, but it does not work: I created a static String variable in the mapper class and assigned it the value of the third argument (the word to search for), then tried to use that static variable in the map function. Inside the map function, though, the static variable is null, so I cannot access the value of the third argument there.

Is there any way to set the value through the JobConf object? Please help. I have pasted my code below.

public class MyWordCount {

    public static class MyWordCountMap extends Mapper < Text, Text, Text, LongWritable > {
        static String wordToSearch;
        private final static LongWritable ONE = new LongWritable(1L);
        private Text word = new Text();
        public void map(Text key, Text value, Context context)
        throws IOException, InterruptedException {
            System.out.println(wordToSearch); // Here the value is coming as Null
            if (value.toString().compareTo(wordToSearch) == 0) {
                context.write(word, ONE);
            }
        }
    }


    public static class SumReduce extends Reducer < Text, LongWritable, Text, LongWritable > {

        public void reduce(Text key, Iterator < LongWritable > values,
            Context context) throws IOException, InterruptedException {
            long sum = 0L;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            context.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] rawArgs) throws Exception {

        GenericOptionsParser parser = new GenericOptionsParser(rawArgs);
        Configuration conf = parser.getConfiguration();
        String[] args = parser.getRemainingArgs();
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(MyWordCountMap.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        job.setMapperClass(MyWordCountMap.class);
        job.setReducerClass(SumReduce.class);
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        String MyWord = args[2];
        MyWordCountMap.wordToSearch = MyWord;
        job.waitForCompletion(true);
    }

}

1 Answer:

Answer (score: 4):

There is a way to do this using Configuration (see the API here). For example, the following code can be used to set "Tree" as the word to search for:

// Create a new configuration
Configuration conf = new Configuration();
// Set the word to be searched
conf.set("wordToSearch", "Tree");
// Create the job
Job job = new Job(conf);
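
Note that the value has to be set on the Configuration before the Job is constructed, because the Job copies that configuration and it is then serialized and shipped to every task JVM. This is also why the static variable approach fails: the map tasks run in separate JVMs (usually on other machines), so a static field assigned in main() is never visible to them.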

Then, in your mapper/reducer class, you can retrieve wordToSearch (i.e. "Tree" in this example) with the following code:

// Get the configuration from the context
Configuration conf = context.getConfiguration();
// Retrieve the wordToSearch variable
String wordToSearch = conf.get("wordToSearch");

See here for more details.
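
Putting the two snippets together with the code from the question, a minimal sketch might look like the following. The "wordToSearch" key and the Mapper&lt;Text, Text, Text, LongWritable&gt; signature come from the question and the answer above; the class name WordSearchExample and the setup() override are illustrative choices, with setup() used so the configuration is read once per task rather than on every record:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordSearchExample {

        public static class MyWordCountMap
                extends Mapper<Text, Text, Text, LongWritable> {

            private static final LongWritable ONE = new LongWritable(1L);
            private String wordToSearch;

            @Override
            protected void setup(Context context) {
                // Read the word once per task from the job configuration
                wordToSearch = context.getConfiguration().get("wordToSearch");
            }

            @Override
            public void map(Text key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Emit the word with a count of 1 whenever the value matches it
                if (value.toString().equals(wordToSearch)) {
                    context.write(value, ONE);
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Set the word to search for (the third command-line argument)
            // on the configuration *before* constructing the Job
            conf.set("wordToSearch", args[2]);
            Job job = new Job(conf, "wordcount");
            job.setJarByClass(WordSearchExample.class);
            job.setMapperClass(MyWordCountMap.class);
            // ... remaining job setup (reducer, output types, formats, paths)
            // as in the question
            job.waitForCompletion(true);
        }
    }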