How do I get user input in Hadoop 2.7.5?

Asked: 2018-02-17 03:09:05

Tags: java hadoop mapreduce

I'm trying to make it so that when the user enters a word, the program goes through a txt file and counts all instances of that word.
I'm using MapReduce and I'm new to it. I know there's a very simple way to do this, and I've been trying to figure it out for a while.

In this code, I'm trying to make it ask for user input, and then have the program go through the file and find the instances.

I saw some code on Stack Overflow where someone mentioned that setting the configuration with conf.set("userinput", "Data") would help.
There also seemed to be some newer ways of taking user input.

The if statement in my program is an example of what I want to happen when the user's word is entered: it should find only that word.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      public static class TokenizerMapper
           extends Mapper<Object, Text, Text, IntWritable>{

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();


    //So I've seen that this is the correct way of setting it up.
// However I've heard that there are more efficient ways of setting it up as well.
/*
public void setup(Context context) {
     Configuration config=context.getConfiguration();
     String wordstring=config.get("mapper.word");
     word.set(wordstring);
 }
*/


        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());

          while (itr.hasMoreTokens()) {
              if(word=="userinput") {
                  word.set(itr.nextToken());
                  context.write(word, one);
              }
          }
        }
      }

      public static class IntSumReducer
           extends Reducer<Text,IntWritable,Text,IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
                           ) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();


        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

1 answer:

Answer 0: (score: 0)

I'm not sure about the setup method, but you pass the input as an argument on the command line:

conf.set("mapper.word",args[0]);
Job job =... 
// Notice you now need 3 arguments to run this 
FileInputFormat.addInputPath(job, new Path(args[1]));
FileOutputFormat.setOutputPath(job, new Path(args[2]));
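
With this change, the job would be launched with the search word as its first argument, e.g. something along the lines of hadoop jar wordcount.jar WordCount theword /input /output (the jar name and paths here are only illustrative).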

In the mapper or reducer, you can get the string:

 Configuration config=context.getConfiguration();
 String wordstring=config.get("mapper.word");
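
For completeness, a minimal setup() override wrapping those two lines might look like this (a sketch; the field name searchWord is only illustrative):

private String searchWord; // holds the word passed in through the job configuration

@Override
protected void setup(Context context) {
    Configuration config = context.getConfiguration();
    searchWord = config.get("mapper.word");
}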

You need to get the string from the tokenizer before comparing it. You also need to compare a String to a String, not a String to a Text object:

String wordstring = config.get("mapper.word");
while (itr.hasMoreTokens()) {
    String token = itr.nextToken();
    if (wordstring.equals(token)) {
        word.set(token);
        context.write(word, one);
    }
}
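
Putting it all together, the whole mapper might look something like this (a sketch under the same assumptions, reusing the illustrative searchWord field from above):

public static class TokenizerMapper
     extends Mapper<Object, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();
  private String searchWord; // set once per task in setup()

  @Override
  protected void setup(Context context) {
    searchWord = context.getConfiguration().get("mapper.word");
  }

  @Override
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      // Pull the token out first, then compare String to String with equals()
      String token = itr.nextToken();
      if (searchWord.equals(token)) {
        word.set(token);
        context.write(word, one);
      }
    }
  }
}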