Why doesn't job chaining work in MapReduce?

Asked: 2015-07-31 01:45:31

Tags: java hadoop mapreduce

I have created two jobs and I want to chain them so that one job runs only after the previous job has completed. So I wrote the code below. As far as I can tell, job1 finishes correctly, but job2 never seems to run.

public class Simpletask extends Configured implements Tool {

    public static enum FileCounters {
        COUNT;
    }

    public static class TokenizerMapper extends Mapper<Object, Text, IntWritable, Text> {

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                String line = itr.nextToken();
                String part[] = line.split(",");
                int id = Integer.valueOf(part[0]);
                int x1 = Integer.valueOf(part[1]);
                int y1 = Integer.valueOf(part[2]);
                int z1 = Integer.valueOf(part[3]);
                int x2 = Integer.valueOf(part[4]);
                int y2 = Integer.valueOf(part[5]);
                int z2 = Integer.valueOf(part[6]);
                int h_v = Hilbert(x1,y1,z1);
                int parti = h_v/10;
                IntWritable partition = new IntWritable(parti);
                Text neuron = new Text();
                neuron.set(line);
                context.write(partition,neuron);
            }
        }

        public int Hilbert(int x,int y,int z){
            return (int) (Math.random()*20);
        }
    }

    public static class IntSumReducer extends Reducer<IntWritable,Text,IntWritable,Text> {

        private Text result = new Text();
        private MultipleOutputs<IntWritable, Text> mos;

        public void setup(Context context) {
            mos = new MultipleOutputs<IntWritable, Text>(context);
        }

        <K, V> String generateFileName(K k) {
            return "p"+k.toString();
        }

        public void reduce(IntWritable key,Iterable<Text> values, Context context) throws IOException, InterruptedException {
            String accu = "";
            for (Text val : values) {
                String[] entry=val.toString().split(",");
                String MBR = entry[1];
                accu+=entry[0]+",MBR"+MBR+" ";
            }
            result.set(accu);
            context.getCounter(FileCounters.COUNT).increment(1);
            mos.write(key, result, generateFileName(key));
        }
    }

    public static class RTreeMapper extends Mapper<Object, Text, IntWritable, Text>{
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            System.out.println("WOWOWOWOW RUNNING");// NOTHING SHOWS UP!
        }
    }

    public static class RTreeReducer extends Reducer<IntWritable,Text,IntWritable,Text> {

        private MultipleOutputs<IntWritable, Text> mos;
        Text t = new Text();

        public void setup(Context context) {
            mos = new MultipleOutputs<IntWritable, Text>(context);
        }

        public void reduce(IntWritable key,Iterable<Text> values, Context context) throws IOException, InterruptedException {
            t.set("dsfs");
            mos.write(key, t, "WOWOWOWOWOW"+key.get());
            //ALSO, NOTHING IS WRITTEN TO THE FILE!!!!!
        }
    }

    public static class RTreeInputFormat extends TextInputFormat{
        protected boolean isSplitable(FileSystem fs, Path file) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Enter valid number of arguments <Inputdirectory>  <Outputlocation>");
            System.exit(0);
        }
        ToolRunner.run(new Configuration(), new Simpletask(), args);
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Job1");
        job.setJarByClass(Simpletask.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
        boolean complete = job.waitForCompletion(true);

        //================RTree Loop============
        int capacity = 3;
        Configuration rconf = new Configuration();
        Job rtreejob = Job.getInstance(rconf, "rtree");
        if(complete){
            int count =  (int) job.getCounters().findCounter(FileCounters.COUNT).getValue();
            System.out.println("File count: "+count);
            String path = null;
            for(int i=0;i<count;i++){
                path = "/Worker/p"+i+"-m-00000";
                System.out.println("Add input path: "+path);
                FileInputFormat.addInputPath(rtreejob, new Path(path));
            }
            System.out.println("Input path done.");
            FileOutputFormat.setOutputPath(rtreejob, new Path("/RTree"));
            rtreejob.setJarByClass(Simpletask.class);
            rtreejob.setMapperClass(RTreeMapper.class);
            rtreejob.setCombinerClass(RTreeReducer.class);
            rtreejob.setReducerClass(RTreeReducer.class);
            rtreejob.setOutputKeyClass(IntWritable.class);
            rtreejob.setOutputValueClass(Text.class);
            rtreejob.setInputFormatClass(RTreeInputFormat.class);
            complete = rtreejob.waitForCompletion(true);
        }
        return 0;
    }
}

2 Answers:

Answer 0 (score: 1)

For a MapReduce job, the output directory must not already exist. The framework checks the output directory first, and if it exists the job fails. In your case, you specified the same output directory for both jobs. I modified your code: in job2, I changed args[1] to args[2], so the third argument becomes the output directory of the second job. So pass a third argument as well.

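A minimal sketch of the change described above, assuming the program is now invoked with three arguments (input directory, job1 output directory, and a separate, not-yet-existing output directory for job2); the usage message, argument check placement, and return codes are illustrative, not the answerer's exact code:

    @Override
    public int run(String[] args) throws Exception {
        // Expect three arguments now: input, job1 output, job2 output
        // (this check could also stay in main(), as in the question)
        if (args.length != 3) {
            System.err.println("Usage: <inputdirectory> <job1 outputlocation> <job2 outputlocation>");
            return 1;
        }

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Job1");
        // ... same job1 mapper/reducer/key/value setup as in the question ...
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // job1 writes here
        boolean complete = job.waitForCompletion(true);

        if (complete) {
            Job rtreejob = Job.getInstance(new Configuration(), "rtree");
            // ... same job2 mapper/reducer/input-path setup as in the question ...
            // The key change: job2 gets its own output directory, which must not exist yet.
            FileOutputFormat.setOutputPath(rtreejob, new Path(args[2]));
            complete = rtreejob.waitForCompletion(true);
        }
        return complete ? 0 : 1;
    }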

Answer 1 (score: 1)

Some possible causes of the error:

  1. conf is declared twice (doesn't that give a compile error?)
  2. job2's output path already exists, because it was created by job1 (+1 to Amal G Jose's answer)
  3. I think you should also call job.setMapOutputKeyClass(IntWritable.class); and job.setMapOutputValueClass(Text.class); for both jobs (see the sketch after this list).
  4. Do you have another statement that runs job2 after the snippet you posted? I mean, are you actually calling job2.waitForCompletion(true); or something similar?
  5. In general: check the logs for error messages; they should explain clearly what went wrong.
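
A minimal sketch of point 3, using job1 from the question as the example; the intermediate (map output) types have to match what the mapper actually emits, which for TokenizerMapper is an IntWritable key and a Text value:

    Job job = Job.getInstance(conf, "Job1");
    job.setJarByClass(Simpletask.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    // Intermediate (map output) types: must match what TokenizerMapper writes to the context
    job.setMapOutputKeyClass(IntWritable.class);
    job.setMapOutputValueClass(Text.class);
    // Final (reduce output) types
    job.setOutputKeyClass(IntWritable.class);
    job.setOutputValueClass(Text.class);
    // The same pattern applies to the second job (RTreeMapper / RTreeReducer).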