Infinite loop in org.apache.hadoop.mapred.TaskTracker

Date: 2011-12-21 00:28:32

Tags: hadoop

I am running a simple Hadoop application that collects information from a 64MB file. The map phase finishes quickly, within roughly 2 to 5 minutes; the reduce phase then also runs quickly up to 16%, and after that it just stops.

Here is the program output:

11/12/20 17:53:46 INFO input.FileInputFormat: Total input paths to process : 1
11/12/20 17:53:46 INFO mapred.JobClient: Running job: job_201112201749_0001
11/12/20 17:53:47 INFO mapred.JobClient:  map 0% reduce 0%
11/12/20 17:54:06 INFO mapred.JobClient:  map 4% reduce 0%
11/12/20 17:54:09 INFO mapred.JobClient:  map 15% reduce 0%
11/12/20 17:54:12 INFO mapred.JobClient:  map 28% reduce 0%
11/12/20 17:54:15 INFO mapred.JobClient:  map 40% reduce 0%
11/12/20 17:54:18 INFO mapred.JobClient:  map 53% reduce 0%
11/12/20 17:54:21 INFO mapred.JobClient:  map 64% reduce 0%
11/12/20 17:54:24 INFO mapred.JobClient:  map 77% reduce 0%
11/12/20 17:54:27 INFO mapred.JobClient:  map 89% reduce 0%
11/12/20 17:54:30 INFO mapred.JobClient:  map 98% reduce 0%
11/12/20 17:54:33 INFO mapred.JobClient:  map 100% reduce 0%
11/12/20 17:54:54 INFO mapred.JobClient:  map 100% reduce 8%
11/12/20 17:54:57 INFO mapred.JobClient:  map 100% reduce 16%

In the datanode log I see the same messages repeated over and over again. Here is where they start:

2011-12-20 17:54:51,353 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201112201749_0001_r_000000_0 0.083333336% reduce > copy (1 of 4 at 9.01 MB/s) >
2011-12-20 17:54:51,507 INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 127.0.1.1:50060, dest: 127.0.0.1:44367, bytes: 75623263, op: MAPRED_SHUFFLE, cliID: attempt_201112201749_0001_m_000000_0, duration: 2161793492
2011-12-20 17:54:54,389 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201112201749_0001_r_000000_0 0.16666667% reduce > copy (2 of 4 at 14.42 MB/s) >
2011-12-20 17:54:57,433 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201112201749_0001_r_000000_0 0.16666667% reduce > copy (2 of 4 at 14.42 MB/s) >
2011-12-20 17:55:40,359 INFO org.mortbay.log: org.mortbay.io.nio.SelectorManager$SelectSet@53d3cf JVM BUG(s) - injecting delay3 times
2011-12-20 17:55:40,359 INFO org.mortbay.log: org.mortbay.io.nio.SelectorManager$SelectSet@53d3cf JVM BUG(s) - recreating selector 3 times, canceled keys 72 times
2011-12-20 17:57:51,518 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201112201749_0001_r_000000_0 0.16666667% reduce > copy (2 of 4 at 14.42 MB/s) >
2011-12-20 17:57:57,536 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201112201749_0001_r_000000_0 0.16666667% reduce > copy (2 of 4 at 14.42 MB/s) >
2011-12-20 17:58:03,554 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201112201749_0001_r_000000_0 0.16666667% reduce > copy (2 of 4 at 14.42 MB/s) >

...

Here is the code:

package com.bluedolphin;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class MyJob {
    private final static LongWritable one = new LongWritable(1);
    private final static Text word = new Text();

    public static class MyMapClass extends Mapper<LongWritable, Text, Text, LongWritable> {
        public void map(LongWritable key, 
                    Text value, 
                    Context context) throws IOException, InterruptedException {
            String[] citation = value.toString().split(",");
            word.set(citation[0]);

            context.write(word, one);
        }
    }

    public static class MyReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        private LongWritable result = new LongWritable();
        public void reduce(
                Text key, 
                Iterator<LongWritable> values, 
                Context context) throws IOException, InterruptedException {
            int sum = 0;

            while (values.hasNext()) {
                sum += values.next().get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }


    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: myjob <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "patent citation");
        job.setJarByClass(MyJob.class);
        job.setMapperClass(MyMapClass.class);
        // job.setCombinerClass(MyReducer.class);
        // job.setReducerClass(MyReducer.class);
        job.setNumReduceTasks(0);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

}

I don't know how to troubleshoot this any further.

Thanks in advance.

4 answers:

Answer 0 (score: 1)

I figured out the solution: in the reduce method signature, I should have used Iterable, not Iterator. Because the signature didn't match, my method never overrode Reducer.reduce, so my reduce was never actually called. It runs fine now, though I still don't know the internal reason it hung before.
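For reference, a corrected reducer, as a minimal sketch with that fix applied to the inner class from the question (the imports in the MyJob file above still apply):

public static class MyReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    private LongWritable result = new LongWritable();

    // Iterable (not Iterator) matches Reducer.reduce, so @Override compiles
    // and the framework actually calls this method instead of falling back
    // to the default identity reduce.
    @Override
    public void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

To exercise it, the driver would also need job.setReducerClass(MyReducer.class) un-commented and the job.setNumReduceTasks(0) line removed.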

Answer 1 (score: 0)

Take five minutes to check the following:

  1. Does the reducer code ever exit its loop?

  2. What does the mapper emit? Or, in other words, what key/value pairs does the reducer receive? (See the counter sketch after this list.)

  3. You can collect the intermediate (map/reduce) output; see Hadoop MapReduce intermediate output.
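For point 2, a lightweight way to see what the mapper emits is a custom counter. Here is a sketch of the question's map method with one added (the "debug" and "citations_emitted" names are made up for illustration):

public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    String[] citation = value.toString().split(",");
    word.set(citation[0]);
    // Counts every emitted record; the total appears in the job's counter
    // output next to the built-in "Map output records" counter, so any
    // mismatch between the two is easy to spot.
    context.getCounter("debug", "citations_emitted").increment(1);
    context.write(word, one);
}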

Answer 2 (score: 0)

A few things I noticed in your code:

Since you update the Text object "word" in map, and likewise the LongWritable object "result" in reduce, on every call to map and reduce respectively, you probably shouldn't declare them final (although I don't think that's the problem here, since the objects only change state).

Your code looks like a simple word count; the only difference is that you emit just one value per record in the map. You could eliminate the reduce (i.e., run a map-only job, as sketched below) and see whether the map produces what you expect.
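A minimal sketch of that map-only setup in the driver (the question's commented-out lines already point this way):

// With zero reduce tasks there is no shuffle at all: each mapper's output
// is written directly to the output directory as part-m-* files, so the
// map output can be inspected exactly as it was emitted.
job.setMapperClass(MyMapClass.class);
job.setNumReduceTasks(0);
job.setOutputKeyClass(Text.class);        // map output types double as job output types
job.setOutputValueClass(LongWritable.class);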

Answer 3 (score: 0)

I also hit this infinite loop during the reduce phase. After a day of struggling, the solution turned out to be adjusting the /etc/hosts file.

It seems the entry "127.0.1.1 your_machine's_name" was confusing Hadoop. One piece of evidence was the inability to reach slave:50060, the TaskTracker on the slave machine, from the master.

Simply removing that "127.0.1.1 your_machine's_name" entry and appending "your_machine's_name" to the end of the "127.0.0.1 localhost" entry made my problem disappear.
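In /etc/hosts terms, the change looks roughly like this (the machine name is a placeholder):

# Before (problematic): the machine's own hostname resolves to 127.0.1.1
127.0.0.1    localhost
127.0.1.1    your_machine's_name

# After (working): the hostname resolves to 127.0.0.1 alongside localhost
127.0.0.1    localhost your_machine's_name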

I hope this observation helps.