Hadoop: Intermediate merge failed

Posted: 2011-04-07 15:08:05

Tags: hadoop mapreduce cloudera

I am running into a strange issue. When I run my Hadoop job over a large dataset (> 1 TB of compressed text files), several of the reduce tasks fail with stack traces like these:

java.io.IOException: Task: attempt_201104061411_0002_r_000044_0 - The reduce copier failed
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:240)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.hadoop.mapred.Child.main(Child.java:234)
Caused by: java.io.IOException: Intermediate merge failed
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2714)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2639)
Caused by: java.lang.RuntimeException: java.io.EOFException
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128)
    at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373)
    at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:139)
    at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103)
    at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:335)
    at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350)
    at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:156)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2698)
    ... 1 more
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at com.__.hadoop.pixel.segments.IpCookieCountFilter$IpAndIpCookieCount.readFields(IpCookieCountFilter.java:241)
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:125)
    ... 8 more
java.io.IOException: Task: attempt_201104061411_0002_r_000056_0 - The reduce copier failed
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:240)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.hadoop.mapred.Child.main(Child.java:234)
Caused by: java.io.IOException: Intermediate merge failed
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2714)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2639)
Caused by: java.lang.RuntimeException: java.io.EOFException
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128)
    at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373)
    at org.apache.hadoop.util.PriorityQueue.upHeap(PriorityQueue.java:123)
    at org.apache.hadoop.util.PriorityQueue.put(PriorityQueue.java:50)
    at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:447)
    at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:381)
    at org.apache.hadoop.mapred.Merger.merge(Merger.java:107)
    at org.apache.hadoop.mapred.Merger.merge(Merger.java:93)
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2689)
    ... 1 more
Caused by: java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:180)
    at org.apache.hadoop.io.Text.readString(Text.java:402)
    at com.__.hadoop.pixel.segments.IpCookieCountFilter$IpAndIpCookieCount.readFields(IpCookieCountFilter.java:240)
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:122)
    ... 9 more

Not all of my reducers fail, and several usually succeed before I see others fail. As you can see, the stack traces always seem to originate from IpAndIpCookieCount.readFields() and always during the in-memory merge phase, though not always from the same part of readFields().

This job succeeds when run over a smaller dataset (about 1/30 the size). The job produces nearly as much output as input, but each output record is shorter. The job is basically an implementation of a secondary sort.
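
For context, the secondary sort is wired up in the driver roughly as in the sketch below. This is a simplified illustration rather than our exact code: the map output value type (Text), the driver class name, and the IpGroupingComparator name are placeholders, and IpAndIpCookieCount is the key class shown further down.

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Illustrative partitioner (not our exact class): route records by IP only, so
// all counts for one IP reach the same reducer, while compareTo() on the full
// key decides their order within that reducer.
public class IpPartitioner implements Partitioner<IpAndIpCookieCount, Text> {

    @Override
    public void configure(JobConf job) {
        // no per-job configuration needed
    }

    @Override
    public int getPartition(IpAndIpCookieCount key, Text value, int numPartitions) {
        // Mask the sign bit so the partition index is never negative.
        return (key.getIp().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

// Driver wiring (sketch; MyJob and IpGroupingComparator are placeholders):
//   JobConf conf = new JobConf(MyJob.class);
//   conf.setMapOutputKeyClass(IpAndIpCookieCount.class);
//   conf.setPartitionerClass(IpPartitioner.class);
//   conf.setOutputValueGroupingComparator(IpGroupingComparator.class); // groups values by IP only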

We are using the CDH3 distribution of Hadoop.

Here is my custom WritableComparable implementation:

public static class IpAndIpCookieCount implements WritableComparable<IpAndIpCookieCount> {

        private String ip;
        private int ipCookieCount;

        public IpAndIpCookieCount() {
            // empty constructor for hadoop
        }

        public IpAndIpCookieCount(String ip, int ipCookieCount) {
            this.ip = ip;
            this.ipCookieCount = ipCookieCount;
        }

        public String getIp() {
            return ip;
        }

        public int getIpCookieCount() {
            return ipCookieCount;
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            ip = Text.readString(in);
            ipCookieCount = in.readInt();
        }

        @Override
        public void write(DataOutput out) throws IOException {
            Text.writeString(out, ip);
            out.writeInt(ipCookieCount);
        }

        @Override
        public int compareTo(IpAndIpCookieCount other) {
            // Sort ascending by IP, then descending by cookie count.
            int firstComparison = ip.compareTo(other.getIp());
            if (firstComparison == 0) {
                int otherIpCookieCount = other.getIpCookieCount();
                if (ipCookieCount == otherIpCookieCount) {
                    return 0;
                } else {
                    return ipCookieCount < otherIpCookieCount ? 1 : -1;
                }
            } else {
                return firstComparison;
            }
        }

        @Override
        public boolean equals(Object o) {
            if (o instanceof IpAndIpCookieCount) {
                IpAndIpCookieCount other = (IpAndIpCookieCount) o;
                return ip.equals(other.getIp()) && ipCookieCount == other.getIpCookieCount();
            } else {
                return false;
            }
        }

        @Override
        public int hashCode() {
            return ip.hashCode() ^ ipCookieCount;
        }

    }
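
For what it's worth, since I register no raw comparator for this class, my understanding is that during the merge the default WritableComparator deserializes both keys with readFields() before calling compareTo(), roughly like the paraphrased sketch below (my paraphrase, not the actual Hadoop source). That would explain why a truncated key buffer shows up as an EOFException inside readFields:

import java.io.IOException;

import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.WritableComparable;

// Paraphrase of the default comparator's fallback path (not the real Hadoop source):
// both serialized keys are re-read via readFields() and then compared as objects.
public class DeserializingCompareSketch {

    private final DataInputBuffer buffer = new DataInputBuffer();

    @SuppressWarnings({"rawtypes", "unchecked"})
    public int compare(WritableComparable key1, WritableComparable key2,
                       byte[] b1, int s1, int l1,
                       byte[] b2, int s2, int l2) {
        try {
            buffer.reset(b1, s1, l1);   // wrap the first serialized key
            key1.readFields(buffer);    // EOFException here if the bytes are truncated

            buffer.reset(b2, s2, l2);   // wrap the second serialized key
            key2.readFields(buffer);
        } catch (IOException e) {
            throw new RuntimeException(e);  // surfaces as the RuntimeException in my traces
        }
        return key1.compareTo(key2);    // finally delegates to my compareTo()
    }
}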

The readFields method is quite simple, and I cannot see anything wrong with this class. I have also seen several other reports of essentially the same stack trace.

No one seems to have figured out the underlying problem in any of those threads. The last two suggest it might be a memory issue (although the stack traces are not OutOfMemoryExceptions). Like the second-to-last of those posts, I have tried raising the number of reducers (as high as 999), but the job still fails. I have not yet tried giving the reduce tasks more memory, since that would require reconfiguring our cluster.
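
For reference, this is roughly how I have been raising the reducer count, together with the property I believe we would have to touch to give the task JVMs more heap (the heap value is purely illustrative):

import org.apache.hadoop.mapred.JobConf;

public class ReduceTuningSketch {
    public static void main(String[] args) {
        JobConf conf = new JobConf();

        // Already tried: raising the reducer count (as high as 999).
        conf.setNumReduceTasks(999);

        // Not tried yet: a larger heap for the task JVMs (value is illustrative);
        // changing this effectively means re-tuning the cluster.
        conf.set("mapred.child.java.opts", "-Xmx1024m");
    }
}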

Is this a bug in Hadoop, or am I doing something wrong?

EDIT: My data is partitioned by day. If I run the job seven times, once per day, all seven runs complete. If I run one job over all seven days, it fails. The large job over all seven days sees exactly the same keys (in aggregate) as the small ones, but obviously not in the same order, on the same reducers, and so on.

1 Answer:

Answer 0 (score: 1):

I think this is an artifact of Cloudera's backport of MAPREDUCE-947 into CDH3. That patch causes a _SUCCESS file to be created in the output directory of successful jobs.

  

Additionally, a _SUCCESS file will be created in the output folder for successful jobs. The configuration parameter mapreduce.fileoutputcommitter.marksuccessfuljobs can be set to false to disable the creation of the _SUCCESS file, or to true to enable it.

Looking at your error,

Caused by: java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:180)

and comparing it with an error I had encountered earlier,

Exception in thread "main" java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:180)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1465)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
    at org.apache.hadoop.mapred.SequenceFileOutputFormat.getReaders(SequenceFileOutputFormat.java:89)
    at org.apache.nutch.crawl.CrawlDbReader.processStatJob(CrawlDbReader.java:323)
    at org.apache.nutch.crawl.CrawlDbReader.main(CrawlDbReader.java:511)

and with this one from the Mahout mailing list:

Exception in thread "main" java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:180)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1457)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1435)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
    at org.apache.mahout.df.mapreduce.partial.Step0Job.parseOutput(Step0Job.java:145)
    at org.apache.mahout.df.mapreduce.partial.Step0Job.run(Step0Job.java:119)
    at org.apache.mahout.df.mapreduce.partial.PartialBuilder.parseOutput(PartialBuilder.java:115)
    at org.apache.mahout.df.mapreduce.Builder.build(Builder.java:338)
    at org.apache.mahout.df.mapreduce.BuildForest.buildForest(BuildForest.java:195)

in all of these it looks like DataInputStream.readFully is choking on this file: an empty _SUCCESS marker is not a valid SequenceFile, so reading its header hits end-of-file immediately.
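
Independently of the configuration change suggested below, if you control the code that reads the job's output directory with SequenceFile readers (as in the traces above), you could also filter the marker files out explicitly. A minimal sketch, with a placeholder path:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.io.SequenceFile;

public class SkipMarkerFiles {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        Path outputDir = new Path("/path/to/job/output"); // placeholder
        FileSystem fs = outputDir.getFileSystem(conf);

        // Skip _SUCCESS, _logs and other non-data files: they are not
        // SequenceFiles, so reading their header makes readFully hit EOF.
        PathFilter dataFilesOnly = new PathFilter() {
            public boolean accept(Path p) {
                String name = p.getName();
                return !name.startsWith("_") && !name.startsWith(".");
            }
        };

        for (FileStatus status : fs.listStatus(outputDir, dataFilesOnly)) {
            SequenceFile.Reader reader = new SequenceFile.Reader(fs, status.getPath(), conf);
            try {
                // ... read keys and values here ...
            } finally {
                reader.close();
            }
        }
    }
}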

I suggest setting mapreduce.fileoutputcommitter.marksuccessfuljobs to false and re-running your job; it should then work fine.
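
A minimal way to set that from the driver would be something like the sketch below; the same property can also go into mapred-site.xml or be passed with -D on the command line:

import org.apache.hadoop.mapred.JobConf;

public class DisableSuccessMarker {
    public static void main(String[] args) {
        JobConf conf = new JobConf(); // or your existing job's JobConf
        // Stop the framework from writing the _SUCCESS marker into the output directory.
        conf.setBoolean("mapreduce.fileoutputcommitter.marksuccessfuljobs", false);
        // ... the rest of the job setup and submission stays the same ...
    }
}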