Starting jobs in a for loop

Time: 2013-11-12 18:54:04

Tags: java hadoop mapreduce

I have run into a strange problem. I have a MapReduce class that searches files for patterns (the pattern file goes into the DistributedCache). Now I want to reuse this class to run it for 1000 pattern files, so I extended the pattern-matching class and overrode its main and run methods. In the subclass's run I modify the command-line arguments and feed them to the parent's run() method. Everything goes fine until iteration 45-50; suddenly all the task trackers start failing until no progress is made at all. I checked HDFS, but 70% of the space is still free. Does anyone have an idea why launching 50 jobs one after another causes this kind of trouble?

@Override
    public int run(String[] args) throws Exception {

        // Usage: -patterns patternsDIR input/ output/

        List<String> files = getFiles(args[1]);
        String inputDataset = args[2];
        String outputDir = args[3];

        for (int i = 0; i < files.size(); i++) {
            // fresh argument list for this iteration's pattern file
            String[] newArgs = modifyArgs(args);
            super.run(newArgs);
        }

        return 0;
    }
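The loop above never uses `files.get(i)`, so the current pattern file presumably comes from state inside `modifyArgs`. A minimal self-contained sketch of what such a helper could look like — the two-argument signature and the argument layout are my assumptions, not the original code:

```java
import java.util.Arrays;
import java.util.List;

public class ArgsDemo {
    // Hypothetical helper: copy the original argument list and swap in the
    // current pattern file (args[1]), so each job run sees exactly one pattern.
    static String[] modifyArgs(String[] args, String patternFile) {
        String[] newArgs = Arrays.copyOf(args, args.length);
        newArgs[1] = patternFile;
        return newArgs;
    }

    public static void main(String[] argv) {
        String[] args = {"-patterns", "patternsDIR", "input/", "output/"};
        List<String> files = Arrays.asList("patternsDIR/p0.txt", "patternsDIR/p1.txt");
        for (int i = 0; i < files.size(); i++) {
            String[] newArgs = modifyArgs(args, files.get(i));
            // prints e.g. "-patterns patternsDIR/p0.txt input/ output/"
            System.out.println(String.join(" ", newArgs));
        }
    }
}
```

Building a fresh array per iteration (rather than mutating `args` in place) also guarantees that one job's arguments cannot leak into the next run.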

EDIT: I just checked the job logs; this is the first time the error occurs:

2013-11-12 09:03:01,665 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.io.IOException: java.lang.OutOfMemoryError: Java heap space
2013-11-12 09:03:32,971 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201311120807_0053_m_000053_0' has completed task_201311120807_0053_m_000053 successfully.
2013-11-12 09:07:51,717 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.io.IOException: java.lang.OutOfMemoryError: Java heap space
2013-11-12 09:08:05,973 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201311120807_0053_m_000128_0' has completed task_201311120807_0053_m_000128 successfully.
2013-11-12 09:08:16,571 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201311120807_0053_m_000130_0' has completed task_201311120807_0053_m_000130 successfully.
2013-11-12 09:08:16,571 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_1595161181_30] for 30 seconds.  Will retry shortly ...
2013-11-12 09:08:27,175 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201311120807_0053_m_000138_0' has completed task_201311120807_0053_m_000138 successfully.
2013-11-12 09:08:25,241 ERROR org.mortbay.log: EXCEPTION 
java.lang.OutOfMemoryError: Java heap space
2013-11-12 09:08:25,241 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54311, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@7fcb9c0a, false, false, true, 9834) from 10.1.1.13:55028: error: java.io.IOException: java.lang.OutOfMemoryError: Java heap space
java.io.IOException: java.lang.OutOfMemoryError: Java heap space
    at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:62)
    at java.lang.StringBuilder.<init>(StringBuilder.java:97)
    at org.apache.hadoop.util.StringUtils.escapeString(StringUtils.java:435)
    at org.apache.hadoop.mapred.Counters.escape(Counters.java:768)
    at org.apache.hadoop.mapred.Counters.access$000(Counters.java:52)
    at org.apache.hadoop.mapred.Counters$Counter.makeEscapedCompactString(Counters.java:111)
    at org.apache.hadoop.mapred.Counters$Group.makeEscapedCompactString(Counters.java:221)
    at org.apache.hadoop.mapred.Counters.makeEscapedCompactString(Counters.java:648)
    at org.apache.hadoop.mapred.JobHistory$MapAttempt.logFinished(JobHistory.java:2276)
    at org.apache.hadoop.mapred.JobInProgress.completedTask(JobInProgress.java:2636)
    at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:1222)
    at org.apache.hadoop.mapred.JobTracker.updateTaskStatuses(JobTracker.java:4471)
    at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3306)
    at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:3001)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
2013-11-12 09:08:16,571 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54311, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@3269c671, false, false, true, 9841) from 10.1.1.23:42125: error: java.io.IOException: java.lang.OutOfMemoryError: Java heap space
java.io.IOException: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$Packet.<init>(DFSClient.java:2875)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:3806)
    at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:150)
    at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:132)
    at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:121)
    at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:112)
    at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:86)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:49)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:220)
    at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:290)
    at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:294)
    at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:140)
    at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
    at java.io.BufferedWriter.flush(BufferedWriter.java:253)
    at java.io.PrintWriter.flush(PrintWriter.java:293)
    at java.io.PrintWriter.checkError(PrintWriter.java:330)
    at org.apache.hadoop.mapred.JobHistory.log(JobHistory.java:847)
    at org.apache.hadoop.mapred.JobHistory$MapAttempt.logStarted(JobHistory.java:2225)
    at org.apache.hadoop.mapred.JobInProgress.completedTask(JobInProgress.java:2632)
    at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:1222)
    at org.apache.hadoop.mapred.JobTracker.updateTaskStatuses(JobTracker.java:4471)
    at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3306)
    at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:3001)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)

After that we see a bunch of:

2013-11-12 09:13:48,204 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201311120807_0053_m_000033_0: Lost task tracker: tracker_n144-06b.wall1.ilabt.iminds.be:localhost/127.0.0.1:47567

EDIT2: Some thoughts?

  1. The heap-space error is somewhat unexpected, since the mappers hardly need any memory.
  2. I call the base class with super.run(); should I be calling it through ToolRunner instead?
  3. In every iteration a file of roughly 1000 words + scores is added to the DistributedCache; I am not sure whether I should reset the cache somewhere. (Each job inside super.run() runs with job.waitForCompletion() — does that clear the cache afterwards?)
EDIT3:

    @Donald: I have not resized the memory for the Hadoop daemons, so each of them should have a 1 GB heap. The map tasks have 800 MB of heap, of which 450 MB is used for io.sort.
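If the daemons really are running with the default 1 GB heap, the JobTracker is the one holding the accumulated job-history and counter state of all those consecutive jobs, and it is the process throwing the OOM in the logs above. One common mitigation is raising the daemon heap in conf/hadoop-env.sh — a configuration sketch; the 2000 MB value is illustrative, not a recommendation from the original post:

```shell
# conf/hadoop-env.sh
# HADOOP_HEAPSIZE sets the maximum heap (in MB) for the Hadoop daemons.
# Illustrative value -- size it to the JobTracker's actual load.
export HADOOP_HEAPSIZE=2000
```

The daemons pick this up on restart, so the JobTracker and TaskTrackers need to be bounced after the change.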

@Chris: I have not modified anything about the counters; I am using the default ones. There are 1764 map tasks with 16 counters each, and the job itself adds another 20 or so. That could indeed add up over 50 consecutive jobs, but I would not expect it to stay in the heap when you are running multiple consecutive jobs?

    @Extra information:

    1. The map tasks are very fast, taking only 3-5 seconds each, and I have jvm.reuse=-1. Each map task processes a file with 10 records (the files are much smaller than the block size). Because the files are so small, I could consider building input files of 100 records to reduce the mapping overhead.
    2. The first thing I tried was adding an identity reducer (1 reduce task) to reduce the number of files created in HDFS (otherwise there would be one per pattern, hence 1000 per job, which might create overhead for the data nodes).
    3. The number of records per job is rather low: I am looking for specific words in 1764 files, and the matches against one of the 1000 patterns amount to roughly 5000 map output records in total.
    4. @All: Thanks for helping me out!
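The small-file consolidation idea from point 1 of the extra information can be sketched without any Hadoop at all — a plain Java routine that regroups many 10-record inputs into batches of about 100 records (file contents and batch size are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class RecordBatcher {
    // Regroup many small record lists into batches of at most batchSize
    // records, mirroring the idea of merging 10-record input files into
    // ~100-record ones so each map task does more work per JVM.
    static List<List<String>> batch(List<List<String>> smallFiles, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (List<String> file : smallFiles) {
            for (String record : file) {
                current.add(record);
                if (current.size() == batchSize) {
                    batches.add(current);
                    current = new ArrayList<>();
                }
            }
        }
        if (!current.isEmpty()) batches.add(current);
        return batches;
    }

    public static void main(String[] args) {
        // 30 files of 10 records each -> 3 batches of 100 records
        List<List<String>> files = new ArrayList<>();
        for (int f = 0; f < 30; f++) {
            List<String> records = new ArrayList<>();
            for (int r = 0; r < 10; r++) records.add("file" + f + "-rec" + r);
            files.add(records);
        }
        System.out.println(batch(files, 100).size()); // prints 3
    }
}
```

With 1764 files collapsed to ~177 inputs, the job launches an order of magnitude fewer map tasks, which reduces both scheduling overhead and the per-task bookkeeping the JobTracker must hold.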

0 Answers:

There are no answers yet.