What is included in "Total time spent by all map tasks" on Hadoop?

Time: 2017-01-25 08:30:03

Tags: performance hadoop mapreduce

After a Hadoop job succeeds, it prints a summary of various counters; see the example below. My question is: what exactly is included in the Total time spent by all map tasks counter? In particular, when a map task is not node-local, does it include the time spent copying the data to the node?

17/01/25 09:06:12 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=2941
                FILE: Number of bytes written=241959
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=3251
                HDFS: Number of bytes written=2051
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=23168
                Total time spent by all reduces in occupied slots (ms)=4957
                Total time spent by all map tasks (ms)=5792
                Total time spent by all reduce tasks (ms)=4957
                Total vcore-milliseconds taken by all map tasks=5792
                Total vcore-milliseconds taken by all reduce tasks=4957
                Total megabyte-milliseconds taken by all map tasks=23724032
                Total megabyte-milliseconds taken by all reduce tasks=5075968
        Map-Reduce Framework
                Map input records=9
                Map output records=462
                Map output bytes=4986
                Map output materialized bytes=2941
                Input split bytes=109
                Combine input records=462
                Combine output records=221
                Reduce input groups=221
                Reduce shuffle bytes=2941
                Reduce input records=221
                Reduce output records=221
                Spilled Records=442
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=84
                CPU time spent (ms)=2090
                Physical memory (bytes) snapshot=471179264
                Virtual memory (bytes) snapshot=4508950528
                Total committed heap usage (bytes)=326631424
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=3142
        File Output Format Counters
                Bytes Written=2051
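
(A side note inferred from the numbers above rather than from any documentation: the resource counters appear to be straight multiples of the task-time counters. 5792 ms × 4096 MB = 23724032 map megabyte-milliseconds and 4957 ms × 1024 MB = 5075968 reduce megabyte-milliseconds, which would put the map container at 4096 MB and the reduce container at 1024 MB; the 23168 ms "in occupied slots" would then be 5792 ms × 4 minimum-allocation slots of 1024 MB each, and the vcore-milliseconds match the task milliseconds for single-vcore containers.)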

1 answer:

Answer 0: (score: 1)

I think the data copy time is included in the Total time spent by all map tasks metric.

First of all, if you check the server-side code (mostly the part related to resource management), you can see that the MILLIS_MAPS constant (which corresponds to the metric you are referring to) is, in the TaskAttemptImpl class, filled with the duration of the task attempt. The task attempt's launchTime is set when the container is launched and is about to start executing (from my reading of the source, neither component moves any data at this point; only the split metadata is passed around).
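
A minimal, self-contained sketch of that bookkeeping (purely illustrative, not the actual Hadoop source; the class and method names below are made up) would look like this: the whole attempt duration, from container launch to attempt finish, is credited to the counter, so anything the task does in between lands inside that window.

    // Illustrative sketch only -- not the actual Hadoop source.
    // It models how "Total time spent by all map tasks" (MILLIS_MAPS) appears
    // to be accumulated: the full wall-clock duration of each map attempt,
    // measured from container launch to attempt finish.
    public class MapTimeCounterSketch {
        private long millisMaps = 0;

        // Hypothetical hook, called once per finished map attempt.
        void onMapAttemptFinished(long launchTimeMs, long finishTimeMs) {
            // Everything between launch and finish counts: opening the input
            // split (local or remote read), running the Mapper, spilling output.
            millisMaps += finishTimeMs - launchTimeMs;
        }

        public static void main(String[] args) {
            MapTimeCounterSketch c = new MapTimeCounterSketch();
            c.onMapAttemptFinished(1_000L, 6_792L);  // one attempt lasting 5792 ms
            System.out.println("Total time spent by all map tasks (ms)=" + c.millisMaps);
        }
    }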

Now, when the container starts, the TaskAttemptImpl opens an InputFormat, which is responsible for fetching the data the Mapper needs to start processing (at this point there are different file systems the underlying InputStream can be attached to, but look at DistributedFileSystem). You can check the steps performed in the following method:

MapTask.runNewMapper(...)

(I am on Hadoop 2.6)
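
For illustration, here is a rough paraphrase of the input side of that method, written against the public new-API classes rather than copied from the Hadoop 2.6 source (MapInputSketch and countRecords are made-up names): the RecordReader is created and initialized inside the task attempt, and that is where the split's bytes are actually read, over whatever FileSystem backs the split (DistributedFileSystem for HDFS).

    import java.io.IOException;

    import org.apache.hadoop.mapreduce.InputFormat;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    // Illustrative paraphrase of the input side of MapTask.runNewMapper(...),
    // not the literal Hadoop 2.6 code. The point: reading the split happens
    // inside the task attempt, i.e. inside the window measured by MILLIS_MAPS.
    class MapInputSketch {
        static <K, V> long countRecords(InputFormat<K, V> inputFormat,
                                        InputSplit split,
                                        TaskAttemptContext context)
                throws IOException, InterruptedException {
            RecordReader<K, V> reader = inputFormat.createRecordReader(split, context);
            reader.initialize(split, context);   // opens the stream to the split's data
            long records = 0;
            while (reader.nextKeyValue()) {      // in the real code, Mapper.run() drives this loop
                reader.getCurrentKey();
                reader.getCurrentValue();
                records++;
            }
            reader.close();
            return records;
        }
    }

So if the split's block lives on another node, the remote read happens inside this part of the task, and its cost ends up in the map task's measured time.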