Understanding Hadoop's behavior with gz files

Date: 2014-08-28 09:06:33

Tags: hadoop

I have a small JSON file in each of two separate folders in my S3 bucket: one folder holds it as plain JSON, the other holds it gzipped. I ran the same command with the same mapper against each.
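The question's actual mapper.py is not shown; for context, a hypothetical sketch of a minimal Hadoop Streaming mapper that parses one JSON record per line and emits tab-separated key/value pairs might look like this (the "id" field is a placeholder):

```python
#!/usr/bin/env python
# Hypothetical sketch of a streaming mapper (the real mapper.py is not
# shown in the question). Reads one JSON record per line from a stream
# and writes tab-separated key/value pairs, as Hadoop Streaming expects.
import json
import sys

def run_mapper(stream, out=sys.stdout):
    for line in stream:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        # "id" is a placeholder field name; emit key<TAB>value.
        out.write("%s\t%d\n" % (record.get("id", "unknown"), len(line)))

if __name__ == "__main__":
    run_mapper(sys.stdin)
```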

NORMAL JSON

$ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar -Dmapred.reduce.tasks=0 -file ./mapper.py -mapper ./mapper.py -input s3://mybucket/normaltest -output smalltest-output
14/08/28 08:33:53 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
packageJobJar: [./mapper.py, /mnt/var/lib/hadoop/tmp/hadoop-unjar6225144044327095484/] [] /tmp/streamjob6947060448653690043.jar tmpDir=null
14/08/28 08:33:56 INFO mapred.JobClient: Default number of map tasks: null
14/08/28 08:33:56 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 160
14/08/28 08:33:56 INFO mapred.JobClient: Default number of reduce tasks: 0
14/08/28 08:33:56 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache
14/08/28 08:33:56 INFO mapred.JobClient: Setting group to hadoop
14/08/28 08:33:56 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
14/08/28 08:33:56 WARN lzo.LzoCodec: Could not find build properties file with revision hash
14/08/28 08:33:56 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev UNKNOWN]
14/08/28 08:33:56 WARN snappy.LoadSnappy: Snappy native library is available
14/08/28 08:33:56 INFO snappy.LoadSnappy: Snappy native library loaded
14/08/28 08:33:58 INFO mapred.FileInputFormat: Total input paths to process : 1
14/08/28 08:33:58 INFO streaming.StreamJob: getLocalDirs(): [/mnt/var/lib/hadoop/mapred]
14/08/28 08:33:58 INFO streaming.StreamJob: Running job: job_201408260907_0053
14/08/28 08:33:58 INFO streaming.StreamJob: To kill this job, run:
14/08/28 08:33:58 INFO streaming.StreamJob: /home/hadoop/bin/hadoop job  -Dmapred.job.tracker=10.165.13.124:9001 -kill job_201408260907_0053
14/08/28 08:33:58 INFO streaming.StreamJob: Tracking URL: http://ip-10-165-13-124.ec2.internal:9100/jobdetails.jsp?jobid=job_201408260907_0053
14/08/28 08:33:59 INFO streaming.StreamJob:  map 0%  reduce 0%
14/08/28 08:34:23 INFO streaming.StreamJob:  map 1%  reduce 0%
14/08/28 08:34:26 INFO streaming.StreamJob:  map 2%  reduce 0%
14/08/28 08:34:29 INFO streaming.StreamJob:  map 9%  reduce 0%
14/08/28 08:34:32 INFO streaming.StreamJob:  map 45%  reduce 0%
14/08/28 08:34:35 INFO streaming.StreamJob:  map 56%  reduce 0%
14/08/28 08:34:36 INFO streaming.StreamJob:  map 57%  reduce 0%
14/08/28 08:34:38 INFO streaming.StreamJob:  map 84%  reduce 0%
14/08/28 08:34:39 INFO streaming.StreamJob:  map 85%  reduce 0%
14/08/28 08:34:41 INFO streaming.StreamJob:  map 99%  reduce 0%
14/08/28 08:34:44 INFO streaming.StreamJob:  map 100%  reduce 0%
14/08/28 08:34:50 INFO streaming.StreamJob:  map 100%  reduce 100%
14/08/28 08:34:50 INFO streaming.StreamJob: Job complete: job_201408260907_0053
14/08/28 08:34:50 INFO streaming.StreamJob: Output: smalltest-output

In smalltest-output I get several small files, each containing a part of the processed JSON.

GZIPed JSON

$ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar -Dmapred.reduce.tasks=0 -file ./mapper.py -mapper ./mapper.py -input s3://weblablatency/gztest -output smalltest-output
14/08/28 08:39:45 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
packageJobJar: [./mapper.py, /mnt/var/lib/hadoop/tmp/hadoop-unjar2539293594337011579/] [] /tmp/streamjob301144784484156113.jar tmpDir=null
14/08/28 08:39:48 INFO mapred.JobClient: Default number of map tasks: null
14/08/28 08:39:48 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 160
14/08/28 08:39:48 INFO mapred.JobClient: Default number of reduce tasks: 0
14/08/28 08:39:48 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache
14/08/28 08:39:48 INFO mapred.JobClient: Setting group to hadoop
14/08/28 08:39:48 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
14/08/28 08:39:48 WARN lzo.LzoCodec: Could not find build properties file with revision hash
14/08/28 08:39:48 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev UNKNOWN]
14/08/28 08:39:48 WARN snappy.LoadSnappy: Snappy native library is available
14/08/28 08:39:48 INFO snappy.LoadSnappy: Snappy native library loaded
14/08/28 08:39:50 INFO mapred.FileInputFormat: Total input paths to process : 1
14/08/28 08:39:51 INFO streaming.StreamJob: getLocalDirs(): [/mnt/var/lib/hadoop/mapred]
14/08/28 08:39:51 INFO streaming.StreamJob: Running job: job_201408260907_0055
14/08/28 08:39:51 INFO streaming.StreamJob: To kill this job, run:
14/08/28 08:39:51 INFO streaming.StreamJob: /home/hadoop/bin/hadoop job  -Dmapred.job.tracker=10.165.13.124:9001 -kill job_201408260907_0055
14/08/28 08:39:51 INFO streaming.StreamJob: Tracking URL: http://ip-10-165-13-124.ec2.internal:9100/jobdetails.jsp?jobid=job_201408260907_0055
14/08/28 08:39:52 INFO streaming.StreamJob:  map 0%  reduce 0%
14/08/28 08:40:20 INFO streaming.StreamJob:  map 100%  reduce 0%
14/08/28 08:40:26 INFO streaming.StreamJob:  map 100%  reduce 100%
14/08/28 08:40:26 INFO streaming.StreamJob: Job complete: job_201408260907_0055

In smalltest-output I get correctly parsed output, but as a single file.

Why the difference, and what is going on? Is my job not being distributed properly in the gz case?

In my actual use case I need to process ~2000 gz files, totaling about 4 GB uncompressed, every 4 hours. I can't afford a performance hit just because of the compression.

1 Answer:

Answer (score: 1):

Gzip is not splittable. You will find plenty of articles and questions about this, so I won't go into detail.
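A quick way to see the problem: a gzip (DEFLATE) stream has no record boundaries a reader can seek to, so a mapper handed the middle of a .gz split cannot decompress it; decompression must start at byte 0. Hadoop therefore gives the whole file to a single mapper. A small Python illustration:

```python
# Demonstrates why gzip is not splittable: decompression works from the
# start of the stream, but starting at an arbitrary offset fails because
# the gzip header and earlier DEFLATE state are missing.
import gzip
import zlib

data = b"line\n" * 10000
compressed = gzip.compress(data)

# Reading from the beginning works.
assert gzip.decompress(compressed) == data

# Starting a decompressor in the middle of the stream fails.
try:
    zlib.decompress(compressed[len(compressed) // 2:], 16 + zlib.MAX_WBITS)
    mid_start_ok = True
except zlib.error:
    mid_start_ok = False

assert not mid_start_ok
```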

Your options are:

  • Don't use gzip (either don't compress at all, or use a splittable compression format instead)
  • Use a hack that makes gzip splittable, such as https://github.com/nielsbasjes/splittablegzip. Every mapper still has to read the file from the beginning, so there are trade-offs; read the project's documentation for details.
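As a rough sketch of the second option, the streaming job could be pointed at the splittable-gzip codec roughly like this. The jar path is a placeholder, and the codec class name and split-size property should be checked against the project's README for your Hadoop version; this is an assumption, not a verified invocation:

```shell
# Hypothetical invocation: register the splittable gzip codec and cap
# split size so multiple mappers each handle part of each .gz file.
# Jar path, codec class, and property names are assumptions to verify.
hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar \
  -libjars /path/to/splittablegzip.jar \
  -D io.compression.codecs=nl.basjes.hadoop.io.compress.SplittableGzipCodec \
  -D mapred.max.split.size=67108864 \
  -D mapred.reduce.tasks=0 \
  -file ./mapper.py -mapper ./mapper.py \
  -input s3://weblablatency/gztest -output smalltest-output
```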

It depends on what you are doing, but for most processing 4 GB of data is nothing. I would make sure the use case really needs an elephant like Hadoop: it scales, but it is complex, painful to work with, and usually slow on small datasets.
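To the answer's last point: at ~2000 small .gz files totaling ~4 GB every 4 hours, a single machine may well be enough. A hedged sketch, with the glob pattern and per-record work as placeholders, that streams the files through a process pool:

```python
# Sketch: processing many small .gz files without Hadoop, using a
# process pool. The path pattern and per-record work are placeholders.
import glob
import gzip
import json
from multiprocessing import Pool

def process_file(path):
    """Parse one gzipped file of line-delimited JSON; return record count."""
    count = 0
    with gzip.open(path, "rt") as fh:
        for line in fh:
            line = line.strip()
            if line:
                json.loads(line)  # replace with the real per-record work
                count += 1
    return count

def process_all(pattern, workers=8):
    """Fan the matching files out across a pool of worker processes."""
    paths = glob.glob(pattern)
    with Pool(workers) as pool:
        return sum(pool.map(process_file, paths))
```

Called as, e.g., `process_all("./gztest/*.gz")` after syncing the files down from S3.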