"hadoop dfs -cat" returns no output

Date: 2016-06-29 11:39:07

Tags: java hadoop hdfs

I am implementing the code from http://fiware-cosmos.readthedocs.io/en/latest/user_and_programmer_manual/batch/using_hadoop_and_ecosystem/#top, which involves passing a "regex" argument on the command line of the MapReduce program. The job runs fine and prints:

16/06/28 17:19:47 INFO input.FileInputFormat: Total input paths to process : 1
16/06/28 17:19:47 INFO mapreduce.JobSubmitter: number of splits:1
16/06/28 17:19:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1448020964278_0633
16/06/28 17:19:49 INFO impl.YarnClientImpl: Submitted application application_1448020964278_0633
16/06/28 17:19:49 INFO mapreduce.Job: The url to track the job: http://co2-hdpmaster.irit.fr:8088/proxy/application_1448020964278_0633/
16/06/28 17:19:49 INFO mapreduce.Job: Running job: job_1448020964278_0633
16/06/28 17:19:59 INFO mapreduce.Job: Job job_1448020964278_0633 running in uber mode : false
16/06/28 17:19:59 INFO mapreduce.Job:  map 0% reduce 0%
16/06/28 17:20:10 INFO mapreduce.Job:  map 100% reduce 0%
16/06/28 17:20:19 INFO mapreduce.Job:  map 100% reduce 100%
16/06/28 17:20:20 INFO mapreduce.Job: Job job_1448020964278_0633 completed successfully
16/06/28 17:20:20 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=6
                FILE: Number of bytes written=230845
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=916
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=7478
                Total time spent by all reduces in occupied slots (ms)=7151
                Total time spent by all map tasks (ms)=7478
                Total time spent by all reduce tasks (ms)=7151
                Total vcore-seconds taken by all map tasks=7478
                Total vcore-seconds taken by all reduce tasks=7151
                Total megabyte-seconds taken by all map tasks=22972416
                Total megabyte-seconds taken by all reduce tasks=21967872
        Map-Reduce Framework
                Map input records=17
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=125
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=6
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=114
                CPU time spent (ms)=2120
                Physical memory (bytes) snapshot=1398767616
                Virtual memory (bytes) snapshot=6716833792
                Total committed heap usage (bytes)=2156396544
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=791
        File Output Format Counters
                Bytes Written=0

But when I try to display the contents of the output file with the command hadoop dfs -cat output/part-r-00000, nothing is returned. Can someone explain this problem?

1 answer:

Answer 0 (score: 0)

Your job did not produce any output:

Map output records=0
Reduce output records=0
HDFS: Number of bytes written=0

So the output file is most likely empty: the map read 17 input records but emitted 0 output records, which suggests the regex you passed did not match any of the input lines. You should check the file's size on HDFS to confirm this.
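As a quick check, you can list the output directory with human-readable sizes (a minimal sketch; `output` is the path used in the question, and `hdfs dfs` is the current form of the deprecated `hadoop dfs` command):

```shell
# List the job's output files with their sizes;
# an empty part file will show 0 bytes
hdfs dfs -ls -h output

# Or report the size of the specific part file
hdfs dfs -du -h output/part-r-00000
```

If the part file shows 0 bytes, the problem is not with `-cat` but with the job itself, and the regex argument is the first thing to re-examine.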