hadoop: How to display the execution time of the put command? Or how to display the duration of loading a file into HDFS?

Time: 2016-04-04 13:58:49

Tags: hadoop hdfs

How can I configure the put command in hadoop so that it displays the execution time?

Because this command:

    hadoop fs -put <localsrc> <dst>

just returns with no output.

The command works, but it does not display any execution time. Do you know whether the command can be made to show its execution time? Or is there another way to get that information?

1 Answer:

Answer 0: (score: 1)

As far as I know, the hadoop fs commands do not provide any debugging information such as execution time, but you can get the execution time in two ways:

  1. The Bash way: start=$(date +'%s') && hadoop fs -put visit-sequences.csv /user/hadoop/temp && echo "It took $(($(date +'%s') - $start)) seconds"
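
     Equivalently, the shell's built-in time can wrap the same command; a minimal sketch:

        # Bash built-in `time`; the "real" line is the elapsed wall-clock time of the upload.
        time hadoop fs -put visit-sequences.csv /user/hadoop/temp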

  2. From the log files: you can check the namenode log file, which lists all the details related to the executed command, such as the time taken, file size, replication, and so on. For example, I ran the command hadoop fs -put visit-sequences.csv /user/hadoop/temp and found the following entries specific to the put operation in the log file (a grep sketch for pulling out such lines follows the excerpt):

    2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
    2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
    2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 38
    2016-04-04 20:30:00,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 75 
    2016-04-04 20:30:00,118 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 95 
    2016-04-04 20:30:00,120 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /data/misc/hadoop/store/hdfs/namenode/current/edits_inprogress_0000000000000000038 -> /data/misc/hadoop/store/hdfs/namenode/current/edits_0000000000000000038-0000000000000000039
    2016-04-04 20:30:00,120 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 40
    2016-04-04 20:30:01,781 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.06s at 15.63 KB/s
    2016-04-04 20:30:01,781 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000039 size 1177 bytes.
    2016-04-04 20:30:01,830 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
    2016-04-04 20:30:56,252 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} for /user/hadoop/temp/visit-sequences.csv._COPYING_
    2016-04-04 20:30:56,532 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 <  minimum = 1) in file /user/hadoop/temp/visit-sequences.csv._COPYING_
    2016-04-04 20:30:56,533 INFO org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream: Nothing to flush
    2016-04-04 20:30:56,548 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_1073741829_1005{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-1b928386-65b9-4438-a781-b154cdb9a579:NORMAL:127.0.0.1:50010|RBW]]} size 742875
    2016-04-04 20:30:56,957 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/hadoop/temp/visit-sequences.csv._COPYING_ is closed by DFSClient_NONMAPREDUCE_1242172231_1
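
     To pull just the put-related lines out of the namenode log, a grep on the uploaded file name is enough. This is a minimal sketch; the log path is an assumption and varies per installation (commonly under $HADOOP_HOME/logs or /var/log/hadoop):

        # Assumed log location -- adjust to wherever your namenode writes its logs.
        grep 'visit-sequences.csv' $HADOOP_HOME/logs/hadoop-*-namenode-*.log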