Why can't I see the NameNode with the jps command, even though I can access localhost:50070?

Posted: 2018-06-08 13:40:23

Tags: hadoop namenode

I installed hadoop-2.7.6 on Ubuntu and am trying to run it in pseudo-distributed mode.

sudo ./bin/hdfs namenode -format
sudo ./sbin/start-dfs.sh
jps

jps shows only:

16308 Jps

I can access localhost:50070, but I cannot access localhost:8088.
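
Port 50070 is the NameNode web UI, while 8088 is the YARN ResourceManager web UI, which start-dfs.sh alone does not launch (that is start-yarn.sh's job). A quick way to see which of the two ports actually has a listener (just a sketch; ss ships with iproute2 on Ubuntu, and the port numbers are the Hadoop 2.x defaults):

ss -tln | grep -E '50070|8088'                  # TCP listeners on the two web-UI ports (add sudo -p to also see the owning process)
curl -sI http://localhost:50070 | head -n 1     # should print an HTTP status line for the NameNode UI
curl -sI http://localhost:8088 | head -n 1      # empty output here means nothing is listening on 8088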

The NameNode log is shown below; I don't see any exceptions in it.

2018-06-08 17:52:17,649 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-06-08 17:52:17,655 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2018-06-08 17:52:17,962 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-06-08 17:52:18,052 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-06-08 17:52:18,052 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2018-06-08 17:52:18,061 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
2018-06-08 17:52:18,062 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
2018-06-08 17:52:18,284 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2018-06-08 17:52:18,340 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-06-08 17:52:18,348 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2018-06-08 17:52:18,362 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2018-06-08 17:52:18,367 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-06-08 17:52:18,369 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2018-06-08 17:52:18,369 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-06-08 17:52:18,369 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-06-08 17:52:18,486 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2018-06-08 17:52:18,487 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2018-06-08 17:52:18,503 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2018-06-08 17:52:18,503 INFO org.mortbay.log: jetty-6.1.26
2018-06-08 17:52:18,676 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2018-06-08 17:52:18,716 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2018-06-08 17:52:18,716 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2018-06-08 17:52:18,763 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2018-06-08 17:52:18,764 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2018-06-08 17:52:18,767 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2018-06-08 17:52:18,798 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2018-06-08 17:52:18,798 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2018-06-08 17:52:18,799 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2018-06-08 17:52:18,800 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2018 六月 08 17:52:18
2018-06-08 17:52:18,801 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2018-06-08 17:52:18,801 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2018-06-08 17:52:18,803 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2018-06-08 17:52:18,803 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2018-06-08 17:52:18,809 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2018-06-08 17:52:18,809 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
2018-06-08 17:52:18,809 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2018-06-08 17:52:18,809 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2018-06-08 17:52:18,809 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2018-06-08 17:52:18,809 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2018-06-08 17:52:18,809 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2018-06-08 17:52:18,809 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2018-06-08 17:52:18,815 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
2018-06-08 17:52:18,815 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2018-06-08 17:52:18,815 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2018-06-08 17:52:18,815 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2018-06-08 17:52:18,817 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2018-06-08 17:52:19,090 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2018-06-08 17:52:19,090 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2018-06-08 17:52:19,090 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2018-06-08 17:52:19,090 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2018-06-08 17:52:19,091 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2018-06-08 17:52:19,091 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2018-06-08 17:52:19,091 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2018-06-08 17:52:19,091 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2018-06-08 17:52:19,097 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2018-06-08 17:52:19,097 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2018-06-08 17:52:19,097 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2018-06-08 17:52:19,098 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2018-06-08 17:52:19,099 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2018-06-08 17:52:19,099 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2018-06-08 17:52:19,100 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2018-06-08 17:52:19,103 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2018-06-08 17:52:19,103 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2018-06-08 17:52:19,103 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2018-06-08 17:52:19,106 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2018-06-08 17:52:19,106 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2018-06-08 17:52:19,108 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2018-06-08 17:52:19,108 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2018-06-08 17:52:19,108 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2018-06-08 17:52:19,108 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2018-06-08 17:52:19,164 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/in_use.lock acquired by nodename 14843@chenxx--K29
2018-06-08 17:52:19,242 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current
2018-06-08 17:52:19,243 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2018-06-08 17:52:19,243 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2018-06-08 17:52:19,274 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2018-06-08 17:52:19,296 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2018-06-08 17:52:19,296 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/fsimage_0000000000000000000
2018-06-08 17:52:19,300 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Initializing quota with 4 thread(s)
2018-06-08 17:52:19,307 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Quota initialization completed in 7 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2018-06-08 17:52:19,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2018-06-08 17:52:19,308 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2018-06-08 17:52:19,510 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2018-06-08 17:52:19,510 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 399 msecs
2018-06-08 17:52:19,702 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to localhost:9000
2018-06-08 17:52:19,707 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000
2018-06-08 17:52:19,717 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2018-06-08 17:52:19,782 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2018-06-08 17:52:19,789 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2018-06-08 17:52:19,790 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2018-06-08 17:52:19,790 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2018-06-08 17:52:19,790 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs
2018-06-08 17:52:19,790 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2018-06-08 17:52:19,790 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2018-06-08 17:52:19,799 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2018-06-08 17:52:19,804 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks            = 0
2018-06-08 17:52:19,804 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks          = 0
2018-06-08 17:52:19,804 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2018-06-08 17:52:19,805 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of  over-replicated blocks = 0
2018-06-08 17:52:19,805 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written    = 0
2018-06-08 17:52:19,805 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 14 msec
2018-06-08 17:52:19,827 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2018-06-08 17:52:19,827 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-06-08 17:52:19,830 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
2018-06-08 17:52:19,830 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2018-06-08 17:52:19,834 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2018-06-08 17:52:29,040 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:50010, datanodeUuid=092a2f4d-c0ab-42e5-9cbc-f6647e6154c0, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-3498eed9-a115-4d27-8560-11244a5ce8ea;nsid=1817268992;c=0) storage 092a2f4d-c0ab-42e5-9cbc-f6647e6154c0
2018-06-08 17:52:29,040 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2018-06-08 17:52:29,041 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2018-06-08 17:52:29,111 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2018-06-08 17:52:29,111 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-4694dbe8-3afa-4aa5-b9c9-764bb045894a for DN 127.0.0.1:50010
2018-06-08 17:52:29,166 INFO BlockStateChange: BLOCK* processReport 0xed805a1a8c2: from storage DS-4694dbe8-3afa-4aa5-b9c9-764bb045894a node DatanodeRegistration(127.0.0.1:50010, datanodeUuid=092a2f4d-c0ab-42e5-9cbc-f6647e6154c0, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-3498eed9-a115-4d27-8560-11244a5ce8ea;nsid=1817268992;c=0), blocks: 0, hasStaleStorage: false, processing time: 2 msecs
2018-06-08 17:53:40,542 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2018-06-08 17:53:40,542 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2018-06-08 17:53:40,542 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2018-06-08 17:53:40,543 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 59 
2018-06-08 17:53:40,562 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 77 
2018-06-08 17:53:40,564 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/edits_inprogress_0000000000000000001 -> /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/edits_0000000000000000001-0000000000000000002
2018-06-08 17:53:40,569 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2018-06-08 17:53:41,400 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.04s at 0.00 KB/s
2018-06-08 17:53:41,400 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000002 size 321 bytes.
2018-06-08 17:53:41,437 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
2018-06-08 18:22:54,144 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 11 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 70 
2018-06-08 18:24:01,425 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 12 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 87 
2018-06-08 18:24:01,537 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-4694dbe8-3afa-4aa5-b9c9-764bb045894a:NORMAL:127.0.0.1:50010|RBW]]} for /test/words._COPYING_
2018-06-08 18:24:01,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-4694dbe8-3afa-4aa5-b9c9-764bb045894a:NORMAL:127.0.0.1:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 <  minimum = 1) in file /test/words._COPYING_
2018-06-08 18:24:01,848 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-4694dbe8-3afa-4aa5-b9c9-764bb045894a:NORMAL:127.0.0.1:50010|RBW]]} size 31
2018-06-08 18:24:02,254 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /test/words._COPYING_ is closed by DFSClient_NONMAPREDUCE_457462521_1
2018-06-08 18:25:41,194 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 11 Total time for transactions(ms): 15 Number of transactions batched in Syncs: 1 Number of syncs: 7 SyncTimes(ms): 136 
2018-06-08 18:25:41,716 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-4694dbe8-3afa-4aa5-b9c9-764bb045894a:NORMAL:127.0.0.1:50010|RBW]]} for /test/out/_temporary/0/_temporary/attempt_local2062539_0001_r_000000_0/part-r-00000
2018-06-08 18:25:41,816 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-4694dbe8-3afa-4aa5-b9c9-764bb045894a:NORMAL:127.0.0.1:50010|RBW]]} size 0
2018-06-08 18:25:41,823 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /test/out/_temporary/0/_temporary/attempt_local2062539_0001_r_000000_0/part-r-00000 is closed by DFSClient_NONMAPREDUCE_-667008448_1
2018-06-08 18:25:41,912 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /test/out/_SUCCESS is closed by DFSClient_NONMAPREDUCE_-667008448_1
2018-06-08 18:53:41,841 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2018-06-08 18:53:41,841 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2018-06-08 18:53:41,841 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3
2018-06-08 18:53:41,842 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 24 Total time for transactions(ms): 16 Number of transactions batched in Syncs: 1 Number of syncs: 16 SyncTimes(ms): 199 
2018-06-08 18:53:41,857 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 24 Total time for transactions(ms): 16 Number of transactions batched in Syncs: 1 Number of syncs: 17 SyncTimes(ms): 215 
2018-06-08 18:53:41,858 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/edits_inprogress_0000000000000000003 -> /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/edits_0000000000000000003-0000000000000000026
2018-06-08 18:53:41,859 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 27
2018-06-08 18:53:42,157 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.03s at 0.00 KB/s
2018-06-08 18:53:42,158 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000026 size 655 bytes.
2018-06-08 18:53:42,203 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 2
2018-06-08 18:53:42,203 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2018-06-08 19:53:42,547 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2018-06-08 19:53:42,548 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2018-06-08 19:53:42,548 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 27
2018-06-08 19:53:42,548 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 77 
2018-06-08 19:53:42,555 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 83 
2018-06-08 19:53:42,556 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/edits_inprogress_0000000000000000027 -> /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/edits_0000000000000000027-0000000000000000028
2018-06-08 19:53:42,556 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 29
2018-06-08 19:53:42,811 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.03s at 0.00 KB/s
2018-06-08 19:53:42,812 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000028 size 655 bytes.
2018-06-08 19:53:42,857 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 26
2018-06-08 19:53:42,857 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/fsimage_0000000000000000002, cpktTxId=0000000000000000002)
2018-06-08 20:53:43,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2018-06-08 20:53:43,180 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2018-06-08 20:53:43,180 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 29
2018-06-08 20:53:43,180 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 64 
2018-06-08 20:53:43,197 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 80 
2018-06-08 20:53:43,198 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/edits_inprogress_0000000000000000029 -> /usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/edits_0000000000000000029-0000000000000000030
2018-06-08 20:53:43,198 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 31
2018-06-08 20:53:43,439 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.04s at 0.00 KB/s
2018-06-08 20:53:43,440 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000030 size 655 bytes.
2018-06-08 20:53:43,474 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 28
2018-06-08 20:53:43,474 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/usr/hadoop/hadoop-2.7.6/tmp/dfs/name/current/fsimage_0000000000000000026, cpktTxId=0000000000000000026)

I ran the following command, and the NameNode appears to have already been started:

sudo ./sbin/start-dfs.sh 
Starting namenodes on [localhost]
localhost: namenode running as process 14843. Stop it first.
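
Since start-dfs.sh was run with sudo, the HDFS daemons belong to root (the log above also shows fsOwner = root), and jps run as a regular user only lists that user's own Java processes. A couple of ways to double-check the PID 14843 reported above (just a sketch; jps may need its full $JAVA_HOME/bin path when invoked through sudo):

ps -fp 14843        # shows the owner and full command line of that process, if it exists
sudo jps            # lists Java processes owned by root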

Did the NameNode start successfully? I would appreciate any suggestions.

0 Answers:

There are no answers yet.