ExitCodeException while starting the NameNode

Time: 2015-12-07 05:02:33

Tags: hadoop

I have configured Hadoop 2.7.1 on a Solaris 10 server. When I start the Hadoop daemons with start-dfs.sh, the DataNode and SecondaryNameNode come up, but the NameNode does not start. I checked the NameNode log, and it shows the following error:

2015-12-08 16:24:47,703 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = psdrac2/192.168.106.109
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.8.0_66
************************************************************/
2015-12-08 16:24:47,798 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-12-08 16:24:47,832 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode
2015-12-08 16:24:50,310 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-12-08 16:24:50,977 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-12-08 16:24:50,978 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-12-08 16:24:50,998 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://psdrac2:9000
2015-12-08 16:24:51,005 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use psdrac2:9000 to access this namenode/service.
2015-12-08 16:24:51,510 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-12-08 16:24:52,680 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-12-08 16:24:53,177 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-12-08 16:24:53,239 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2015-12-08 16:24:53,289 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-12-08 16:24:53,336 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-08 16:24:53,354 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-12-08 16:24:53,355 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-12-08 16:24:53,356 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-12-08 16:24:53,544 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-12-08 16:24:53,556 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-12-08 16:24:53,673 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-12-08 16:24:53,674 INFO org.mortbay.log: jetty-6.1.26
2015-12-08 16:24:56,059 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-12-08 16:24:56,310 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,313 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,315 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-08 16:24:56,315 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-08 16:24:56,362 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,364 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,701 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-12-08 16:24:56,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-12-08 16:24:57,154 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-12-08 16:24:57,155 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-12-08 16:24:57,171 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-12-08 16:24:57,191 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Dec 08 16:24:57
2015-12-08 16:24:57,215 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-12-08 16:24:57,216 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:57,232 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-12-08 16:24:57,233 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2015-12-08 16:24:57,368 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-12-08 16:24:57,370 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2015-12-08 16:24:57,370 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-12-08 16:24:57,422 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2015-12-08 16:24:57,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-12-08 16:24:57,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-12-08 16:24:57,424 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-12-08 16:24:57,435 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-12-08 16:24:58,543 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-12-08 16:24:58,543 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,544 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2015-12-08 16:24:58,544 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2015-12-08 16:24:58,554 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-12-08 16:24:58,554 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-12-08 16:24:58,555 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-12-08 16:24:58,556 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-12-08 16:24:58,625 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-12-08 16:24:58,625 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,626 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2015-12-08 16:24:58,626 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2015-12-08 16:24:58,640 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-12-08 16:24:58,641 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-12-08 16:24:58,641 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-12-08 16:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-12-08 16:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-12-08 16:24:58,666 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-12-08 16:24:58,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-12-08 16:24:58,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-12-08 16:24:58,695 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-12-08 16:24:58,696 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,697 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2015-12-08 16:24:58,697 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2015-12-08 16:24:58,790 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/in_use.lock acquired by nodename 15020@psdrac2
2015-12-08 16:24:59,268 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current
2015-12-08 16:24:59,272 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2015-12-08 16:24:59,600 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-12-08 16:24:59,878 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-12-08 16:24:59,879 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/fsimage_0000000000000000000
2015-12-08 16:24:59,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-12-08 16:24:59,958 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2015-12-08 16:25:01,370 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-12-08 16:25:01,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 2645 msecs
2015-12-08 16:25:03,759 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to psdrac2:9000
2015-12-08 16:25:03,809 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-12-08 16:25:03,909 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2015-12-08 16:25:04,108 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-12-08 16:25:04,116 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:25:04,169 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-12-08 16:25:04,170 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2015-12-08 16:25:04,173 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 25 
2015-12-08 16:25:04,184 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/edits_inprogress_0000000000000000001 -> /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/edits_0000000000000000001-0000000000000000002
2015-12-08 16:25:04,202 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2015-12-08 16:25:04,294 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-12-08 16:25:04,294 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2015-12-08 16:25:04,315 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-12-08 16:25:04,329 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-12-08 16:25:04,333 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-12-08 16:25:04,335 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-12-08 16:25:04,380 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.fs.DF.getFilesystem(DF.java:76)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1058)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:678)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:664)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2015-12-05 16:46:08,229 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-12-08 16:25:04,418 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at psdrac2/192.168.106.109

Why am I getting this error?

3 Answers:

Answer 0 (score: 0):

Can you try the 'df -k -P' command on your Solaris server? If it does not work, you need to make sure the default 'df' command on your Solaris server is linked to '/usr/xpg4/bin/df', which supports the '-P' option.
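
For reference, a quick way to verify this is sketched below (assuming the standard Solaris 10 layout, where the POSIX-conforming utilities live under /usr/xpg4/bin; the directory used here is the dfs.namenode.name.dir path from the log above):

# Does the default df accept -P (POSIX portable output format)?
df -k -P /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode

# If that fails, see which df is being resolved first on the PATH
which df

# The XPG4 df does support -P; confirm it directly
/usr/xpg4/bin/df -k -P /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode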

Answer 1 (score: 0):

Remove the mapred.job.tracker property from mapred-site.xml and then try starting the services again,

and add the following parameters to yarn-site.xml:

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8025</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>psdrac2:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>psdrac2:8040</value>
</property>
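
On the mapred-site.xml side, mapred.job.tracker is a Hadoop 1.x JobTracker property with no effect under YARN, which is why the answer suggests removing it. After removing it, a minimal Hadoop 2.x mapred-site.xml typically only selects the YARN framework — shown here as a sketch, not part of the original answer:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>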

Answer 2 (score: 0):

@Rohan's answer is correct. I ran into the same problem on Solaris 10. Here is a deeper look at the root cause.

DF.java (line 144) tries to run this command:

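// the command Hadoop builds to check free space on each storage directory;
// note the hard-coded -P flag, which the stock Solaris df rejects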
return new String[] {"bash","-c","exec 'df' '-k' '-P' '" + dirPath + "' 2>/dev/null"};

The default 'df' binary on Solaris does not accept the '-P' option: the command exits non-zero, Shell.runCommand turns that into the ExitCodeException seen in the stack trace, and the NameNode aborts. You therefore have to make sure '/usr/xpg4/bin/df' is the df that actually gets invoked.
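
Since the command is run through bash, 'df' is resolved via the PATH of the user that launches the NameNode. One way to apply the fix (a sketch, assuming hadoop-env.sh is sourced by the daemon start scripts, as it is in a standard Hadoop 2.7.1 layout) is to put the XPG4 binaries first on the PATH:

# In $HADOOP_HOME/etc/hadoop/hadoop-env.sh (or the hadoop user's profile):
# make 'df' resolve to the XPG4/POSIX version, which understands -P
export PATH=/usr/xpg4/bin:$PATH

After adding this, restart the daemons with stop-dfs.sh and start-dfs.sh and check the NameNode log again.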