Hadoop namenode won't run on OSX (ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.SocketException: Permission denied)

Date: 2013-11-26 22:53:43

Tags: hadoop

I'm running Hadoop 1.2.1 on OSX (single-node cluster mode), and everything seems to work except the namenode: when I run start-all.sh, the namenode fails to start. You can see this when running stop-all.sh:

$ bin/stop-all.sh 
stopping jobtracker
localhost: stopping tasktracker
no namenode to stop
localhost: stopping datanode
localhost: stopping secondarynamenode

I've been troubleshooting this for a while and can't pin down the problem; I don't know what is causing this permission error. I've reformatted the namenode and run chmod -R 777 on the /hadoopstorage directory (which, as you can see in the conf files, is where the namenode files live), so Hadoop should be able to modify it.
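As a sanity check, the effect of that chmod can be verified with a short script. A temp directory stands in for /hadoopstorage here so the snippet is self-contained:

```shell
# Verify that a directory after chmod -R 777 really is fully accessible
# (readable, writable, and traversable) to the current user.
dir=$(mktemp -d)
chmod -R 777 "$dir"
if [ -r "$dir" ] && [ -w "$dir" ] && [ -x "$dir" ]; then
  echo "storage dir is fully accessible"
fi
rm -rf "$dir"
```

If this prints the success line for /hadoopstorage as well, filesystem permissions on the storage directory are unlikely to be the culprit.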

Here is the namenode log file:

2013-11-26 16:51:25,951 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = <my machine>
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.6.0_65
************************************************************/
2013-11-26 16:51:26,187 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-11-26 16:51:26,203 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-11-26 16:51:26,204 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-11-26 16:51:26,204 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-11-26 16:51:26,569 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-11-26 16:51:26,585 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-11-26 16:51:26,623 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-11-26 16:51:26,625 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-11-26 16:51:26,711 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-11-26 16:51:26,711 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
2013-11-26 16:51:26,712 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1039859712
2013-11-26 16:51:26,712 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
2013-11-26 16:51:26,712 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-11-26 16:51:26,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=williammurphy
2013-11-26 16:51:26,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-11-26 16:51:26,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-11-26 16:51:26,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-11-26 16:51:26,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-11-26 16:51:26,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-11-26 16:51:27,070 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-11-26 16:51:27,070 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2013-11-26 16:51:27,092 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /hadoopstorage/name/current/fsimage
2013-11-26 16:51:27,092 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-11-26 16:51:27,099 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-11-26 16:51:27,099 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /hadoopstorage/name/current/fsimage of size 119 bytes loaded in 0 seconds.
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /hadoopstorage/name/current/edits
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /hadoopstorage/name/current/edits, reached end of edit log Number of transactions found: 0.  Bytes read: 4
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start checking end of edit log (/hadoopstorage/name/current/edits) ...
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Checked the bytes after the end of edit log (/hadoopstorage/name/current/edits):
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Padding position  = -1 (-1 means padding not found)
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Edit log length   = 4
2013-11-26 16:51:27,100 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Read length       = 4
2013-11-26 16:51:27,101 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Corruption length = 0
2013-11-26 16:51:27,101 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2013-11-26 16:51:27,104 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary: |---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2013-11-26 16:51:27,104 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /hadoopstorage/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-11-26 16:51:27,106 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /hadoopstorage/name/current/fsimage of size 119 bytes saved in 0 seconds.
2013-11-26 16:51:27,187 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/hadoopstorage/name/current/edits
2013-11-26 16:51:27,189 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/hadoopstorage/name/current/edits
2013-11-26 16:51:27,244 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-11-26 16:51:27,244 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 520 msecs
2013-11-26 16:51:27,246 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct          = 0.9990000128746033
2013-11-26 16:51:27,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-11-26 16:51:27,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension              = 30000
2013-11-26 16:51:27,251 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 0 and thus the safe blocks: 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 17 msec
2013-11-26 16:51:27,268 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2013-11-26 16:51:27,269 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-11-26 16:51:27,269 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-11-26 16:51:27,283 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-11-26 16:51:27,283 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-11-26 16:51:27,284 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-11-26 16:51:27,284 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-11-26 16:51:27,284 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-11-26 16:51:27,291 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-11-26 16:51:27,311 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2013-11-26 16:51:27,312 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2013-11-26 16:51:27,312 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:695)
2013-11-26 16:51:27,312 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/hadoopstorage/name/current/edits
2013-11-26 16:51:27,314 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/hadoopstorage/name/current/edits
2013-11-26 16:51:27,321 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:265)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:341)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1539)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:569)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:530)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:324)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

core-site.xml

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:23</value>
</property>

</configuration>

hdfs-site.xml

<configuration>
<property>
  <name>dfs.name.dir</name>
  <value>/hadoopstorage/name/</value>
</property>
</configuration>

mapred-site.xml

<configuration>
<property> 
<name>mapred.job.tracker</name> 
<value>localhost:22</value> 
</property>
</configuration>

If anyone has run into a similar error, or can shed some light on the situation, it would be greatly appreciated. Thanks in advance!

1 Answer:

Answer 0 (score: 1):

It looks like you are trying to start the Hadoop services on privileged ports (< 1024). In particular, you are trying to start the JobTracker on port 22, which is the well-known port for SSH. You should avoid binding to ports that are well known to belong to other applications.
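The port numbers taken from the question's config files make this easy to check. A minimal sketch, using the standard Unix cutoff of 1024 for privileged ports:

```shell
# Classify the ports used in the question's configs: on Unix-like systems
# (including OSX), binding to a port below 1024 requires root.
for port in 22 23 9000 9001; do
  if [ "$port" -lt 1024 ]; then
    echo "$port: privileged (root required to bind)"
  else
    echo "$port: unprivileged"
  fi
done
```

Both ports in the question (22 for the JobTracker, 23 for the NameNode RPC address) fall in the privileged range, which matches the `java.net.SocketException: Permission denied` raised from `sun.nio.ch.Net.bind` in the log.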

You can check whether this is the case by running start-all.sh as root. If that fixes it, you can either keep running as root (generally a bad idea) or reconfigure Hadoop to use higher-numbered ports:

core-site.xml

<configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
     </property>
</configuration>

mapred-site.xml

<configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>localhost:9001</value>
     </property>
</configuration>
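After editing, a quick sanity check can confirm that every configured address uses an unprivileged port. This sketch inlines a sample file so it is self-contained; against a real install you would point grep at conf/*-site.xml instead:

```shell
# Extract each localhost:<port> value from a *-site.xml file and confirm the
# port is >= 1024. The sample mirrors the corrected core-site.xml above.
cat > /tmp/sample-core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
grep -o 'localhost:[0-9]*' /tmp/sample-core-site.xml | cut -d: -f2 |
while read -r p; do
  if [ "$p" -ge 1024 ]; then echo "$p: unprivileged, OK"; else echo "$p: PRIVILEGED"; fi
done
```

With 9000 and 9001 every daemon can bind as a regular user, so start-all.sh no longer needs root.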