I am trying to set up Hadoop in pseudo-distributed mode on a Mac running Mountain Lion. I downloaded Hadoop 1.0.4 and took the following steps, as described in Chuck Lam's "Hadoop in Action":
1) Generate an SSH key pair: I ran ssh-keygen -t rsa
to generate a pair, without setting a passphrase. The key landed in /Users/me/.ssh/id_rsa.pub. I then copied this file to ~/.ssh/authorized_keys. This lets me SSH from my own machine to my own machine without supplying a password.
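For reference, the key generation and authorization steps can be sketched as below; the scratch-directory paths are purely illustrative (on the real machine the default ~/.ssh location is what Hadoop's scripts expect):

```shell
# Generate a passphrase-less RSA key pair (-N "" means empty passphrase)
# in a scratch directory so the example does not touch a real ~/.ssh.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$tmpdir/id_rsa" -q

# Authorize the public key for password-less login to this machine.
cat "$tmpdir/id_rsa.pub" >> "$tmpdir/authorized_keys"
chmod 600 "$tmpdir/authorized_keys"
```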
2) Set JAVA_HOME: I modified conf/hadoop-env.sh to include export JAVA_HOME=/Library/Java/Home,
which I believe is my Java installation directory. (For reference, this directory contains bin, bundle, lib, and man.)
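As an aside, /Library/Java/Home is a symlink Apple has historically provided; a more robust way to locate the JDK home on OS X is the /usr/libexec/java_home helper. A sketch, with the fallback mirroring the path used above:

```shell
# Ask OS X for the active JDK home; fall back to the legacy symlink
# if the helper is unavailable (e.g. on a non-Mac system).
JAVA_HOME=$(/usr/libexec/java_home 2>/dev/null || echo /Library/Java/Home)
echo "JAVA_HOME=$JAVA_HOME"

# Sanity check: a usable Java home contains bin/java.
if [ -x "$JAVA_HOME/bin/java" ]; then
  echo "java binary found"
fi
```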
3) Set up the site conf files: I copy-pasted the configurations suggested in the book. They are:
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation.
</description>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
<description>The host and port that the MapReduce job tracker runs
at.</description>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>The actual number of replications can be specified when the
file is created.</description>
</property>
</configuration>
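One caveat about these configurations, which shows up in the format output further down: with no hadoop.tmp.dir set, the NameNode stores its image under /tmp, which OS X may clear on reboot, forcing a re-format. A hedged sketch of the extra property one could add inside the `<configuration>` element of core-site.xml (the /Users/me/hadoop-data path is an assumption):

```shell
# Print the <property> element that would go inside <configuration>
# in core-site.xml; the value path is illustrative only.
cat <<'EOF'
<property>
  <name>hadoop.tmp.dir</name>
  <value>/Users/me/hadoop-data</value>
  <description>A base for other temporary directories.</description>
</property>
EOF
```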
4) Set up masters and slaves: my conf/masters and conf/slaves files both contain only localhost.
5) Format HDFS: bin/hadoop namenode -format
I got the following output:
12/11/16 13:20:12 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = dhcp-18-111-53-8.dyn.mit.edu/18.111.53.8
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
Re-format filesystem in /tmp/hadoop-me/dfs/name ? (Y or N) Y
12/11/16 13:20:17 INFO util.GSet: VM type = 64-bit
12/11/16 13:20:17 INFO util.GSet: 2% max memory = 39.83375 MB
12/11/16 13:20:17 INFO util.GSet: capacity = 2^22 = 4194304 entries
12/11/16 13:20:17 INFO util.GSet: recommended=4194304, actual=4194304
12/11/16 13:20:17 INFO namenode.FSNamesystem: fsOwner=me
12/11/16 13:20:18 INFO namenode.FSNamesystem: supergroup=supergroup
12/11/16 13:20:18 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/11/16 13:20:18 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/11/16 13:20:18 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/11/16 13:20:18 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/11/16 13:20:18 INFO common.Storage: Image file of size 119 saved in 0 seconds.
12/11/16 13:20:18 INFO common.Storage: Storage directory /tmp/hadoop-me/dfs/name has been successfully formatted.
12/11/16 13:20:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dhcp-18-111-53-8.dyn.mit.edu/18.111.53.8
************************************************************/
6) Launch: bin/start-all.sh
I got the following output:
starting namenode, logging to /Users/me/hadoop-1.0.4/libexec/../logs/hadoop-me-namenode-dhcp-18-111-53-8.dyn.mit.edu.out
localhost: starting datanode, logging to /Users/me/hadoop-1.0.4/libexec/../logs/hadoop-me-datanode-dhcp-18-111-53-8.dyn.mit.edu.out
localhost: starting secondarynamenode, logging to /Users/me/hadoop-1.0.4/libexec/../logs/hadoop-me-secondarynamenode-dhcp-18-111-53-8.dyn.mit.edu.out
starting jobtracker, logging to /Users/me/hadoop-1.0.4/libexec/../logs/hadoop-me-jobtracker-dhcp-18-111-53-8.dyn.mit.edu.out
localhost: starting tasktracker, logging to /Users/me/hadoop-1.0.4/libexec/../logs/hadoop-me-tasktracker-dhcp-18-111-53-8.dyn.mit.edu.out
Now the text claims that I should be able to run jps and get output similar to:
26893 Jps
26832 TaskTracker
26620 SecondaryNameNode
26333 NameNode
26484 DataNode
26703 JobTracker
However, I only get:
71311 Jps
So I think something has gone wrong, but I have no idea where. Any suggestions? Thanks.
Answer 0 (score: 0)
The Hadoop logs will tell you why.
Check /Users/me/hadoop-1.0.4/libexec/../logs/
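Concretely, the .out files named by start-all.sh capture only stdout; the corresponding .log files in the same directory usually contain the stack trace showing why a daemon exited. A sketch (the log directory path assumes the layout from the question):

```shell
# Show the tail of each daemon's .log file; a daemon that failed to
# start typically leaves an exception near the end of its log.
LOGDIR=/Users/me/hadoop-1.0.4/logs
for f in "$LOGDIR"/hadoop-*.log; do
  if [ -f "$f" ]; then
    echo "== $f =="
    tail -n 20 "$f"
  fi
done
```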
I think the problem may be that the permissions on your .ssh
directory are incorrect.
Try chmod 700 ~/.ssh; chmod 600 ~/.ssh/id_rsa
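sshd silently refuses key-based login when the .ssh directory or key files are group- or world-readable, so tightening authorized_keys as well may be worthwhile. A sketch of the full fix plus a verification step:

```shell
# Tighten permissions on the SSH directory and key material; sshd
# ignores keys it considers too permissive.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 600 ~/.ssh/authorized_keys

# Verify: BatchMode forbids password prompts, so this succeeds only
# if key-based login actually works.
ssh -o BatchMode=yes localhost true && echo "password-less SSH works"
```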