I am trying to set up Hadoop 2.7 on a MacBook, following all the steps detailed here. Please also find my configuration files below.
.bashrc
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_261.jdk/Contents/Home
export SPARK_HOME=/Users/praveen/Documents/Dev/Apache-Spark/spark-3.0.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
export HADOOP_HOME=/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_MAPRED_HOME=/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2
export HADOOP_COMMON_HOME=/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2
export HADOOP_HDFS_HOME=/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2
export YARN_HOME=/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2
export PATH=$PATH:/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2/bin
PDSH_RCMD_TYPE=ssh
export HADOOP_SSH_OPTS="-p "
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2/lib/native
alias hstart="/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2/sbin/start-all.sh"
alias hstop="/Users/praveen/Documents/Dev/hadoop/hadoop-2.7.2/sbin/stop-all.sh"
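To confirm the variables take effect, I reload the file and check that the right build is on the PATH (just a sanity check; hadoop version prints the build info):

source ~/.bashrc
echo $HADOOP_HOME
hadoop version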
/etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
::1 localhost
127.0.0.1 localhost
255.255.255.255 broadcasthost
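To rule out name resolution, I verified that localhost resolves to the loopback address:

ping -c 1 localhost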
core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
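As far as I can tell, the NameNode address is picked up from this file; hdfs getconf reads the effective configuration (assuming HADOOP_CONF_DIR points at the directory above):

hdfs getconf -confKey fs.defaultFS
# should print hdfs://localhost:9000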
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permission</name>
    <value>false</value>
  </property>
</configuration>
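(A side note from double-checking the reference docs: the Hadoop 2.x property is dfs.permissions.enabled, not dfs.permission, so the second setting above is probably ignored; I don't think it is related to the startup error, though.) The effective replication factor can be verified the same way:

hdfs getconf -confKey dfs.replication
# should print 1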
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9010</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
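As I understand it, mapred.job.tracker is the legacy MR1 key and is ignored once mapreduce.framework.name is set to yarn, so it should be harmless here, though it could probably be removed.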
I searched for a solution and updated hadoop-env.sh with the details below, which were suggested in several other Stack Overflow posts.
hadoop-env.sh
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_261.jdk/Contents/Home
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_SSH_OPTS="-p"
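To see how these options are consumed, I also tried invoking ssh by hand the way the sbin scripts do (roughly; slaves.sh expands $HADOOP_SSH_OPTS right before the hostname):

ssh $HADOOP_SSH_OPTS localhost echo ok
# with HADOOP_SSH_OPTS="-p", ssh treats the next argument as the port value

This reproduces the same Bad port complaint, so the option handling may be involved, but I don't know what value the scripts expect here.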
Additionally, I have also set up SSH keys, and ssh localhost works fine.
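For completeness, the keys were generated the standard way described in the Hadoop single-node setup docs (sketch; assumes the default ~/.ssh location):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys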
The hdfs namenode -format command completes successfully and shows the following output:
hdfs namenode -format
20/08/14 23:23:54 INFO common.Storage: Storage directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name has been successfully formatted.
20/08/14 23:23:54 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/08/14 23:23:54 INFO util.ExitUtil: Exiting with status 0
20/08/14 23:23:54 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Praveen.local/192.168.29.199
************************************************************/
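After formatting, the storage directory is populated as expected; current/ contains the VERSION file, seen_txid, and the initial fsimage:

ls /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current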
However, when I then try to start DFS, it fails every time with the following messages:
sbin/start-dfs.sh
20/08/14 23:24:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: Bad port 'localhost'
localhost: Bad port 'localhost'
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Bad port '0.0.0.0'
20/08/14 23:24:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Every time I get these errors: localhost: Bad port 'localhost' and 0.0.0.0: Bad port '0.0.0.0'. I have tried all the different solutions detailed in every post I could find, restarted the system several times, and cleared any cached settings where there were any. Unfortunately, I have not been able to resolve the issue.
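Among other things, I double-checked what the startup scripts would actually see:

which pdsh                # make sure pdsh is not shadowing plain ssh
echo "$HADOOP_SSH_OPTS"   # the options slaves.sh splices into the ssh call
jps                       # confirm no stale NameNode/DataNode is running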
Any help or suggestions on resolving this and getting the namenode and datanode started would be much appreciated.