start-dfs.sh not working - localhost: Bad port 'localhost' (Hadoop 2.7.2)

Date: 2017-09-09 12:24:16

Tags: hadoop2

When I try the command hadoop version, it works fine. The hadoop namenode -format command also works fine. But the start-dfs.sh command does not work; I get:

Starting namenodes on [localhost]
localhost: Bad port 'localhost'
localhost: Bad port 'localhost'
Starting secondary namenodes [0.0.0.0]

Please take a look at the configuration files below. Thanks.

core-site.xml

 <configuration>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:9000</value>
   </property>
 </configuration>
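
As an aside, fs.default.name is the deprecated Hadoop 1.x name for this key; Hadoop 2.x still accepts it through its deprecation mapping, but the current key is fs.defaultFS. An equivalent, non-deprecated core-site.xml would be:

 <configuration>
   <property>
     <name>fs.defaultFS</name>
     <value>hdfs://localhost:9000</value>
   </property>
 </configuration>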

hdfs-site.xml

 <configuration>
   <property> 
     <name>dfs.replication</name> 
     <value>1</value> 
   </property>
   <property> 
     <name>dfs.permission</name> 
     <value>false</value> 
   </property>
   <property> 
     <name>dfs.namenode.name.dir</name> 
     <value>/home/.../hadoop-2.7.2/hadoop2_data/hdfs/namenode</value> 
   </property>
   <property> 
     <name>dfs.datanode.data.dir</name> 
     <value>/home/.../hadoop-2.7.2/hadoop2_data/hdfs/datanode</value> 
   </property>
 </configuration>
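
Note that dfs.permission is not a property name Hadoop 2.x recognizes, so it is silently ignored; the key that actually disables permission checking is dfs.permissions.enabled. If that behavior is intended, the property would read:

   <property>
     <name>dfs.permissions.enabled</name>
     <value>false</value>
   </property>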

yarn-site.xml

 <configuration>
   <property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce_shuffle</value>
   </property>
   <property>
     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
</configuration>

hadoop-env.sh

 export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

mapred-site.xml

 <configuration>
   <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
   </property>
 </configuration>

/etc/hosts

127.0.0.1   localhost
127.0.1.1   arun

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

.bashrc

#adding this

export HADOOP_HOME=/home/arun/my_work/hadoop/hadoop-2.7.2
export HADOOP_CONF_DIR=/home/arun/my_work/hadoop/hadoop-2.7.2/etc/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME 
export HADOOP_COMMON_HOME=$HADOOP_HOME 
export HADOOP_HDFS_HOME=$HADOOP_HOME 
export YARN_HOME=$HADOOP_HOME 
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"


export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH="$PATH:/usr/lib/jvm/java-7-openjdk-amd64/bin"


export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin 
export HADOOP_PID_DIR=/home/.../hadoop-2.7.2/hadoop2_data/hdfs/pid
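
After editing .bashrc, the new variables only take effect in a fresh shell or after sourcing the file, so a quick sanity check along these lines helps before rerunning the scripts:

 source ~/.bashrc
 echo $HADOOP_HOME     # should print /home/arun/my_work/hadoop/hadoop-2.7.2
 which start-dfs.sh    # should resolve inside $HADOOP_HOME/sbin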

Running the command ./sbin/start-dfs.sh produces the following errors:

Starting namenodes on [localhost]
localhost: Bad port 'localhost'
localhost: Bad port 'localhost'
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Bad port '0.0.0.0'

3 Answers:

Answer 0 (score: 0)

I faced the same problem and solved it with the following steps:

1) ssh localhost should get a response. If it does not, install ssh, close all terminals, restart ssh, and then run start-dfs.sh from /etc/sbin (a minimal key-setup sketch follows this list).
2) Check for HADOOP_OPTS=-Djava.net.preferIPv4Stack=true in hadoop-env.sh.
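
For step 1, a minimal sketch of enabling passwordless ssh to localhost (assuming the OpenSSH client and server are installed):

 ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa         # skip if a key already exists
 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 chmod 600 ~/.ssh/authorized_keys
 ssh localhost                                    # should log in without a password prompt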

Answer 1 (score: 0)

Sometimes Hadoop keeps old configuration files in its cache; I have run into this kind of problem several times. Even though your configuration files look fine to me, I suspect that you may originally have put 'localhost' in place of the port number by mistake and tried to start HDFS with that. You fixed the configuration later and tried to restart HDFS, but the old configuration was still cached. Restarting the server should help.
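
A gentler alternative to a full reboot, assuming the standard scripts, is to stop the daemons cleanly, confirm nothing stale survives, and start again:

 $HADOOP_HOME/sbin/stop-dfs.sh
 jps    # no NameNode, DataNode, or SecondaryNameNode should remain
 $HADOOP_HOME/sbin/start-dfs.sh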

Answer 2 (score: 0)

Try checking HADOOP_SSH_OPTS in hadoop-env.sh and elsewhere. I had the same problem, and the cause was an unfinished parameter in HADOOP_SSH_OPTS, like:

export HADOOP_SSH_OPTS="-p"
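
This matches the error text exactly: the start scripts pass HADOOP_SSH_OPTS straight through to ssh, so a bare -p makes ssh consume the next word, the hostname, as its port argument, hence Bad port 'localhost'. The fix is to give -p its value or drop the option; for the default ssh port that would be:

 # supply the port that -p expects (22 is just the usual default):
 export HADOOP_SSH_OPTS="-p 22"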