I set up Hadoop in single-node mode on my laptop. Info: Ubuntu 12.10, Oracle JDK 1.7, Hadoop installed from a .deb file. Locations: /etc/hadoop and /usr/share/hadoop.
I configured /usr/share/hadoop/templates/conf/core-site.xml and added two properties:
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
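For this configuration to work, the hadoop.tmp.dir path has to exist and be writable by the user that runs the daemons; a minimal sketch, assuming the hduser user seen in the logs below and a hadoop group (an assumption):
# Create the Hadoop temp directory configured above and hand it
# to the daemon user (hduser:hadoop is an assumption).
sudo mkdir -p /app/hadoop/tmp
sudo chown hduser:hadoop /app/hadoop/tmp
sudo chmod 750 /app/hadoop/tmp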
In hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
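As the description notes, replication can also be set per file once the cluster is up; a small illustration (the file path is hypothetical):
# Explicitly set the replication factor of an existing HDFS file to 1
# (requires HDFS to be running; the path is just an example).
hadoop fs -setrep -w 1 /user/hduser/somefile.txt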
In mapred-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
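Once the JobTracker is up, this setting can be sanity-checked from the shell; a quick probe against the localhost:9001 value above (run as root so netstat can show process names):
# Check that something is listening on the configured JobTracker
# port (9001); no output means the JobTracker is not running.
netstat -plten | grep 9001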
When I start everything from the command line:
hduser@sepdau:~$ start-all.sh
starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
localhost: starting datanode, logging to /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out
localhost: starting secondarynamenode, logging to /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-sepdau.com.out
starting jobtracker, logging to /var/log/hadoop/hduser/hadoop-hduser-jobtracker-sepdau.com.out
localhost: starting tasktracker, logging to /var/log/hadoop/hduser/hadoop-hduser-tasktracker-sepdau.com.out
But when I check the processes with jps:
hduser@sepdau:~$ jps
13725 Jps
And more:
root@sepdau:/home/sepdau# netstat -plten | grep java
tcp6 0 0 :::8080 :::* LISTEN 117 9953 1316/java
tcp6 0 0 :::53976 :::* LISTEN 117 16755 1316/java
tcp6 0 0 127.0.0.1:8700 :::* LISTEN 1000 786271 8323/java
tcp6 0 0 :::59012 :::* LISTEN 117 16756 1316/java
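None of these sockets are on the usual Hadoop 1.x ports (9000, 9001, 50070, 50030), so the daemons do not appear to be running at all; the .out files named in the start-up messages above would be the first place to look, e.g.:
# Inspect the tail of the daemon .out files for the actual failure
# (paths taken from the start-all.sh output above).
tail -n 50 /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
tail -n 50 /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out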
When I run stop-all.sh:
hduser@sepdau:~$ stop-all.sh
no jobtracker to stop
localhost: no tasktracker to stop
no namenode to stop
localhost: no datanode to stop
localhost: no secondarynamenode to stop
In my hosts file:
hduser@sepdau:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 sepdau.com
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
The masters and slaves files both contain just localhost.
Here are some logs:
hduser@sepdau:/home/sepdau$ start-all.sh
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
/usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-namenode.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting datanode, logging to /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-datanode.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting secondarynamenode, logging to /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-secondarynamenode.pid: No such file or directory
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting jobtracker, logging to /var/log/hadoop/hduser/hadoop-hduser-jobtracker-sepdau.com.out
/usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-jobtracker.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting tasktracker, logging to /var/log/hadoop/hduser/hadoop-hduser-tasktracker-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-tasktracker.pid: No such file or directory
I tried with the root user as well, but it has the same problem.
Where am I going wrong here? And how do I connect to Eclipse with the Hadoop plugin? Thanks in advance.
Answer 0 (score: 2):
Try adding
<property>
<name>dfs.name.dir</name>
<value>/home/abhinav/hdfs</value>
</property>
to hdfs-site.xml, and make sure that this directory exists.
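A minimal sketch of the matching shell step (using the answer's example path, which would be adjusted to the local user):
# Create the dfs.name.dir from the snippet above so the NameNode
# can write its metadata there.
mkdir -p /home/abhinav/hdfs
chmod 755 /home/abhinav/hdfs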
I wrote a small tutorial for this; see if it helps: http://blog.abhinavmathur.net/2013/01/experience-with-setting-multinode.html
Answer 1 (score: 0):
You can set the paths under which the PID files and logs are created by editing the file hadoop-env.sh, which is stored in the conf folder:
export HADOOP_LOG_DIR=/home/username/hadoop-1x/logs
export HADOOP_PID_DIR=/home/username/pids
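These directories have to exist before start-all.sh runs; a sketch of the matching shell steps (the paths are the answer's examples), plus the alternative of keeping the default PID location by pre-creating it with the right owner (an assumption based on the question's logs):
# Create the custom log/PID directories named above, then restart.
mkdir -p /home/username/hadoop-1x/logs /home/username/pids
stop-all.sh && start-all.sh
# Alternative: keep the default /var/run/hadoop PID location but
# make it writable for hduser (hduser:hadoop is an assumption).
sudo mkdir -p /var/run/hadoop
sudo chown hduser:hadoop /var/run/hadoop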
Answer 2 (score: 0):
Modify your hdfs-site.xml:
<property>
<name>dfs.name.dir</name>
<value>/home/user_to_run_hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/user_to_run_hadoop/hdfs/data</value>
</property>
Make sure to create the directory hdfs under /home/user_to_run_hadoop, with name and data subdirectories inside it. After that you need to run chmod -R 755 ./hdfs/ and then path_to_hadoop_home/bin/hadoop namenode -format.
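Put together as commands, that sequence might look like this (user_to_run_hadoop and path_to_hadoop_home are the answer's placeholders):
# Lay out the name/data directories from the XML above, open up
# their permissions, and reformat the NameNode.
mkdir -p /home/user_to_run_hadoop/hdfs/name /home/user_to_run_hadoop/hdfs/data
chmod -R 755 /home/user_to_run_hadoop/hdfs/
path_to_hadoop_home/bin/hadoop namenode -format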
Answer 3 (score: 0):
Restart the terminal and format the NameNode first.
In some rare cases someone has changed the start-all.sh file in Hadoop's bin folder; check that once.
Also check once whether the .bashrc configuration is correct.
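For that .bashrc check, the entries would typically look something like this (illustrative values; JAVA_HOME and HADOOP_HOME depend on the local install):
# Illustrative ~/.bashrc entries for a Hadoop 1.x setup; adjust
# the paths to the actual JDK and Hadoop install locations.
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export HADOOP_HOME=/usr/share/hadoop
export PATH=$PATH:$HADOOP_HOME/bin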