How do I start a Datanode? (cannot find the start-dfs.sh script)

Date: 2015-10-23 01:51:20

Tags: hadoop hortonworks-data-platform

We are setting up automated deployment on headless systems, so using the GUI is not an option.

Where is the start-dfs.sh script for HDFS in the Hortonworks Data Platform? CDH / Cloudera packages these files under the hadoop/sbin directory, but when we search for them under HDP, they are nowhere to be found:

$ pwd
/usr/hdp/current

Which scripts do exist in HDP?

[stack@s1-639016 current]$ find -L . -name \*.sh
./hadoop-hdfs-client/sbin/refresh-namenodes.sh
./hadoop-hdfs-client/sbin/distribute-exclude.sh
./hadoop-hdfs-datanode/sbin/refresh-namenodes.sh
./hadoop-hdfs-datanode/sbin/distribute-exclude.sh
./hadoop-hdfs-nfs3/sbin/refresh-namenodes.sh
./hadoop-hdfs-nfs3/sbin/distribute-exclude.sh
./hadoop-hdfs-secondarynamenode/sbin/refresh-namenodes.sh
./hadoop-hdfs-secondarynamenode/sbin/distribute-exclude.sh
./hadoop-hdfs-namenode/sbin/refresh-namenodes.sh
./hadoop-hdfs-namenode/sbin/distribute-exclude.sh
./hadoop-hdfs-journalnode/sbin/refresh-namenodes.sh
./hadoop-hdfs-journalnode/sbin/distribute-exclude.sh
./hadoop-hdfs-portmap/sbin/refresh-namenodes.sh
./hadoop-hdfs-portmap/sbin/distribute-exclude.sh
./hadoop-client/sbin/hadoop-daemon.sh
./hadoop-client/sbin/slaves.sh
./hadoop-client/sbin/hadoop-daemons.sh
./hadoop-client/etc/hadoop/hadoop-env.sh
./hadoop-client/etc/hadoop/kms-env.sh
./hadoop-client/etc/hadoop/mapred-env.sh
./hadoop-client/conf/hadoop-env.sh
./hadoop-client/conf/kms-env.sh
./hadoop-client/conf/mapred-env.sh
./hadoop-client/libexec/kms-config.sh
./hadoop-client/libexec/init-hdfs.sh
./hadoop-client/libexec/hadoop-layout.sh
./hadoop-client/libexec/hadoop-config.sh
./hadoop-client/libexec/hdfs-config.sh
./zookeeper-client/conf/zookeeper-env.sh
./zookeeper-client/bin/zkCli.sh
./zookeeper-client/bin/zkCleanup.sh
./zookeeper-client/bin/zkServer-initialize.sh
./zookeeper-client/bin/zkEnv.sh
./zookeeper-client/bin/zkServer.sh

Note: ZERO start/stop .sh scripts.

In particular, I am interested in the start-dfs.sh script that starts the namenode(s), journalnodes, and datanodes.

2 answers:

Answer 0 (score: 1)

How to start a DataNode:

su - hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode";

Github - Hortonworks Start Scripts
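The same per-daemon approach extends to the other HDFS roles, since hadoop-daemon.sh is what start-dfs.sh invokes internally. A minimal sketch that prints the commands for each role rather than running them (the hadoop-daemon.sh path under /usr/hdp/current and the `hdfs` service user are assumptions based on a typical HDP layout; verify them on your cluster):

```shell
#!/bin/sh
# Sketch: start HDFS daemons one by one with hadoop-daemon.sh,
# mirroring what start-dfs.sh does internally.
# Assumed paths -- verify them on your cluster before use.
HADOOP_DAEMON=/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh
CONF_DIR=/etc/hadoop/conf

start_role() {
    # Print the command that would start the given role as the hdfs user.
    role="$1"
    echo "su - hdfs -c \"$HADOOP_DAEMON --config $CONF_DIR start $role\""
    # Remove the echo above and uncomment below to actually start the daemon:
    # su - hdfs -c "$HADOOP_DAEMON --config $CONF_DIR start $role"
}

# Order matters: journalnodes before the namenode in an HA setup.
for role in journalnode namenode datanode; do
    start_role "$role"
done
```

Run it on each node with only the roles that node hosts; the printed commands can be pasted into a deployment script once the paths are confirmed.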

Update:

Decided to go hunting for it myself.

  1. Spun up single nodes with Ambari, installing HDP 2.2 (a) and HDP 2.3 (b)
  2. sudo find / -name \*.sh | grep start
  3. Found:

    (a) /usr/hdp/2.2.8.0-3150/hadoop/src/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh

    Oddly, it does not exist in /usr/hdp/current, where it should be symlinked.

    (b) /hadoop/yarn/local/filecache/10/mapreduce.tar.gz/hadoop/sbin/start-dfs.sh
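If you want the stock script available under /usr/hdp/current as the update suggests it should be, a symlink sketch follows. The source path is the one found in step 3(a) for that specific HDP 2.2 build, and the link target location is an assumption; the script may also depend on sibling helpers (hadoop-config.sh etc.), so treat this as a starting point, not a guaranteed fix:

```shell
#!/bin/sh
# Sketch: expose the start-dfs.sh found in the versioned src tree
# via /usr/hdp/current, mirroring how HDP symlinks other tools.
# Both paths below are assumptions -- adjust to your HDP build.
SRC=/usr/hdp/2.2.8.0-3150/hadoop/src/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
LINK=/usr/hdp/current/hadoop-client/sbin/start-dfs.sh

# Print the command instead of running it, so the sketch is safe to execute:
echo "sudo ln -s $SRC $LINK"
# Uncomment to actually create the link (requires root):
# sudo ln -s "$SRC" "$LINK"
```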

Answer 1 (score: 0)

The recommended way to manage a Hadoop cluster is through the admin panel. Since you are using the Hortonworks distribution, it makes more sense to use Ambari.
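Ambari also exposes a REST API, so a headless deployment can start services without touching the GUI. A hedged sketch that builds (and prints, rather than executes) a request asking Ambari to bring HDFS to the STARTED state; the host, port, cluster name, and admin credentials are all placeholders you would substitute for your environment:

```shell
#!/bin/sh
# Sketch: start the HDFS service through the Ambari REST API.
# Host, cluster name, and credentials below are placeholders.
AMBARI="http://ambari-host.example.com:8080"
CLUSTER="mycluster"
URL="$AMBARI/api/v1/clusters/$CLUSTER/services/HDFS"
BODY='{"RequestInfo":{"context":"Start HDFS via REST"},"Body":{"ServiceInfo":{"state":"STARTED"}}}'

# Print the command instead of running it, so the sketch is safe to execute:
echo curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d "$BODY" "$URL"
```

A matching PUT with `"state":"INSTALLED"` stops the service, which makes the same pattern usable for scripted restarts.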