DfsBroker fails to start when setting up Hypertable to run on Hadoop

Date: 2014-04-19 08:53:00

Tags: hadoop hypertable

I am trying to install Hypertable on Hadoop following the official documentation. First, I deployed CDH4 in pseudo-distributed mode on a CentOS 6.5 32-bit node.

Then I installed Hypertable on Hadoop, again following Hypertable's official documentation.

When I run

cap start -f Capfile.cluster

I get an error saying the DFS Broker did not come up:

 * executing `start'
 ** transaction: start
  * executing `start_servers'
  * executing `start_hyperspace'
  * executing "/opt/hypertable/current/bin/start-hyperspace.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg"
    servers: ["master"]
    [master] executing command
 ** [out :: master] Started Hyperspace
    command finished in 6543ms
  * executing `start_master'
  * executing "/opt/hypertable/current/bin/start-dfsbroker.sh hadoop --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-monitoring.sh"
    servers: ["master"]
    [master] executing command
 ** [out :: master] DFS broker: available file descriptors: 65536
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] ERROR: DFS Broker (hadoop) did not come up
    command finished in 129114ms
failed: "sh -c '/opt/hypertable/current/bin/start-dfsbroker.sh hadoop --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-monitoring.sh'" on master

I checked DfsBroker.hadoop.log under /opt/hypertable/0.9.7.16 and found this:

/opt/hypertable/current/bin/jrun: line 113: exec: java: not found

But I have already set JAVA_HOME, and I verified that Java runs correctly with

java -version

I also tried running jrun on its own, and it does not report "exec: java: not found".
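A possible explanation (my own guess, not something stated in the post or in the answer below): cap runs the start scripts over ssh through a non-interactive "sh -c", so a JAVA_HOME and PATH exported only in an interactive profile (for example ~/.bash_profile) may not be visible to start-dfsbroker.sh and jrun when cap launches them. A quick way to check what that environment actually sees (assuming passwordless ssh to the "master" host, as the Capfile implies):

# check java visibility in the same kind of non-interactive shell that cap uses
ssh master 'echo "JAVA_HOME=$JAVA_HOME"; command -v java || echo "java not on PATH"'

If java is missing there but present in an interactive login shell, the broker start script would inherit the poorer environment, which would match the "exec: java: not found" line in jrun's log.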

I have found similar questions via Google,

but I have already tried every solution I could find. For example, running

/opt/hypertable/current/bin/set-hadoop-distro.sh cdh4

prints

Hypertable successfully configured for Hadoop cdh4

So I would really appreciate it if anyone could give me a hint about this problem.

1 Answer:

Answer 0 (score: 0):

Before starting the cluster, you have to run:

cap fhsize -f Capfile.cluster

Then you can check whether all directories have been set up correctly:

ls -laF /opt/hypertable/current/lib/java/*.jar

and the Java version check should also work:

/opt/hypertable/current/bin/jrun -version

See the quick start for more information.
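If jrun still reports "exec: java: not found" after that, one workaround that often helps (my own suggestion, not part of the original answer, and assuming JAVA_HOME points at the JDK install on the master node) is to make the java binary visible to non-interactive shells by linking it into a directory that is always on PATH:

# make java resolvable regardless of which profile gets sourced
sudo ln -s "$JAVA_HOME/bin/java" /usr/bin/java

# or, on CentOS, register it through the alternatives system
sudo alternatives --install /usr/bin/java java "$JAVA_HOME/bin/java" 1

After that, running /opt/hypertable/current/bin/jrun -version over ssh (the way cap invokes it) should be able to locate java.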