When I start H2O on my CDH cluster, I get the following error. I downloaded everything from the website and followed the tutorial. The command I ran was:
hadoop jar h2odriver.jar -nodes 2 -mapperXmx 1g -output hdfsOutputDirName
It shows that no containers are being used, and it is not clear to me where these settings are supposed to be made in Hadoop. I have allocated memory in every setting I could find. The 0.0 GB of memory used makes no sense; why are the containers not using any memory? Is the cluster even running?
----- YARN cluster metrics -----
Number of YARN worker nodes: 3
----- Nodes -----
Node: http://data-node-3:8042 Rack: /default, RUNNING, 1 containers used, 1.0 / 6.0 GB used, 1 / 4 vcores used
Node: http://data-node-1:8042 Rack: /default, RUNNING, 0 containers used, 0.0 / 6.0 GB used, 0 / 4 vcores used
Node: http://data-node-2:8042 Rack: /default, RUNNING, 0 containers used, 0.0 / 6.0 GB used, 0 / 4 vcores used
----- Queues -----
Queue name: root.default
Queue state: RUNNING
Current capacity: 0.00
Capacity: 0.00
Maximum capacity: -1.00
Application count: 0
Queue 'root.default' approximate utilization: 0.0 / 0.0 GB used, 0 / 0 vcores used
----------------------------------------------------------------------
WARNING: Job memory request (2.2 GB) exceeds queue available memory capacity (0.0 GB)
WARNING: Job virtual cores request (2) exceeds queue available virtual cores capacity (0)
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_1462681033282_0008'
Answer 0 (score: 3)
You should configure the default queue so that it has enough available resources to run a 2-node cluster.
See the warnings:
WARNING: Job memory request (2.2 GB) exceeds queue available memory capacity (0.0 GB)
WARNING: Job virtual cores request (2) exceeds queue available virtual cores capacity (0)
Check the YARN documentation, for example the Capacity Scheduler settings for queue capacity and maximum available resources: https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
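For reference, a minimal sketch of the relevant capacity-scheduler.xml entries (property names are from the Capacity Scheduler documentation; the 100 values are an illustrative assumption that gives root.default the full cluster, not taken from the question):

<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default</value>
</property>
<property>
  <!-- assumed: give the default queue 100% of the cluster capacity -->
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>100</value>
</property>
<property>
  <!-- assumed: allow the queue to grow up to the full cluster -->
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.state</name>
  <value>RUNNING</value>
</property>

With a non-zero queue capacity, the "Queue 'root.default' approximate utilization: 0.0 / 0.0 GB" line should report real headroom and the two warnings should disappear.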
Answer 1 (score: 0)
I made the following changes in the Cloudera Manager YARN configuration:
Setting                                    Value
yarn.scheduler.maximum-allocation-vcores   8
yarn.nodemanager.resource.cpu-vcores       4
yarn.scheduler.maximum-allocation-mb       16 GB
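If you are not using Cloudera Manager, the same settings map to yarn-site.xml roughly as follows (a sketch, assuming 16 GB is expressed as 16384 MB; restart the ResourceManager and NodeManagers after changing them):

<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>
</property>
<property>
  <!-- 16 GB expressed in MB -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>
</property>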