How does YARN decide which jobs stay in the ACCEPTED state

Asked: 2019-08-16 11:52:03

Tags: apache-spark hadoop yarn

I am trying to understand which YARN configuration options control how many jobs sit in the ACCEPTED state. I am using the Capacity Scheduler.

In the cluster I see roughly 10 times as many jobs in the ACCEPTED state as in the RUNNING state. I would like the number of accepted jobs to be only 2-3 times the total number of running jobs.

The following is my YARN scheduler configuration.

yarn.scheduler.capacity.default.minimum-user-limit-percent=100
yarn.scheduler.capacity.maximum-am-resource-percent=0.33
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=0
yarn.scheduler.capacity.*.accessible-node-labels=*
yarn.scheduler.capacity.*.acl_administer_queue=*
yarn.scheduler.capacity.*.capacity=100

yarn.scheduler.capacity.*.default.acl_administer_jobs=*
yarn.scheduler.capacity.*.default.acl_submit_applications=*
yarn.scheduler.capacity.*.default.capacity=50
yarn.scheduler.capacity.*.default.maximum-capacity=100
yarn.scheduler.capacity.*.default.state=RUNNING
yarn.scheduler.capacity.*.default.user-limit-factor=10

yarn.scheduler.capacity.*.queues=default,tft
yarn.scheduler.capacity.*.tft.capacity=50
yarn.scheduler.capacity.*.tft.maximum-capacity=100
yarn.scheduler.capacity.*.tft.user-limit-factor=10
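
For context, here is a minimal sketch, in the same key=value style as the dump above, of the two property families most directly tied to the ACCEPTED backlog. It assumes the root queue with the default and tft children shown in the question (the dump prints * where the queue path root would normally appear), and the values are illustrative only. An application stays in ACCEPTED until the scheduler can allocate its ApplicationMaster container: maximum-applications caps how many applications may be accepted at all (pending plus running), while maximum-am-resource-percent bounds the share of resources that ApplicationMasters may use, which in turn limits how many accepted applications can move to RUNNING.

# Illustrative values, not taken from the question's configuration.
# Cluster-wide cap on accepted (pending + running) applications:
yarn.scheduler.capacity.maximum-applications=500
# Per-queue overrides of the same cap:
yarn.scheduler.capacity.root.default.maximum-applications=200
yarn.scheduler.capacity.root.tft.maximum-applications=200
# Share of resources ApplicationMasters may occupy; a higher value
# lets more accepted applications start RUNNING concurrently:
yarn.scheduler.capacity.maximum-am-resource-percent=0.33
yarn.scheduler.capacity.root.default.maximum-am-resource-percent=0.5

Lowering maximum-applications (cluster-wide or per queue) is the direct way to shrink the ACCEPTED backlog, whereas raising the AM resource percent lets more of the already-accepted applications run at once.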

0 Answers:

There are no answers yet.