Restarting nodes that are in the down state

Date: 2019-07-22 11:00:39

Tags: centos slurm

After a power outage, my nodes went into the down state.

sinfo -a

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
partMain  up      infinite      4   down* node[001-004]
part1*    up      infinite      3   down* node[002-004]
part2     up      infinite      1   down* node001

I ran these commands:

 /etc/init.d/slurm stop
 /etc/init.d/slurm start

sinfo -a

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
partMain  up      infinite      4   down node[001-004]
part1*    up      infinite      3   down node[002-004]
part2     up      infinite      1   down node001

How can I bring the nodes back up?


sinfo -R

REASON           USER  TIMESTAMP            NODELIST
Not responding   root  2019-07-23T08:40:25  node[001-004]

$ scontrol update nodename=node001 state=idle
slurm_update error: Invalid user id

$ scontrol update nodename=node[001-004] state=resume
slurm_update error: Invalid user id

$ service --status-all | grep 'slurm'
slurmctld (pid 24000) is running...
slurmdbd (pid 4113) is running...


$ systemctl status -l slurm
● slurm.service - LSB: slurm daemon management
   Loaded: loaded (/etc/rc.d/init.d/slurm; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2019-07-24 13:45:38 CEST; 257ms ago
     Docs: man:systemd-sysv-generator(8)
  Process: 30094 ExecStop=/etc/rc.d/init.d/slurm stop (code=exited, status=1/FAILURE)
  Process: 30061 ExecStart=/etc/rc.d/init.d/slurm start (code=exited, status=0/SUCCESS)
 Main PID: 30069 (code=exited, status=1/FAILURE)

2 answers:

Answer 0 (score: 1):

Try the following after starting the daemons:

scontrol update nodename=node001 state=idle
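
Note that scontrol update has to be run as root or as the configured SlurmUser; run as an ordinary user it fails with the "Invalid user id" error shown in the question. A minimal sketch, assuming sudo access on the controller node and that all four nodes should be returned to service:

$ sudo scontrol update nodename=node[001-004] state=idle
$ sinfo -a   # verify that the nodes have left the down state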

Answer 1 (score: 0):

Check the reason they were marked down with sinfo -R; most probably they will be listed as "unexpectedly rebooted". You can resume them with:

scontrol update nodename=node[001-004] state=resume

The ReturnToService parameter in slurm.conf controls whether compute nodes become active again when they wake up from an unexpected reboot.
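
For reference, a minimal sketch of that setting, assuming the default configuration file location /etc/slurm/slurm.conf (the path may differ on your installation):

# /etc/slurm/slurm.conf
# ReturnToService=0 (default): a node marked DOWN stays DOWN until an
#                    administrator returns it to service manually.
# ReturnToService=1: a node set DOWN because it was not responding
#                    returns to service automatically once it registers
#                    with a valid configuration.
ReturnToService=1

After changing slurm.conf, restart slurmctld (or run scontrol reconfigure) so the new value takes effect.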