Running parallel jobs with the --array parameter in SLURM

Date: 2015-02-12 20:20:43

Tags: slurm

I'm trying to learn the SLURM system, but I'm having some trouble understanding it. I want to run a bunch of jobs in parallel using the --array parameter with sbatch, spread across multiple nodes, but judging by the timestamps they all seem to be running on the same node.

The sbatch command I'm using:

sbatch  -N 10 -a 0-19 --cpus-per-task 10 test.sh

The test.sh file being run:

#!/usr/bin/env bash

#SBATCH -o test_%a.out
#SBATCH -p all.q
#SBATCH --time=1:00:00

srun --cpus-per-task 10 -k --exclusive --ntasks 1 -N 1 echo "`date ` array_index: $SLURM_ARRAY_TASK_ID node: $SLURM_NODEID requested nodes: $SLURM_NNODES  `sleep 3`" 

The output file:

Thu Feb 12 19:51:28 UTC 2015 array_index: 0 node: 0 requested nodes: 10
Thu Feb 12 19:51:45 UTC 2015 array_index: 10 node: 0 requested nodes: 10
Thu Feb 12 19:51:45 UTC 2015 array_index: 11 node: 0 requested nodes: 10
Thu Feb 12 19:51:49 UTC 2015 array_index: 12 node: 0 requested nodes: 10
Thu Feb 12 19:51:49 UTC 2015 array_index: 13 node: 0 requested nodes: 10
Thu Feb 12 19:51:52 UTC 2015 array_index: 14 node: 0 requested nodes: 10
Thu Feb 12 19:51:52 UTC 2015 array_index: 15 node: 0 requested nodes: 10
Thu Feb 12 19:51:56 UTC 2015 array_index: 16 node: 0 requested nodes: 10
Thu Feb 12 19:51:56 UTC 2015 array_index: 17 node: 0 requested nodes: 10
Thu Feb 12 19:51:59 UTC 2015 array_index: 18 node: 0 requested nodes: 10
Thu Feb 12 19:51:59 UTC 2015 array_index: 19 node: 0 requested nodes: 10
Thu Feb 12 19:51:28 UTC 2015 array_index: 1 node: 0 requested nodes: 10
Thu Feb 12 19:51:32 UTC 2015 array_index: 2 node: 0 requested nodes: 10
Thu Feb 12 19:51:32 UTC 2015 array_index: 3 node: 0 requested nodes: 10
Thu Feb 12 19:51:35 UTC 2015 array_index: 4 node: 0 requested nodes: 10
Thu Feb 12 19:51:35 UTC 2015 array_index: 5 node: 0 requested nodes: 10
Thu Feb 12 19:51:39 UTC 2015 array_index: 6 node: 0 requested nodes: 10
Thu Feb 12 19:51:39 UTC 2015 array_index: 7 node: 0 requested nodes: 10
Thu Feb 12 19:51:42 UTC 2015 array_index: 8 node: 0 requested nodes: 10
Thu Feb 12 19:51:42 UTC 2015 array_index: 9 node: 0 requested nodes: 10

2 answers:

Answer 0 (score: 0):

SLURM's job array support finishes one batch job first and then moves on to the next. So here, the first batch script (say $JOB_ID.0) will complete (it contains only a single srun command), then the second will start, and so on. These are effectively serial jobs.

Instead, you can have a single batch job containing multiple srun commands. That will span as many nodes as you want.
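A minimal sketch of that approach: one batch job requests all the nodes up front, then launches several srun job steps in the background and waits for them. The flag values (10 nodes, 10 steps, 10 CPUs per task) mirror the question; the hostname echo and the script itself are illustrative, not from the original answer.

```shell
#!/usr/bin/env bash
#SBATCH -N 10               # request ten nodes for the single batch job
#SBATCH --ntasks 10
#SBATCH --cpus-per-task 10
#SBATCH --time=1:00:00

# Launch ten job steps; --exclusive keeps each step on its own resources,
# and the trailing & lets the steps run concurrently on different nodes.
for i in $(seq 0 9); do
    srun -N 1 --ntasks 1 --exclusive \
        bash -c "echo \"\$(date) step $i node: \$(hostname)\"; sleep 3" &
done
wait    # block until every background step has finished
```

Submitted once with plain `sbatch`, this runs all ten steps concurrently inside one allocation rather than as separate array elements.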

Answer 1 (score: 0):

Here is a simple script that spreads array jobs across nodes. The trick is to assign multiple cores to a single task (the script below uses 16 cores, so a 64-core machine will get 4 tasks; change this as needed). I named the file "job_array.sbatch" and it can be invoked with "sbatch -a 1-20 job_array.sbatch" (or whatever array elements you want to use):

#!/bin/bash
#
# invoke using sbatch -a 1-20 job_array.sbatch
#
#SBATCH -n 16 # this requests 16 cores per task, which will effectively spread the job across nodes
#SBATCH -N 1 # on one machine
#SBATCH -J job_array
#SBATCH -t 00:00:30
#
date
echo ""
echo "job_array.sbatch"
echo "  Run several instances of a job using a single script."
#
#  Each job has values of certain environment variables.
#
echo "  SLURM_ARRAY_JOB_ID = " $SLURM_ARRAY_JOB_ID
echo "  SLURM_ARRAY_TASK_ID = " $SLURM_ARRAY_TASK_ID
echo "  SLURM_JOB_ID = " $SLURM_JOB_ID
echo "  SLURM_NODELIST = " $SLURM_NODELIST
#
#  Terminate.
#
echo ""
echo "job_array.sbatch:"
echo "  Normal end of execution."
date
#
exit
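To confirm the array tasks actually spread out, you can inspect node placement while the array is running, or afterwards through accounting. These commands are a suggestion, not part of the original answer; `<jobid>` is a placeholder, and sacct assumes the cluster has accounting enabled.

```shell
# While the array is pending/running: %R shows the node assigned to each task
squeue -u "$USER" -o "%.18i %.9P %.8j %.8T %R"

# After completion, query the accounting database (replace <jobid>):
sacct -j <jobid> --format=JobID,NodeList,State
```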