Emulating SLURM on Ubuntu 16.04

Date: 2017-10-29 16:48:55

Tags: linux docker vagrant ubuntu-16.04 slurm

I want to emulate SLURM on Ubuntu 16.04. I do not need serious resource management; I just want to test some simple examples. I cannot install SLURM in the usual way, so I am wondering whether there are other options. Other things I have tried:

  • A Docker image. Unfortunately, docker pull agaveapi/slurm; docker run agaveapi/slurm gave me errors (see the first sketch after this list):

    /usr/lib/python2.6/site-packages/supervisor/options.py:295: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
      'Supervisord is running as root and it is searching '
    2017-10-29 15:27:45,436 CRIT Supervisor running as root (no user in config file)
    2017-10-29 15:27:45,437 INFO supervisord started with pid 1
    2017-10-29 15:27:46,439 INFO spawned: 'slurmd' with pid 9
    2017-10-29 15:27:46,441 INFO spawned: 'sshd' with pid 10
    2017-10-29 15:27:46,443 INFO spawned: 'munge' with pid 11
    2017-10-29 15:27:46,443 INFO spawned: 'slurmctld' with pid 12
    2017-10-29 15:27:46,452 INFO exited: munge (exit status 0; not expected)
    2017-10-29 15:27:46,452 CRIT reaped unknown pid 13)
    2017-10-29 15:27:46,530 INFO gave up: munge entered FATAL state, too many start retries too quickly
    2017-10-29 15:27:46,531 INFO exited: slurmd (exit status 1; not expected)
    2017-10-29 15:27:46,535 INFO gave up: slurmd entered FATAL state, too many start retries too quickly
    2017-10-29 15:27:46,536 INFO exited: slurmctld (exit status 0; not expected)
    2017-10-29 15:27:47,537 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2017-10-29 15:27:47,537 INFO gave up: slurmctld entered FATAL state, too many start retries too quickly

  • This guide to start a SLURM VM via Vagrant. I tried it, but copying the munge key timed out (see the second sketch after this list):

    sudo scp /etc/munge/munge.key vagrant@server:/home/vagrant/
    ssh: connect to host server port 22: Connection timed out
    lost connection
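
For the Docker attempt, one way to see why munge dies immediately is to bypass supervisord, open a shell in the container, and start the daemons by hand. This is only a debugging sketch; it assumes the image ships bash and keeps its key at the usual /etc/munge/munge.key path:

# Open an interactive shell in the container instead of running supervisord.
docker run -it --rm --entrypoint /bin/bash agaveapi/slurm

# Inside the container: check ownership and permissions of the munge key,
# then run munged in the foreground so its error message is printed directly.
ls -l /etc/munge/munge.key
munged --foreground

For the Vagrant attempt, "connect to host server port 22: Connection timed out" usually means the name server does not reach the VM from the host; Vagrant normally exposes guests through a forwarded SSH port. A sketch of copying the key with the values Vagrant itself reports (the VM name server comes from the guide; the port and key path below are placeholders for whatever ssh-config prints):

# Ask Vagrant for the real SSH host, port, and identity file of the VM.
vagrant ssh-config server

# Reuse those values for scp, e.g. if ssh-config reports 127.0.0.1 and port 2222:
sudo scp -P 2222 -i .vagrant/machines/server/virtualbox/private_key \
    /etc/munge/munge.key vagrant@127.0.0.1:/home/vagrant/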

2 answers:

Answer 0 (score: 1)

I would still prefer to run SLURM natively, but I gave in and spun up a Debian 9.2 VM. See here for my efforts at troubleshooting a native installation. The directions here worked smoothly, but I needed to make the following changes to slurm.conf. Below, Debian64 is my hostname and wlandau is my user name.

  • ControlMachine=Debian64
  • SlurmUser=wlandau
  • NodeName=Debian64

Here is the full slurm.conf. An analogous slurm.conf did not work on my native Ubuntu 16.04.

# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=Debian64
#ControlAddr=
#BackupController=
#BackupAddr=
# 
AuthType=auth/munge
#CheckpointType=checkpoint/none 
CryptoType=crypto/munge
#DisableRootJobs=NO 
#EnforcePartLimits=NO 
#Epilog=
#EpilogSlurmctld= 
#FirstJobId=1 
#MaxJobId=999999 
#GresTypes= 
#GroupUpdateForce=0 
#GroupUpdateTime=600 
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint 
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0 
#JobRequeue=1 
#JobSubmitPlugins=1 
#KillOnBadExit=0 
#LaunchType=launch/slurm 
#Licenses=foo*4,bar 
#MailProg=/usr/bin/mail 
#MaxJobCount=5000 
#MaxStepCount=40000 
#MaxTasksPerNode=128 
MpiDefault=none
#MpiParams=ports=#-# 
#PluginDir= 
#PlugStackConfig= 
#PrivateData=jobs 
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags= 
#PrologSlurmctld= 
#PropagatePrioProcess=0 
#PropagateResourceLimits= 
#PropagateResourceLimitsExcept= 
#RebootProgram= 
ReturnToService=1
#SallocDefaultCommand= 
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root 
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree 
#TmpFS=/tmp 
#TrackWCKey=no 
#TreeWidth= 
#UnkillableStepProgram= 
#UsePAM=0 
# 
# 
# TIMERS 
#BatchStartTimeout=10 
#CompleteWait=0 
#EpilogMsgTime=2000 
#GetEnvTimeout=2 
#HealthCheckInterval=0 
#HealthCheckProgram= 
InactiveLimit=0
KillWait=30
#MessageTimeout=10 
#ResvOverRun=0 
MinJobAge=300
#OverTimeLimit=0 
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60 
#VSizeFactor=0 
Waittime=0
# 
# 
# SCHEDULING 
#DefMemPerCPU=0 
FastSchedule=1
#MaxMemPerCPU=0 
#SchedulerRootFilter=1 
#SchedulerTimeSlice=30 
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
# 
# 
# JOB PRIORITY 
#PriorityFlags= 
#PriorityType=priority/basic 
#PriorityDecayHalfLife= 
#PriorityCalcPeriod= 
#PriorityFavorSmall= 
#PriorityMaxAge= 
#PriorityUsageResetPeriod= 
#PriorityWeightAge= 
#PriorityWeightFairshare= 
#PriorityWeightJobSize= 
#PriorityWeightPartition= 
#PriorityWeightQOS= 
# 
# 
# LOGGING AND ACCOUNTING 
#AccountingStorageEnforce=0 
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags= 
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerType=job_container/none 
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile= 
#SlurmSchedLogLevel= 
# 
# 
# POWER SAVE SUPPORT FOR IDLE NODES (optional) 
#SuspendProgram= 
#ResumeProgram= 
#SuspendTimeout= 
#ResumeTimeout= 
#ResumeRate= 
#SuspendExcNodes= 
#SuspendExcParts= 
#SuspendRate= 
#SuspendTime= 
# 
# 
# COMPUTE NODES 
NodeName=Debian64 CPUs=1 RealMemory=744 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN 
PartitionName=debug Nodes=Debian64 Default=YES MaxTime=INFINITE State=UP
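
With this slurm.conf in place, a quick sanity check is to restart both daemons and run a trivial job. The following is only a sketch of the usual verification steps, assuming the Debian/Ubuntu packages' systemd unit names slurmctld and slurmd:

# Restart the controller and the compute-node daemon so they pick up the new slurm.conf.
sudo systemctl restart slurmctld
sudo systemctl restart slurmd

# The node should now show up as idle in the debug partition.
sinfo

# A trivial job run on the local node should print the hostname (Debian64).
srun hostname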

Answer 1 (score: 1)

So ... we have an existing cluster here, but it runs an older Ubuntu release, which does not mesh well with my workstation running 17.04.

Hence, on my workstation, I made sure slurmctld (the backend) and slurmd were installed, and then set up a simple slurm.conf with:

ControlMachine=mybox
# ...
NodeName=DEFAULT CPUs=4 RealMemory=4000 TmpDisk=50000 State=UNKNOWN
NodeName=mybox CPUs=4 RealMemory=16000

After that I restarted slurmctld and then slurmd, and everything worked:

root@mybox:/etc/slurm-llnl$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
demo         up   infinite      1   idle mybox
root@mybox:/etc/slurm-llnl$ 

This is a degenerate setup; our real one mixes dev and prod machines with appropriate partitions. But it should answer your question of whether the backend can really also be the client. (Also, my machine is not actually called mybox, but that is not germane to the question either way.)
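
To confirm that the same machine really does accept jobs as both controller and compute node, a minimal batch submission works as a test. A sketch, where the partition name demo is taken from the sinfo output above and hello.sh is just a made-up script name:

# hello.sh - a trivial batch script; "demo" is the partition shown by sinfo above.
cat > hello.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=demo
#SBATCH --ntasks=1
hostname
EOF

sbatch hello.sh    # prints "Submitted batch job <id>"
squeue             # the job shows up briefly, then completes
cat slurm-*.out    # the output file contains the hostname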

This is using Ubuntu 17.04, all stock, with munge for communication (which is the default anyway).

Edit: That is:

me@mybox:~$ COLUMNS=90 dpkg -l '*slurm*' | grep ^ii
ii  slurm-client     16.05.9-1ubun amd64         SLURM client side commands
ii  slurm-wlm-basic- 16.05.9-1ubun amd64         SLURM basic plugins
ii  slurmctld        16.05.9-1ubun amd64         SLURM central management daemon
ii  slurmd           16.05.9-1ubun amd64         SLURM compute node daemon
me@mybox:~$
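
For completeness, those packages (plus munge) come straight from the stock Ubuntu repositories. A sketch of the install step, assuming the package names shown in the dpkg listing above are also available on your release:

# Install the SLURM daemons, the client tools, and munge from the Ubuntu archives.
sudo apt-get update
sudo apt-get install slurmctld slurmd slurm-client munge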