Getting Julia SharedArrays to play nicely with Sun Grid Engine

Date: 2015-10-20 04:31:46

Tags: multithreading parallel-processing julia sungridengine

I have been trying to get a Julia program to run correctly in an SGE environment with bind_pe_procs(). I have read several threads on Julia and SGE, but most of them seem to deal with MPI. The function bind_pe_procs() from this Gist seems to bind the processes to the local environment correctly. A script like

### define bind_pe_procs() as in the Gist ###
...
println("Started julia")
bind_pe_procs()
println("do SharedArrays initialize correctly?")
x = SharedArray(Float64, 3, pids = procs(), init = S -> S[localindexes(S)] = 1.0)
pids = procs(x)
println("number of workers: ", length(procs()))
println("SharedArrays map to ", length(pids), " workers")

produces the following output:

starting qsub script file
Mon Oct 12 15:13:38 PDT 2015
calling mpirun now
exception on 2: exception on exception on 4: exception on exception on 53: : exception on exception on exception on Started julia
parsing PE_HOSTFILE
[{"name"=>"compute-0-8.local","n"=>"5"}]compute-0-8.local
ASCIIString["compute-0-8.local","compute-0-8.local","compute-0-8.local","compute-0-8.local"]adding machines to current system
done
do SharedArrays initialize correctly?
number of workers: 5
SharedArrays map to 5 workers

Oddly, this does not seem to work if I need to load an array from a file and convert it to SharedArray format with the command convert(SharedArray, vec(readdlm(FILEPATH))). If the script is

println("Started julia")
bind_pe_procs()
### script reads arrays from file and converts to SharedArrays
println("running script...")
my_script()

then the result is garbage:

starting qsub script file
Mon Oct 19 09:18:29 PDT 2015
calling mpirun now Started julia
parsing PE_HOSTFILE
[{"name"=>"compute-0-5.local","n"=>"11"}]compute-0-5.local
ASCIIString["compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local"]adding machines to current system
done
running script...
Current number of processes: [1,2,3,4,5,6,7,8,9,10,11]
SharedArray y is seen by [1] processes
### tons of errors here
### important one is "SharedArray cannot be used on a non-participating process"

So somehow the SharedArrays do not get mapped onto all of the cores correctly. Does anybody have any suggestions or insight into this problem?
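For reference, the "non-participating process" error means the SharedArray was created with a pid set that does not include the workers later touching it. A minimal sketch, in the question's Julia 0.4-era syntax, of making the mapping explicit instead of relying on convert; FILEPATH and the copy step are my assumptions, not the original script:

```julia
# Hypothetical sketch (Julia 0.4-era API): rather than
#   y = convert(SharedArray, vec(readdlm(FILEPATH)))
# allocate the SharedArray on every worker explicitly, then fill it once.
data = vec(readdlm(FILEPATH))                           # ordinary Array on the master
y = SharedArray(Float64, length(data), pids = procs())  # shared memory on all pids
copy!(sdata(y), data)                                   # one copy; every worker sees it
println("SharedArray y is seen by ", length(procs(y)), " processes")
```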

1 answer:

Answer 0 (score: 0)

A workaround I use in my own work is simply to force SGE to submit the job to a particular node, and then restrict the parallel environment to the number of cores I want to use.

Below is an SGE qsub script for a 24-core node of which I only want to use 6 cores.

#!/bin/bash
# lots of available SGE script options, only relevant ones included below

# request processes in parallel environment 
#$ -pe orte 6 

# use this command to dump job on a particular queue/node
#$ -q all.q@compute-0-13

# -p 5 adds 5 worker processes; together with the master that fills the 6 requested slots
/share/apps/julia-0.4.0/bin/julia -p 5 MY_SCRIPT.jl

Pro: plays nicely with SharedArray. Con: the job will wait in the queue until the node has enough free cores.
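To avoid hard-coding the worker count, the qsub script could instead derive it from SGE's NSLOTS variable, which SGE sets inside a job to the number of slots granted by "-pe". A sketch (the fallback value and the echo are only for testing outside SGE; in a real job you would invoke the julia binary directly):

```shell
#!/bin/bash
# Sketch: derive Julia's -p from SGE's NSLOTS so the worker count always
# matches whatever "-pe orte N" granted.
NSLOTS=${NSLOTS:-6}        # SGE sets NSLOTS inside the job; fallback for local testing
WORKERS=$((NSLOTS - 1))    # one slot is taken by the Julia master process
# Echo the command this sketch would run; replace echo with the real julia path in a job:
echo julia -p "${WORKERS}" MY_SCRIPT.jl
```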