Background
I wrote a Python script to convert files from one format to another. It takes a text file (subject_list.txt) as input, iterates over the source directory names listed in that file (hundreds of directories, each containing thousands of files), converts their contents, and stores the results in a specified output directory.
Problem:
To save time, I want to run this script on a high-performance cluster (HPC) and create jobs that convert the files in parallel, rather than iterating over each directory in the list sequentially.
I am new to both Python and HPC. Our lab previously worked mostly in Bash and had no access to an HPC environment, but we recently gained access to one and decided to switch to Python, so everything here is new to me.
Question:
Is there a Python module that would let me create jobs from within a Python script? I have found documentation for the multiprocessing and subprocess modules, but it isn't clear to me how I would use them here. Or should I take a different approach entirely? I have also read some Stack Overflow posts about using Slurm with Python, but with so much information and too little background to judge which thread to follow, I haven't been able to make sense of it. Any help is greatly appreciated.
Environment:
HPC: Red Hat Enterprise Linux Server release 7.4 (Maipo)
python3 / 3.6.1
slurm 17.11.2
Setup portion of the code:
import os
import subprocess

# Change this for your study
group="labname"
study="studyname"
# Set paths
archivedir="/projects/" + group + "/archive"
sourcedir="/projects/" + group + "/shared/DICOMS/" + study
niidir="/projects/" + group + "/shared/" + study + archivedir + "/clean_niftis"
outputlog=niidir + "/outputlog_convert.txt"
errorlog=niidir + "/errorlog_convert.txt"
dcm2niix="/projects/" + group + "/shared/dcm2niix/build/bin/dcm2niix"
# Source the subject list (needs to be in your current working directory)
subjectlist="subject_list.txt"
# Check/create the log files
def touch(path):              # make a function:
    with open(path, 'a'):     # open it in append mode, which creates the file if needed
        os.utime(path, None)  # update the file's timestamps

if not os.path.isfile(outputlog):  # if the file does not exist...
    touch(outputlog)
if not os.path.isfile(errorlog):
    touch(errorlog)
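A side note on the paths: building them by plain string concatenation makes it easy to drop a separator, and the niidir line above splices the absolute archivedir into the middle of another path, which may not be intended. Below is a sketch of the same setup using os.path.join, which supplies the separators; it assumes the intent was a clean_niftis folder under the study directory.

# Sketch: the same paths built with os.path.join, which inserts
# the "/" separators so none can be accidentally omitted.
projectdir = os.path.join("/projects", group)
archivedir = os.path.join(projectdir, "archive")
sourcedir = os.path.join(projectdir, "shared", "DICOMS", study)
niidir = os.path.join(projectdir, "shared", study, "clean_niftis")  # assumed intent
outputlog = os.path.join(niidir, "outputlog_convert.txt")
errorlog = os.path.join(niidir, "errorlog_convert.txt")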
The part I'm stuck on:
with open(subjectlist) as file:
    lines = file.readlines()
    for line in lines:
        subject=line.strip()
        subjectpath=sourcedir+"/"+subject
        if os.path.isdir(subjectpath):
            with open(outputlog, 'a') as logfile:
                logfile.write(subject+os.linesep)
            # Submit a job to the HPC with sbatch. This next line was not in the
            # original script that works, and it isn't correct, but it captures
            # the gist of what I am trying to do (written in bash).
            sbatch --job-name dcm2nii_"${subject}" --partition=short --time 00:60:00 --mem-per-cpu=2G --cpus-per-task=1 -o "${niidir}"/"${subject}"_dcm2nii_output.txt -e "${niidir}"/"${subject}"_dcm2nii_error.txt
            # This is what I want the job to do for the files in each directory:
            subprocess.call([dcm2niix, "-o", "-b y", niidir, subjectpath])
        else:
            with open(errorlog, 'a') as logfile:
                logfile.write(subject+os.linesep)
Edit 1:
dcm2niix is the software used for the conversion and is available on the HPC. It requires the following flags and paths: -o -b y outputDirectory sourceDirectory.
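For reference, a direct (sequential) call from Python could pass each flag and value as its own list element. This is only a sketch, assuming dcm2niix's documented interface where -b y requests BIDS sidecars and -o takes the output folder as its argument; it reuses the dcm2niix, niidir, and subjectpath variables from the setup above.

# Sketch: run dcm2niix on one subject directory and check the result.
# Each flag and its value are separate list elements ("-b", "y"),
# and -o is immediately followed by the output directory.
result = subprocess.run([dcm2niix, "-b", "y", "-o", niidir, subjectpath],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)  # Python 3.6 has no text= keyword yet
if result.returncode != 0:
    print("dcm2niix failed for", subjectpath)
    print(result.stderr)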
Edit 2 (solution):
with open(subjectlist) as file:
    lines = file.readlines()  # set variable name to file and read the lines from the file
    for line in lines:
        subject=line.strip()
        subjectpath=dicomdir+"/"+subject
        if os.path.isdir(subjectpath):
            with open(outputlog, 'a') as logfile:
                logfile.write(subject+os.linesep)
            # Create a job to submit to the HPC with sbatch
            batch_cmd = 'sbatch --job-name dcm2nii_{subject} --partition=short --time 00:60:00 --mem-per-cpu=2G --cpus-per-task=1 -o {niidir}/{subject}_dcm2nii_output.txt -e {niidir}/{subject}_dcm2nii_error.txt --wrap="/projects/{group}/shared/dcm2niix/build/bin/dcm2niix -o {niidir} {subjectpath}"'.format(subject=subject,niidir=niidir,subjectpath=subjectpath,group=group)
            # Submit the job
            subprocess.call([batch_cmd], shell=True)
        else:
            with open(errorlog, 'a') as logfile:
                logfile.write(subject+os.linesep)
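One optional hardening of the submission step, not part of the original solution: subprocess.run can capture sbatch's stdout, which contains the job ID ("Submitted batch job <id>"), so it can be logged, and a nonzero return code can be recorded instead of silently ignored. A sketch using the same variables:

# Sketch: submit and log the Slurm job ID reported by sbatch.
result = subprocess.run(batch_cmd, shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)
if result.returncode == 0:
    with open(outputlog, 'a') as logfile:
        logfile.write(result.stdout.strip() + os.linesep)
else:
    with open(errorlog, 'a') as logfile:
        logfile.write(subject + ": " + result.stderr.strip() + os.linesep)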
Answer 0 (score: 0):
Here is a possible solution for your code. It has not been tested.
with open(subjectlist) as file:
    lines = file.readlines()
    for line in lines:
        subject=line.strip()
        subjectpath=sourcedir+"/"+subject
        if os.path.isdir(subjectpath):
            with open(outputlog, 'a') as logfile:
                logfile.write(subject+os.linesep)
            # Build the sbatch command; --wrap runs the given command
            # string as the body of the batch job.
            cmd = 'sbatch --job-name dcm2nii_{subject} --partition=short --time 00:60:00\
 --mem-per-cpu=2G --cpus-per-task=1 -o {niidir}/{subject}_dcm2nii_output.txt\
 -e {niidir}/{subject}_dcm2nii_error.txt\
 --wrap="dcm2niix -o -b y {niidir} {subjectpath}"'.format(subject=subject,niidir=niidir,subjectpath=subjectpath)
            # Submit the job for this subject's directory:
            subprocess.call([cmd], shell=True)
        else:
            with open(errorlog, 'a') as logfile:
                logfile.write(subject+os.linesep)
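A final note on the design: a single command string with shell=True works, but the quoting becomes fragile if any path ever contains spaces or shell metacharacters. An alternative sketch (my suggestion, not from the original answer) builds the argument list directly, so no shell is involved; sbatch's --wrap= payload is passed as one element:

# Sketch: the same submission without shell=True.
wrap_cmd = 'dcm2niix -o -b y {0} {1}'.format(niidir, subjectpath)
args = ['sbatch',
        '--job-name=dcm2nii_{0}'.format(subject),
        '--partition=short', '--time=00:60:00',
        '--mem-per-cpu=2G', '--cpus-per-task=1',
        '-o', '{0}/{1}_dcm2nii_output.txt'.format(niidir, subject),
        '-e', '{0}/{1}_dcm2nii_error.txt'.format(niidir, subject),
        '--wrap=' + wrap_cmd]
subprocess.call(args)  # list form: no shell quoting needed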