I have a fairly complex pipeline that I need to run on a Slurm cluster, but I cannot get it to work properly. For some reason the pipeline works for smaller jobs, but as soon as I add more input files for a more rigorous test, it no longer completes correctly.
I don't have a minimal example yet (it's the end of the workday); if the problem turns out not to be easy to solve, I'll add one.
So, here is what happens:
sbatch ../slurm_eating_snakemake.sh
submits my main Snakemake "daemon" job, i.e. the following script:
#!/usr/bin/env bash
# Jobname
#SBATCH --job-name=SNEKHEAD
#
# Project
#SBATCH --account=nn3556k
#
# Wall clock limit
#SBATCH --time=24:00:00
#
# Max memory usage:
#SBATCH --mem-per-cpu=16G
## set up job environment
source /usit/abel/u1/caspercp/Software/snek/bin/activate
module purge # clear any inherited modules
#set -o errexit # exit on errors (turned off, so all jobs are cancelled in event of crash)
## copy input files
cp -R /usit/abel/u1/caspercp/nobackup/DATA/ $SCRATCH
cp -R /usit/abel/u1/caspercp/lncrna_thesis_prj/src/snakemake_pipeline/ $SCRATCH
#cp -R $SUBMITDIR/OUTPUTS/ $SCRATCH
## Do some work:
cd $SCRATCH/snakemake_pipeline
echo $(date) >> ../bash_tims.txt
# run pipeline
snakemake --snakefile start.snakefile -pr --runtime-profile ../timings.txt --cluster "sbatch -A nn3556k --time=24:00:00 --mem-per-cpu=4G -d after:"$SLURM_JOB_ID -j 349 --restart-times 1
echo $(date) >> ../bash_tims.txt
## Make sure the results are copied back to the submit directory:
cp -R $SCRATCH/OUTPUTS/ $SUBMITDIR
cp -R $SCRATCH/snakemake_pipeline/.snakemake/ $SUBMITDIR
mkdir $SUBMITDIR/child_logs/
cp $SCRATCH/snakemake_pipeline/slurm-*.out $SUBMITDIR/child_logs/
cp $SCRATCH/OUTPUTS/output.zip $SUBMITDIR
cp $SCRATCH/timings.txt $SUBMITDIR
cp $SCRATCH/bash_tims.txt $SUBMITDIR
# CANCEL ALL JOBS IN EVENT OF CRASH (or on exit, but it should not matter at that point.)
scancel -u caspercp
For context, I am working on the Abel cluster: https://www.uio.no/english/services/it/research/hpc/abel/
I am (ab)using Snakemake's checkpoints to call certain rules a data-dependent number of times (see: https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#data-dependent-conditional-execution ). This is where I think the whole thing falls apart: for some reason, after a checkpoint completes, Snakemake fails to submit new jobs. I get the following error (a subset of the Snakemake output):
[Thu Mar 14 17:46:27 2019]
checkpoint split_up_genes_each_sample_lnc:
input: ../OUTPUTS/prepped_datasets/expression_table_GSEA_Stopsack-HALLMARK_IL6_JAK_STAT3_SIGNALING.txt
output: ../OUTPUTS/control_txts/custom_anno/expression_table_GSEA_Stopsack-HALLMARK_IL6_JAK_STAT3_SIGNALING-human-BP/
jobid: 835
reason: Missing output files: ../OUTPUTS/control_txts/custom_anno/expression_table_GSEA_Stopsack-HALLMARK_IL6_JAK_STAT3_SIGNALING-human-BP/; Input files updated by another job: ../OUTPUTS/prepped_datasets/expression_table_GSEA_Stopsack-HALLMARK_IL6_JAK_STAT3_SIGNALING.txt
wildcards: expset=expression_table_GSEA_Stopsack, geneset=HALLMARK_IL6_JAK_STAT3_SIGNALING, organism=human, ontology=BP
Downstream jobs will be updated after completion.
Error submitting jobscript (exit code 1):
Updating job 655.
[Thu Mar 14 17:46:43 2019]
Finished job 896.
95 of 1018 steps (9%) done
Updating job 539.
[Thu Mar 14 17:47:24 2019]
Finished job 780.
96 of 1022 steps (9%) done
Updating job 643.
.......
[Thu Mar 14 17:51:35 2019]
Finished job 964.
203 of 1451 steps (14%) done
Updating job 677.
[Thu Mar 14 17:51:46 2019]
Finished job 918.
204 of 1455 steps (14%) done
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /work/jobs/26276509.d/snakemake_pipeline/.snakemake/log/2019-03-14T172923.764021.snakemake.log
Roughly speaking, what the checkpoint does is take an output txt containing genes (i.e. a gene-set file) and split it into separate per-gene files, named {gene}.txt, so that I can feed each one into my analysis algorithm.
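To make the checkpoint's job concrete, here is a minimal shell sketch of the equivalent splitting step, using a made-up toy gene list (the real gene-set files and names differ; this only illustrates the one-file-per-gene output shape):

```shell
# Toy gene-set file: one gene name per line (hypothetical names).
printf 'TP53\nMYC\nIL6\n' > geneset.txt

# Split it into separate per-gene files, {gene}.txt, like the checkpoint does.
mkdir -p split_out
while read -r gene; do
    echo "$gene" > "split_out/${gene}.txt"
done < geneset.txt

ls split_out   # -> IL6.txt  MYC.txt  TP53.txt
```

Downstream rules then glob the resulting directory to determine how many analysis jobs to spawn, which is why the number of steps in the log above keeps growing (1018 → 1455) as checkpoints finish.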
But I am thoroughly confused by the error "Error submitting jobscript (exit code 1):" — it gives no clear pointer for troubleshooting.
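One way to get past the unhelpful message would be to put a small wrapper script between Snakemake and sbatch so that sbatch's stderr is preserved per submission attempt. This is a hypothetical debugging aid I have not tried yet (the wrapper name and log file are made up):

```shell
# Create a wrapper that logs sbatch's stderr before Snakemake swallows it.
cat > sbatch_wrapper.sh <<'EOF'
#!/usr/bin/env bash
# Forward all arguments (including the generated jobscript) to sbatch,
# appending any error output to a log file for later inspection.
sbatch "$@" 2>> sbatch_errors.log
EOF
chmod +x sbatch_wrapper.sh
```

It would then be used by replacing `--cluster "sbatch ..."` with `--cluster "./sbatch_wrapper.sh ..."` (same flags), so that whatever makes sbatch exit with code 1 ends up in sbatch_errors.log.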
Thanks for any input!
Edit 1 (more info):
(snek) -bash-4.1$ pip freeze --local
appdirs==1.4.3
attrs==19.1.0
certifi==2019.3.9
chardet==3.0.4
ConfigArgParse==0.14.0
Cython==0.29.6
datrie==0.7.1
docutils==0.14
gitdb2==2.0.5
GitPython==2.1.11
idna==2.8
jsonschema==3.0.1
numpy==1.16.2
pandas==0.24.1
pyrsistent==0.14.11
python-dateutil==2.8.0
pytz==2018.9
PyYAML==3.13
ratelimiter==1.2.0.post0
requests==2.21.0
six==1.12.0
smmap2==2.0.5
snakemake==5.4.3
urllib3==1.24.1
wrapt==1.11.1
yappi==1.0