Scenario
Oftentimes, one needs to submit a large number of very similar jobs on the cluster, for example, to run a piece of software with the same parameters over many subjects, or to run quality checks on all the raw data that needs to be processed.
Methods
Manual Process
The most basic method is to create a SLURM job script and copy it over and over until you have an individual job script for each subject, then make a minor tweak in each copy so that every job processes the correct subject.
Problem:
- Lots of manual work, and very time consuming.
Bash Scripting
The second method is to create a bash script that loops through the files in your data directory, creates a job script for each one, and submits it.
for file_name in `ls $data_dir`
do
    # Do something here.
    # Each iteration gets a unique value (a file name in $data_dir)
    # in the $file_name variable.
done
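A fuller sketch of such a driver script is shown below. This is only an illustration, not the scripts shipped with this tutorial: the $data_dir value, the generated job script contents, and the sleep interval are placeholder assumptions.
#!/bin/bash
# Hypothetical driver script: generate and submit one SLURM job per input file.
data_dir=/path/to/your/data    # placeholder path to the directory holding your files

for file_name in "$data_dir"/*
do
    subject=$(basename "$file_name")
    job_script="job_${subject}.sh"

    # Write a minimal job script for this file (variables expand now,
    # so the generated script contains the actual file name)
    cat > "$job_script" <<EOF
#!/bin/bash
#SBATCH --job-name=${subject}
#SBATCH --output=${subject}.out
echo "Processing $file_name"
EOF

    sbatch "$job_script"
    sleep 1    # brief pause so the scheduler is not flooded with submissions
done
Running it once leaves one job_<file>.sh script per input file and submits each of them to the queue.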
Usage:
./prep_env seq
./bash_parallel_script
Problem:
- This script works well for a small number of files (around 20), but not for large numbers.
- SLURM has to keep track of and schedule each job individually, which is not very efficient, so you may have to increase the time between job submissions.
Array Jobs
A better method to achieve this is to use SLURM job arrays.
Sequential File Names
If your input files are named sequentially, you can use the environment variable ${SLURM_ARRAY_TASK_ID} so that each array task processes a different file.
#!/bin/bash
#SBATCH --array=1-5
.
.
#SBATCH --job-name=test_job_%A_%a
#SBATCH --error=job_err/test_job_%A_%a.err
#SBATCH --output=job_out/test_job_%A_%a.out
.
.
srun echo "Processing file test$SLURM_ARRAY_TASK_ID" >> test_dir/test$SLURM_ARRAY_TASK_ID
srun sleep 30
- %A in the #SBATCH line becomes the job ID
- %a in the #SBATCH line becomes the array index
- ${SLURM_ARRAY_TASK_ID} is an environment variable set by SLURM when the job runs; each array task gets a unique value: 1, 2, ..., 5
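For example, if SLURM assigns job ID 12345, array task 3 writes its stderr to job_err/test_job_12345_3.err and its stdout to job_out/test_job_12345_3.out. Note that the job_err, job_out, and test_dir directories used above need to exist before the job runs; SLURM will not create them for you.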
Usage: