diff --git a/bash_array_jobs/README.md b/bash_array_jobs/README.md
index 850a35f631d6c99f0b3a986b923075da6f6cebb3..c65b7fb93bcdddba68b57c76fe908ac00f0e1d26 100644
--- a/bash_array_jobs/README.md
+++ b/bash_array_jobs/README.md
@@ -1,3 +1,16 @@
+# Scenario #
+One often needs to submit a huge number of very similar jobs on the cluster, for example to run the same software with the same parameters over many subjects, or to run quality checks on all the raw data you have to process.
+
+# Methods #
+
+## Manual process ##
+The most basic method is to create a [SLURM](https://docs.uabgrid.uab.edu/wiki/Slurm#Batch_Job) job script and copy it over and over again, until you have an individual job script for each subject; you can then make a minor tweak to each copy so that each job processes the correct subject.
+
+*Problem*: Really time-consuming.
+
+## Bash Scripting ##
+
+
 # Content #
 * **prep_env**:
@@ -24,3 +37,7 @@ This is a [SLURM](https://docs.uabgrid.uab.edu/wiki/Slurm) Array batch script, w
 * **array_rand.job**:
 This is a [SLURM](https://docs.uabgrid.uab.edu/wiki/Slurm) Array batch script, which works when the names of input files are in random form.
+
+
+# Bash Scripting #
+
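
The "Bash Scripting" section this diff introduces is still empty; the core idea of a SLURM array job is that one script runs once per array index, with `SLURM_ARRAY_TASK_ID` selecting the input. A minimal sketch of that mapping follows; the subject names, the `--array` range, and the `echo` placeholder for real processing are all assumptions for illustration, not taken from this repository:

```shell
#!/bin/bash
# Hypothetical sketch of a SLURM array job body.
# Under SLURM you would submit this with, e.g.:
#   #SBATCH --array=0-2
# and SLURM would set SLURM_ARRAY_TASK_ID to 0, 1, or 2 for each task.

# Subject list; on a real run this might come from `ls` or a manifest file.
subjects=(sub01 sub02 sub03)

# Default to 0 so the script can also be exercised outside the cluster.
task_id=${SLURM_ARRAY_TASK_ID:-0}

# Each array task picks exactly one subject by its index.
subject=${subjects[$task_id]}
echo "Processing ${subject}"   # placeholder for the real per-subject command
```

The same pattern covers both job scripts mentioned in the README's Content section: sequentially named inputs can be indexed directly by `SLURM_ARRAY_TASK_ID`, while randomly named inputs are first collected into an array (or a file list) and then indexed the same way.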