From 6a42d3374d54549c78ab1e6eb2bbc15126ad3ceb Mon Sep 17 00:00:00 2001
From: Ravi Tripathi <ravi89@uab.edu>
Date: Tue, 14 Aug 2018 18:31:44 -0500
Subject: [PATCH] Adding method 3 of creating a SLURM array job script in README

---
 bash_array_jobs/README.md | 37 ++++++++++++++++++++++++++++++++++---
 1 file changed, 34 insertions(+), 3 deletions(-)

diff --git a/bash_array_jobs/README.md b/bash_array_jobs/README.md
index 08df903..ab366f7 100644
--- a/bash_array_jobs/README.md
+++ b/bash_array_jobs/README.md
@@ -7,20 +7,51 @@ Often times, one needs to submit a huge number of very similar jobs on the clust
 The most basic method would be to create a [SLURM](https://docs.uabgrid.uab.edu/wiki/Slurm#Batch_Job) job script and copy it over and over again, until you have an individual job script for each subject, then tweak each copy so that it processes the correct subject.
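+
+For reference, one of these per-subject scripts might look something like the minimal sketch below (the partition, resource values, $DATA path, and process_subject command are hypothetical placeholders, not from this repo):
+```
+#!/bin/bash
+#SBATCH --job-name=subject01
+#SBATCH --partition=express
+#SBATCH --time=01:00:00
+#SBATCH --mem-per-cpu=2048
+
+# process_subject is a stand-in for your per-subject command;
+# every copied script changes only the subject name below.
+srun process_subject "$DATA/subject01"
+```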
 
 *Problem:*   
-* Really time consuming.
+* Lots of manual work, and really time consuming.
 
 ## Bash Scripting ##
 The second method would be to create a bash script that loops through the files in your DATA directory and, for each file, creates a job script and submits it.
 ```
 for file_name in "$data_dir"/*
 do
-#Do something here. Each loop would have a unique value (file name in $data_dir) that would be in $file_name variable
+#Do something here.
+#Each iteration gets a unique value (the path of one file
+#in $data_dir) in the $file_name variable.
 done
 ```
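+
+Fleshed out, such a script could look like the following sketch (here $data_dir, the time limit, and my_tool are hypothetical placeholders):
+```
+#!/bin/bash
+data_dir=$1
+
+for file_name in "$data_dir"/*
+do
+    job_script="job_$(basename "$file_name").sh"
+    # Write a job script for this file; the heredoc expands the
+    # variables now, so the generated script contains the literal
+    # file name. my_tool is a stand-in for the real per-file command.
+    cat > "$job_script" <<EOF
+#!/bin/bash
+#SBATCH --job-name=$(basename "$file_name")
+#SBATCH --time=01:00:00
+srun my_tool "$file_name"
+EOF
+    sbatch "$job_script"
+    sleep 1  # brief pause between submissions to go easy on the scheduler
+done
+```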
 
+Usage:  
+```
+./prep_env seq
+./bash_parallel_script
+```
+
 *Problem:*  
 * This script works well for a small number of files (around 20), but not for large numbers.  
-* SLURM will have to track each job/file, which is not very efficient for SLURM, so you might have to increase time between job submissions.
+* SLURM has to keep track of and schedule each job/file individually, which is not very efficient, so you might have to increase the time between job submissions.
+
+
+## Array Jobs ##
+A better method to achieve this is to use [SLURM job arrays](https://slurm.schedmd.com/job_array.html).  
+### Sequential File Names ###
+If your input files are named sequentially, you can use the environment variable ${SLURM_ARRAY_TASK_ID} so that each array task processes a different file.
+```
+#!/bin/bash
+#SBATCH --array=1-5
+.
+.
+#SBATCH --job-name=test_job
+#SBATCH --error=job_err/test_job_%A_%a.err
+#SBATCH --output=job_out/test_job_%A_%a.out
+.
+.
+srun echo "Processing file test$SLURM_ARRAY_TASK_ID" >> test_dir/test$SLURM_ARRAY_TASK_ID
+srun sleep 30
+```
+* %A in the #SBATCH --error/--output lines is replaced by the job ID
+* %a in the #SBATCH --error/--output lines is replaced by the array task index
+* ${SLURM_ARRAY_TASK_ID} is an environment variable that is set when the job runs; each array task gets a unique value: 1, 2, ..., 5
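+
+Submitting the script once starts all five array tasks (assuming it is saved as array_job.sh, a hypothetical name; 12345 stands in for whatever job ID SLURM assigns):
+```
+$ sbatch array_job.sh
+Submitted batch job 12345
+$ ls job_out/
+test_job_12345_1.out  test_job_12345_2.out  test_job_12345_3.out
+test_job_12345_4.out  test_job_12345_5.out
+```
+Note that the job_err and job_out directories need to exist before the tasks run; SLURM will not create them, and a task whose output path cannot be written will fail.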
+
 
 
 # Content #
-- 
GitLab