From c424129841e529b8e368a10079ddd91c180daa32 Mon Sep 17 00:00:00 2001
From: Ravi Tripathi <ravi89@uab.edu>
Date: Tue, 14 Aug 2018 17:59:58 -0500
Subject: [PATCH] Adding method2 of creating bash script in README

---
 bash_array_jobs/README.md | 40 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/bash_array_jobs/README.md b/bash_array_jobs/README.md
index 19a18a4..08df903 100644
--- a/bash_array_jobs/README.md
+++ b/bash_array_jobs/README.md
@@ -6,10 +6,46 @@ Often times, one needs to submit a huge number of very similar jobs on the clust
 
 ## Manual process##
 The most basic method would be to create a [SLURM](https://docs.uabgrid.uab.edu/wiki/Slurm#Batch_Job) job script, and copy it over and over again, till you have an individual job script for each subject, and then you can make the minor tweak, so that each job is processing on the correct subject.
-*Problem*: Really time consuming.
+*Problem:*
+* Really time-consuming.
 
 ## Bash Scripting ##
-
+The second method would be to create a bash script that loops through the files in your DATA directory, creates a job script for each one, and submits it:
+```
+for file_name in "$data_dir"/*
+do
+    # Do something here. Each iteration gets a unique value (a path under
+    # $data_dir) in the $file_name variable.
+    echo "$file_name"
+done
+```
+
+*Problem:*
+* This approach works well for a small number of files (around 20), but does not scale to large numbers.
+* SLURM will have to track each job/file individually, which is not very efficient for the scheduler, so you might have to increase the time between job submissions.
+
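+Concretely, each pass through the loop can write out a small job script and submit it with sbatch. The sketch below is one minimal way to do this; the $data_dir path, the SBATCH directives, and the echo command are placeholders that you would adapt to your own data and cluster:
+```
+#!/bin/bash
+data_dir=/path/to/data    # hypothetical data directory
+for file_name in "$data_dir"/*
+do
+    job_name=$(basename "$file_name")
+    # Write a minimal SLURM job script for this file.
+    cat > "job_${job_name}.sh" <<EOF
+#!/bin/bash
+#SBATCH --job-name=${job_name}
+#SBATCH --time=01:00:00
+#SBATCH --mem=1G
+echo "Processing ${file_name}"
+EOF
+    # Submit the generated script to SLURM.
+    sbatch "job_${job_name}.sh"
+done
+```
+
+Each generated script is an ordinary SLURM batch script, so you can inspect or resubmit any one of them on its own.
+
 
 # Content #
 
--
GitLab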