Slurm sbatch on multiple nodes with 1 GPU each to parallelize cross-validation
As the title says, I am trying to refine a bash sbatch script to start a job that requests 5 nodes with 1 GPU each (and thus 1 task per node), so that I can run a 5-fold cross-validation in parallel, one fold per node.
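To make the intent concrete, the sketch below is roughly the shape of script I have in mind (just a sketch: the job name, time limit, `--cpus-per-task` value, and the `train_fold.py` / `--fold` names are placeholders for my real setup, and the `--gres` syntax may differ depending on how the cluster is configured):

```bash
#!/bin/bash
#SBATCH --job-name=cv5fold          # arbitrary job name
#SBATCH --nodes=5                   # one node per fold
#SBATCH --ntasks-per-node=1         # one task per node
#SBATCH --gres=gpu:1                # one GPU per node (gres syntax depends on the cluster)
#SBATCH --cpus-per-task=4           # placeholder; adjust to the real per-fold CPU need
#SBATCH --time=02:00:00             # placeholder walltime

# Launch the 5 folds in parallel, one job step per node/fold.
# train_fold.py and its --fold argument are placeholders for the actual training script.
for fold in 0 1 2 3 4; do
    srun --nodes=1 --ntasks=1 --exclusive python train_fold.py --fold "$fold" &
done
wait   # wait for all folds to finish before the job ends
```

(On some Slurm versions each `srun` step may also need its own `--gres=gpu:1`, or `--exact` instead of `--exclusive`, so the GPUs are split correctly between steps.)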
For the moment, my actual script looks like this: