Optimal way of assigning SLURM tasks/nodes/cpus/arrays for many independent jobs
I'd like to run 100 independent Python simulations. The bottleneck in each simulation is NumPy computation. I'm currently submitting something like
```bash
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=2
#SBATCH --array=1-100

srun python -u mycode.py
```
but I'm really confused about the best way to go about this. Should I request more tasks per node? Should I use more CPUs per task? I don't have a good understanding of how these options relate to one another.
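One detail worth checking regardless of the layout you choose: NumPy's BLAS backend will happily spawn one thread per physical core it sees, not per core SLURM allocated to the task, which causes oversubscription when many array tasks share a node. A common workaround is to cap the thread pools from the `SLURM_CPUS_PER_TASK` environment variable before NumPy is imported. The sketch below assumes a standard OpenBLAS/MKL/OpenMP build of NumPy; the variable names are the usual ones but your cluster's build may differ.

```python
import os

# SLURM exports SLURM_CPUS_PER_TASK when --cpus-per-task is set.
# Cap the common BLAS/OpenMP thread pools to that allocation *before*
# importing numpy, so each array task stays on its own CPUs.
cpus = os.environ.get("SLURM_CPUS_PER_TASK", "1")
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ[var] = cpus

import numpy as np

# A BLAS-bound operation (matrix multiply) now uses at most `cpus` threads.
rng = np.random.default_rng(0)
a = rng.standard_normal((500, 500))
b = a @ a.T
print(b.shape)
```

With this in place, `--cpus-per-task` directly controls how many cores each simulation's NumPy calls use, which makes the array-of-single-task layout in the question a reasonable starting point.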
Sources
Source: Stack Overflow, licensed under CC BY-SA 3.0.
