SLURM: how to limit the number of CPUs for a specific node in one partition when the node is in 2 partitions?
Actually, I found a very similar question to mine. The only difference is that the nodes in my small cluster have different numbers of CPUs. (The similar question is here)
For example, the nodes in my cluster are:
- node1, 36 CPUs
- node2, 32 CPUs
- node3, 24 CPUs + 1 GPU
- node4, 16 CPUs + 1 GPU
I have 2 partitions: cpu (all nodes) and gpu (node3,4).
How can I leave 4 CPUs on node3 and node4 for the gpu partition? In other words, how do I configure Slurm so that the cpu partition includes all CPUs on node1 and node2, 20 CPUs on node3, and 12 CPUs on node4?
(The parameter MaxCPUsPerNode doesn't meet my needs.)
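For reference, the layout described above would correspond to a slurm.conf roughly like the following sketch (node names, CPU counts, and GPU counts are taken from the list; the Gres, State, and partition options are illustrative assumptions):

```
# Node definitions (CPU counts as listed above; Gres/State values are assumed)
NodeName=node1 CPUs=36 State=UNKNOWN
NodeName=node2 CPUs=32 State=UNKNOWN
NodeName=node3 CPUs=24 Gres=gpu:1 State=UNKNOWN
NodeName=node4 CPUs=16 Gres=gpu:1 State=UNKNOWN

# Partitions: cpu spans all nodes, gpu spans only the GPU nodes
PartitionName=cpu Nodes=node1,node2,node3,node4 Default=YES MaxTime=INFINITE State=UP
PartitionName=gpu Nodes=node3,node4 MaxTime=INFINITE State=UP
```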
Thanks!
Solution 1:[1]
Using the consumable trackable resources plugin (https://slurm.schedmd.com/cons_res.html) instead of the default node allocation plugin, you can set DefCpuPerGPU to 4 (see details on setting this variable and enabling cons_tres in your slurm.conf in the documentation here: https://slurm.schedmd.com/cons_res.html#using_cons_tres).
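A minimal sketch of the relevant slurm.conf lines, assuming Slurm 19.05 or later where select/cons_tres and DefCpuPerGPU are available (the SelectTypeParameters value and other partition options are illustrative assumptions):

```
# Enable the consumable trackable resources plugin
SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory

# Default 4 CPUs per allocated GPU for jobs in the gpu partition
PartitionName=gpu Nodes=node3,node4 DefCpuPerGPU=4 MaxTime=INFINITE State=UP
```

DefCpuPerGPU can be set either per partition, as above, or globally in slurm.conf.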
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | pcamach2 |
