Using different GPU types with SLURM on the same node

I'm setting up a GPU "cluster" to be used by a group of engineers as a pool of resources for training DL models. We don't expect to use multiple GPUs per job.

From the documentation, it seems possible to have different types of GPUs on the same node, but I have often heard that this is not recommended.
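For reference, this is roughly the configuration I had in mind; it's only a sketch, and the node name and device paths are placeholders. The `Type=` tag in `gres.conf` is what would let jobs target a specific GPU model:

```
# gres.conf on the mixed node (hypothetical node name and device paths)
NodeName=gpunode01 Name=gpu Type=a100 File=/dev/nvidia0
NodeName=gpunode01 Name=gpu Type=a10  File=/dev/nvidia[1-3]

# slurm.conf (only the GPU-relevant parts shown)
GresTypes=gpu
NodeName=gpunode01 Gres=gpu:a100:1,gpu:a10:3 State=UNKNOWN
```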

Is there any specific reason why one shouldn't have a heterogeneous GPU configuration on the same compute node?

Note: in my case, it's a mix of one A100 and three A10 GPUs.
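The idea would be that each job requests a specific GPU type, e.g. (script names are hypothetical):

```
# Request one A10 for an ordinary training job
sbatch --gres=gpu:a10:1 train.sh

# Request the single A100 for a job that needs the larger GPU
sbatch --gres=gpu:a100:1 train_big.sh
```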



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
