Use all 24 GB for one application on the K80 GPU

The Tesla K80 has 24 GB of memory, but as far as I understand it is split between the two GK210 GPUs on the same card, so in practice it is a card with two 12 GB GPUs. Is it still possible to use all 24 GB for one application, e.g. training a large model in PyTorch or Keras?
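For context, here is a minimal PyTorch sketch of what I mean (it assumes the K80 shows up as `cuda:0` and `cuda:1`; the layer sizes are arbitrary). It lists the two GK210 devices and splits a toy model across them, so each half lives in its own 12 GB pool rather than a single 24 GB pool:

```python
import torch
import torch.nn as nn

# A K80 typically enumerates as two separate CUDA devices (one per GK210),
# each with its own ~12 GB of memory; the pools are not automatically shared.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i}: {props.name}, {props.total_memory / 1024**3:.1f} GB")

# Toy model split across both devices (simple model parallelism):
# each stage is allocated in one 12 GB pool, so the combined model
# can be larger than what a single GK210 could hold.
class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(4096, 4096).to("cuda:0")
        self.stage2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        x = self.stage1(x.to("cuda:0"))
        # activations are copied from the first GK210 to the second
        x = self.stage2(x.to("cuda:1"))
        return x

model = SplitModel()
out = model(torch.randn(8, 4096))
print(out.shape, out.device)
```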



Sources

Source: Stack Overflow, licensed under CC BY-SA 3.0.