More than one GPU in Google Colab

I was trying to run a Python project from GitHub on Google Colab. The code needs 2 GPUs to run. I installed PyTorch with this command: !conda install pytorch=0.4.1 cuda90 -c pytorch. When I try to run the code, I get a CUDA device error:

Traceback (most recent call last):
  File "train.py", line 188, in <module>
    encoder.cuda(encoder_gpus[0])
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 258, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 185, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 185, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 191, in _apply
    param.data = fn(param.data)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 258, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error (10): invalid device ordinal

Is that error caused by the number of GPUs needed to run the code? How many GPUs are available in Google Colab, and how can I access more than one GPU there? (Is it possible to access more GPUs with Colab Pro?)



Solution 1:[1]

You can only have 1 GPU in Colab. The "invalid device ordinal" error means the code is asking for a GPU index (here, a second device) that doesn't exist on the machine.
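Since Colab exposes a single GPU (device 0), any call like the question's encoder.cuda(encoder_gpus[0]) fails as soon as the configured id is above 0. A minimal workaround sketch, assuming the repo keeps its device ids in lists like encoder_gpus, is to fold the requested ids onto the devices actually present (clamp_gpu_ids is a hypothetical helper; in the real script, available would come from torch.cuda.device_count()):

```python
def clamp_gpu_ids(requested_ids, available):
    """Map requested GPU indices onto the GPUs actually present.

    With Colab's single GPU (available == 1), every requested id
    collapses to device 0, avoiding 'invalid device ordinal'.
    """
    if available == 0:
        return []          # no GPU at all: fall back to CPU
    return [i % available for i in requested_ids]

# A two-GPU config on a one-GPU Colab runtime: both halves land on device 0.
print(clamp_gpu_ids([0, 1], available=1))  # [0, 0]
```

The same effect can be had by simply replacing every hard-coded device id in the script with 0, at the cost of losing the model parallelism the original code was written for.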

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Rexcirus