Building PyTorch with an older CUDA version?

The reason I'm interested in this is the hardware available at my workplace. We have GTX 1080s as well as RTX 3080s, and not everyone has access to the more modern GPUs. Nvidia has launched its Docker support, which allows building and running CUDA code in containers. This is great because, as long as the driver supports the specific CUDA version, one can easily migrate software between multiple generations of GPUs. However, this doesn't solve the problem of popular frameworks and tools (such as PyTorch) moving forward and dropping support for old hardware.

Currently, the official minimum requirement is CUDA 10.2. However, in my experience (not only with CUDA but with libraries in general), developers often raise version requirements not because they actually use new features, but for easier setup, available support, or quality-of-life improvements. I am unable to find any information on the actual compute requirements of PyTorch, which leads me to believe there might be a way to backport it to older CUDA versions (in my case 8.0, but the lower, the better).
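To make the constraint concrete, here is a minimal sketch (the helper name and lookup table are mine, not from PyTorch) that encodes the published compute capabilities of the two GPUs mentioned above and the first CUDA toolkit release able to target each one. It illustrates why a CUDA 8.0 build could in principle cover the GTX 1080 but never the RTX 3080:

```python
# Compute capability (SM version) per GPU, and the first CUDA toolkit
# release that can generate native code for that SM version.
# Values are from NVIDIA's published compute-capability tables;
# the dict and function names are illustrative, not a PyTorch API.
GPU_INFO = {
    "GTX 1080": {"sm": (6, 1), "first_cuda": (8, 0)},   # Pascal, sm_61
    "RTX 3080": {"sm": (8, 6), "first_cuda": (11, 1)},  # Ampere, sm_86
}

def buildable_with(gpu: str, cuda_version: tuple) -> bool:
    """True if the given CUDA toolkit can emit native code for this GPU."""
    return cuda_version >= GPU_INFO[gpu]["first_cuda"]

print(buildable_with("GTX 1080", (8, 0)))   # True:  Pascal is supported since CUDA 8.0
print(buildable_with("RTX 3080", (8, 0)))   # False: sm_86 needs CUDA 11.1+
```

So even if a backport to CUDA 8.0 succeeded, it would only serve the Pascal cards; the Ampere machines would still need a modern toolkit.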

Does anyone know of such attempts, or can anyone provide more details on the actual CUDA feature requirements (not just the version number) of PyTorch?

PS: Yes, I know I could use a much older version of PyTorch, but that's not the point here.



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.