How to use CPU memory as additional memory for GPU
I am trying to train a memory-hungry model written in PyTorch. My CPU has plenty of RAM and can handle a large batch size, while my GPU is much faster but limited in memory. I would like to make the CPU's memory usable by the GPU somehow. I'm aware that batch size is the main factor in memory usage, but shrinking it is not an option, because I need results for one specific batch size.
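One workaround worth mentioning up front: gradient accumulation keeps the *effective* batch size fixed while only small micro-batches ever occupy the GPU, so it cuts peak memory without changing the batch size the optimizer sees. A minimal sketch, where `model`, `loader`, and friends are hypothetical stand-ins for the real training script:

```python
import torch
from torch import nn

# Hypothetical toy stand-ins; substitute the real model, data, and loss.
model = nn.Linear(1024, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
loader = [(torch.randn(32, 1024), torch.randint(0, 10, (32,)))
          for _ in range(64)]

accum_steps = 8  # 8 micro-batches of 32 = one effective batch of 256

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(loader):
    inputs, targets = inputs.cuda(), targets.cuda()
    # Scale so the accumulated gradient averages over the full effective batch.
    loss = loss_fn(model(inputs), targets) / accum_steps
    loss.backward()  # gradients sum into .grad across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Note that this is not bit-identical to a true large batch if the model uses batch normalization, since BN statistics are computed per micro-batch.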
I found the exact same problem on the PyTorch discussion forum. It turns out there is a module called "PyTorch Large Model Support" designed specifically for swapping tensors between CPU and GPU memory, but it is not maintained for the latest versions of PyTorch and the CUDA toolkit, so I'm running into compatibility problems with it.
Any kind of help is appreciated, whether an existing library or a workaround.
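If LMS itself can't be made to build, recent stock PyTorch releases (1.10+, if I remember right) ship `torch.autograd.graph.save_on_cpu`, a context manager that does a similar kind of swapping for the tensors autograd saves for the backward pass: they are parked in (optionally pinned) host RAM and copied back to the GPU on demand. A minimal sketch, with a hypothetical toy model standing in for the real one:

```python
import torch
from torch import nn

# Hypothetical toy model and data; substitute the real ones.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(),
                      nn.Linear(4096, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
inputs = torch.randn(512, 1024, device="cuda")
targets = torch.randint(0, 10, (512,), device="cuda")

optimizer.zero_grad()
# Activations saved for backward live in pinned host RAM instead of VRAM
# and are transferred back over PCIe during the backward pass.
with torch.autograd.graph.save_on_cpu(pin_memory=True):
    loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
```

The trade-off is PCIe bandwidth for VRAM, so each step gets slower; combining it with activation checkpointing (`torch.utils.checkpoint`) can reduce memory further.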
