Does a computational graph with a common node in PyTorch waste memory?
This image shows my computational graph (something like a DCGAN): two branches, G1 and G2, share a common node. I first call `backward` on the last intermediate node of G1 with `retain_graph=True`, and then call `backward` on the last intermediate node of G2 with `retain_graph=False`.
My question: is graph G1 still in memory after the second call?
I want to train my network for N epochs. How many G1 graphs remain in memory?
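For reference, the setup described above can be sketched roughly like this (the tensor shapes and the operations used for the common node and the two branches are made-up stand-ins, since the actual DCGAN-style graph is only shown in the image):

```python
import torch

# Hypothetical stand-in for the described graph:
# a shared "common node" feeds two branches, G1 and G2.
x = torch.randn(4, 8, requires_grad=True)
w = torch.randn(8, 8, requires_grad=True)

common = x @ w            # the common node shared by both branches
g1 = common.pow(2).sum()  # last intermediate node of branch G1
g2 = common.sin().sum()   # last intermediate node of branch G2

# retain_graph=True keeps the saved intermediate buffers,
# so a second backward through the shared node is still possible.
g1.backward(retain_graph=True)

# retain_graph=False (the default) lets autograd free the buffers
# along this backward path, including the shared part of the graph.
g2.backward(retain_graph=False)

# Gradients from both branches have accumulated into w.grad.
print(w.grad.shape)
```

Note that buffers belonging exclusively to G1's branch are not traversed by the second `backward`; like any graph built in a training loop, they are released once the tensors holding the graph go out of scope in the next iteration.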
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow