Handling operations with large tensors and memory usage in TensorFlow
I'm trying to perform an outer concatenation in TensorFlow, combining two 2D tensors into a third so that two m-by-n tensors produce an m-by-m-by-n² tensor. In the past, when building a new tensor, I've preallocated space for the entire data set, which here would mean a tensor of shape S by m by m by n², where S is the total number of samples. That takes up too much memory. What are my options for computing the outer concatenation without overloading my RAM? Should I implement it as an individual layer, for instance, and if so, how? Thank you for any advice.
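For reference, here is roughly what I mean. This is a minimal sketch: `outer_concat` and `OuterConcat` are hypothetical names, and I'm assuming the operation is the pairwise outer product of rows, since that matches the n² feature dimension in the output shape.

```python
import tensorflow as tf

# A minimal sketch, assuming "outer concatenation" means taking the outer
# product of every row of `a` with every row of `b` (this matches the
# m-by-m-by-n^2 output shape described above). Names are hypothetical.
def outer_concat(a, b):
    # a, b: (batch, m, n)
    # Broadcasting yields (batch, m, m, n, n): row i of `a` against row j of `b`.
    outer = a[:, :, None, :, None] * b[:, None, :, None, :]
    batch, m, n = tf.shape(a)[0], tf.shape(a)[1], tf.shape(a)[2]
    return tf.reshape(outer, (batch, m, m, n * n))

# Wrapping the op in a layer applies it one batch at a time, so only a
# (batch_size, m, m, n^2) block is ever materialized in memory rather
# than the full (S, m, m, n^2) tensor.
class OuterConcat(tf.keras.layers.Layer):
    def call(self, inputs):
        a, b = inputs
        return outer_concat(a, b)
```

Feeding the data through a `tf.data` pipeline with a modest `batch()` size would then keep peak memory bounded by the batch dimension rather than by S, if I understand the layer approach correctly.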
