How to run PyTorch inference on multiple models in parallel?
I have 16 models (3-layer neural networks) with different parameters. I want to load all 16 models onto the device and run inference on 16 different inputs, one per model, in parallel. Is there a way to do this in PyTorch?
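One common approach, assuming all 16 models share the same architecture, is the `torch.func` model-ensembling pattern: stack the 16 parameter sets into batched tensors with `stack_module_state`, then `vmap` a functionalized forward pass over the stacked parameters and the 16 inputs. The sketch below is illustrative, not authoritative; the `Net` architecture and all tensor shapes are hypothetical stand-ins for the question's 3-layer networks.

```python
import copy
import torch
import torch.nn as nn
from torch.func import stack_module_state, functional_call

# Hypothetical 3-layer network standing in for the question's models.
class Net(nn.Module):
    def __init__(self, in_dim=32, hidden=64, out_dim=10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.layers(x)

device = "cuda" if torch.cuda.is_available() else "cpu"

# 16 models with different parameters, all on the same device.
models = [Net().to(device).eval() for _ in range(16)]

# Stack the 16 parameter/buffer sets into single batched tensors.
params, buffers = stack_module_state(models)

# A "meta" copy of the model is used only for its structure;
# the real weights come from the stacked params at call time.
base = copy.deepcopy(models[0]).to("meta")

def call_one(p, b, x):
    # Run one model's forward pass with one parameter set.
    return functional_call(base, (p, b), (x,))

# 16 different inputs, one per model: (models, batch, features).
xs = torch.randn(16, 8, 32, device=device)

with torch.no_grad():
    # vmap over the stacked parameters and the leading input dim,
    # so all 16 forward passes run as one batched computation.
    outs = torch.vmap(call_one)(params, buffers, xs)

print(outs.shape)  # (16, 8, 10)
```

This only works when the models are architecturally identical, since their parameters must stack into common tensors. If the architectures differ, alternatives include launching each model on its own CUDA stream or simply looping, though neither batches the math the way `vmap` does.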
