How does dataset size affect iteration speed when training a neural network model?
I train a CNN model in PyTorch on two datasets: a small one consisting of ~100,000 annotated images and a big one with ~3,500,000 (35 times larger). Training becomes slower on the large dataset: the rate drops from 60 it/s to 30 it/s. I use the same batch size, number of workers, and all other parameters. I thought training speed should not depend on dataset size.
What could be the reasons for this behavior?
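One way to narrow this down is to time the input pipeline on its own, without the forward/backward pass: if iterating the loader alone is already slower on the big dataset, the bottleneck is data loading (e.g. disk reads that no longer fit in the OS page cache), not the model. Below is a minimal, stdlib-only sketch of such a timer; `fake_loader` is a hypothetical stand-in for a real `DataLoader`, which you would pass in directly.

```python
import time

def iters_per_second(loader, warmup=5, iters=50):
    """Measure iteration throughput of a loader, skipping warmup batches.

    Compare this number for the bare loader against your observed
    training it/s: a large gap on the big dataset points at I/O,
    not compute.
    """
    it = iter(loader)
    for _ in range(warmup):
        next(it)  # let caches, worker processes, etc. warm up
    start = time.perf_counter()
    for _ in range(iters):
        next(it)
    elapsed = time.perf_counter() - start
    return iters / elapsed

# Hypothetical stand-in for a DataLoader: each "batch" takes ~1 ms to produce.
def fake_loader():
    while True:
        time.sleep(0.001)
        yield [0] * 32

rate = iters_per_second(fake_loader())
print(f"{rate:.0f} it/s")
```

Running the same measurement with and without `num_workers` set, and on both datasets, usually makes the source of the slowdown obvious.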
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
