How to find the size of a deep learning model?
I am working with different quantized implementations of the same model, the main difference being the precision of the weights, biases, and activations. I'd like to know how to find the difference in size, in MB, between a model stored in 32-bit floating point and one stored in int8. I have the models saved in .pth format.
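Since the checkpoints are already saved as .pth files, the most direct comparison is their size on disk. A minimal sketch using only the standard library (the checkpoint file names here are hypothetical stand-ins; point the function at your own saved models):

```python
import os

def file_size_mb(path):
    """On-disk size of a saved checkpoint, in MiB."""
    return os.path.getsize(path) / 1024**2

# Hypothetical file names -- substitute your own .pth checkpoints
for path in ('model_fp32.pth', 'model_int8.pth'):
    if os.path.exists(path):
        print(f'{path}: {file_size_mb(path):.3f} MB')
```

Note that serialized checkpoints also contain metadata (tensor names, shapes, pickling overhead), so the on-disk numbers will be slightly larger than the raw weight bytes.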
Solution 1:
You can count the elements in all parameters and buffers, multiply each count by its element size in bytes, and sum the results to get the total size of the model's weights:
```python
import torchvision.models as models

model = models.resnet18()

# Bytes occupied by all parameters (weights and biases)
param_size = 0
for param in model.parameters():
    param_size += param.nelement() * param.element_size()

# Buffers (e.g. BatchNorm running stats) also count toward the saved size
buffer_size = 0
for buffer in model.buffers():
    buffer_size += buffer.nelement() * buffer.element_size()

size_all_mb = (param_size + buffer_size) / 1024**2
print('Size: {:.3f} MB'.format(size_all_mb))
```
For resnet18 in float32 this prints approximately:
Size: 44.629 MB
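To estimate the gap between an fp32 and an int8 copy of the same network, the same bytes-per-element arithmetic applies. A sketch in plain Python (the parameter count below is an assumed figure roughly matching resnet18; real quantized checkpoints also store scales and zero-points, so the on-disk ratio is a bit below 4x):

```python
def model_size_mb(num_elements, bytes_per_element):
    """Rough weight size: element count times bytes per element, in MiB."""
    return num_elements * bytes_per_element / 1024**2

# Assumed parameter count, roughly that of resnet18
n = 11_689_512

fp32_mb = model_size_mb(n, 4)  # float32: 4 bytes per element
int8_mb = model_size_mb(n, 1)  # int8: 1 byte per element

print(f'fp32: {fp32_mb:.3f} MB, int8: {int8_mb:.3f} MB, '
      f'ratio: {fp32_mb / int8_mb:.1f}x')
```

The ratio is exactly 4x under this simplified model, since only the bytes-per-element changes.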
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Leonard |
