How to perform TensorFlow quantization-aware training on a non-tf.keras.Model type model?
I want to quantize a model that is a custom wrapper class, not a tf.keras.Model. It has forward(), loss(), trainable_variables(), etc. methods, and backpropagation is done manually with tf.GradientTape. Does anyone have experience doing quantization-aware training in such a scenario? Thank you!
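For context, a minimal sketch of the kind of setup described above — a hypothetical wrapper class (the name `WrapperModel` and its internals are assumptions, not from the question) exposing forward(), loss(), and trainable_variables(), trained with tf.GradientTape rather than model.fit():

```python
import tensorflow as tf

tf.random.set_seed(0)


class WrapperModel:
    """Hypothetical wrapper model; NOT a tf.keras.Model subclass."""

    def __init__(self):
        # A trivial linear layer stands in for the real network.
        self.w = tf.Variable(tf.random.normal([4, 1]))
        self.b = tf.Variable(tf.zeros([1]))

    def forward(self, x):
        return tf.matmul(x, self.w) + self.b

    def loss(self, x, y):
        return tf.reduce_mean(tf.square(self.forward(x) - y))

    def trainable_variables(self):
        return [self.w, self.b]


model = WrapperModel()
opt = tf.keras.optimizers.SGD(learning_rate=0.01)
x = tf.random.normal([8, 4])
y = tf.random.normal([8, 1])

loss_before = float(model.loss(x, y))

# Manual training loop with GradientTape, as in the question.
for _ in range(20):
    with tf.GradientTape() as tape:
        loss = model.loss(x, y)
    grads = tape.gradient(loss, model.trainable_variables())
    opt.apply_gradients(zip(grads, model.trainable_variables()))

loss_after = float(model.loss(x, y))
```

Note that the TensorFlow Model Optimization quantization API (`tfmot.quantization.keras.quantize_model`) expects a tf.keras.Model, which is exactly why a wrapper like this does not fit it directly.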
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
