Is there a way to upscale an image using a layer in a machine learning model?

For instance, suppose I have a (32, 32, 1) grayscale input image. I want to use EfficientNet or any other pre-trained model to classify the data (or basically use transfer learning of any sort). Since EfficientNet requires a minimum input size of (75, 75, 3), is it possible to upscale my image using ONLY model weights?

For example, any combination of Conv2D, Dense, etc. that would work for my use case.



Solution 1:[1]

A Conv2D layer (with the default 'valid' padding and strides of 1 or more) can only keep or shrink the spatial size of the image.

You could instead use a 'deconvolution' layer, Conv2DTranspose (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2DTranspose), which is trainable; for example, strides=(3, 3) multiplies both the width and the height of the image by 3.

An example of its use is given in the DCGAN tutorial: https://www.tensorflow.org/tutorials/generative/dcgan
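Putting this together, here is a minimal sketch of what the asker describes. The answer only specifies Conv2DTranspose; the EfficientNetB0 backbone, the 96×96 target size (32 × 3, which satisfies the 75×75 minimum), and the 10-class softmax head are illustrative assumptions:

```python
import tensorflow as tf

# Assumed setup: upscale a (32, 32, 1) grayscale input to (96, 96, 3)
# with a single trainable Conv2DTranspose layer, then feed the result
# to a frozen pre-trained backbone.
inputs = tf.keras.Input(shape=(32, 32, 1))

# strides=(3, 3) with padding="same" triples the spatial size: 32 -> 96 (>= 75).
# filters=3 produces the 3 channels the pre-trained backbone expects.
x = tf.keras.layers.Conv2DTranspose(
    filters=3, kernel_size=3, strides=(3, 3), padding="same")(inputs)

backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet",
    input_shape=(96, 96, 3), pooling="avg")
backbone.trainable = False  # transfer learning: freeze the pre-trained weights

x = backbone(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # 10 classes, as an example
model = tf.keras.Model(inputs, outputs)
```

Because the Conv2DTranspose weights are part of the model, the upscaling itself is learned end to end along with the classifier head.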

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Pierrotb1