Creating a dataset from multiple hyperspectral images of different spatial dimensions for object classification using deep learning models

I have hyperspectral images with different spatial dimensions but the same spectral bands. I want to bring all of these images to a standard size so that I can create a dataset and train deep learning models such as R-CNN, ResNet, GANs, etc. on it.

My data consists of HS (hyperspectral) images of varied spatial dimensions, e.g. HSI-1 = (340, 500, 700), HSI-2 = (245, 240, 700).

Since neural networks require images of the same dimensions, e.g. (256, 256), it is fairly easy to resize an RGB image using various machine vision techniques and create consistent data.

In the case of hyperspectral images, I cannot use the same techniques that work for RGB images, e.g. OpenCV's resize functions. Moreover, I believe any such technique may affect the spectral channel information. I've tried simply cropping a fixed range of pixels, e.g. (256, 256) starting from the center of the image, but obviously the crop will vary from image to image, and this wouldn't work with images smaller than (256, 256).
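For reference, the center-crop idea described above can be sketched in pure NumPy (with zero-padding added for images smaller than the target, which the plain crop cannot handle). This is only a minimal sketch, not a recommended solution: it assumes a (height, width, bands) array layout matching the example shapes, and it leaves all 700 spectral bands untouched by operating on the spatial axes only.

```python
import numpy as np

def crop_or_pad(cube, target=(256, 256)):
    """Center-crop the spatial dims of an HSI cube (H, W, bands) to `target`,
    zero-padding any spatial dimension smaller than the target.
    The spectral axis is carried along unchanged."""
    th, tw = target
    h, w, _bands = cube.shape

    # Pad first if either spatial dimension is too small.
    pad_h = max(th - h, 0)
    pad_w = max(tw - w, 0)
    if pad_h or pad_w:
        cube = np.pad(
            cube,
            ((pad_h // 2, pad_h - pad_h // 2),
             (pad_w // 2, pad_w - pad_w // 2),
             (0, 0)),  # never pad the spectral axis
            mode="constant",
        )
        h, w, _bands = cube.shape

    # Center crop to the target spatial size.
    top = (h - th) // 2
    left = (w - tw) // 2
    return cube[top:top + th, left:left + tw, :]

# The example dimensions from the question (700 bands each);
# uint8 is used here only to keep the sketch light on memory.
hsi_1 = np.zeros((340, 500, 700), dtype=np.uint8)
hsi_2 = np.zeros((245, 240, 700), dtype=np.uint8)

print(crop_or_pad(hsi_1).shape)  # (256, 256, 700)
print(crop_or_pad(hsi_2).shape)  # (256, 256, 700) -- both spatial dims padded
```

As the question notes, a crop like this discards different image regions depending on the original size, so it standardizes shapes without addressing that concern.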

So my question is this: what approach should I follow to bring my HSI images into a standard format and create a dataset?

If you can suggest any related research paper that explains this, that would be a great help as well. Thank you.



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow