Wrong grayscale conversion algorithm in torchvision?

I have a dataset of color images that I want to convert to grayscale and use to train my CNN. Should all color channels contribute equally to the grayscale value, or should there be a different ratio between them?

In torchvision's Grayscale transform the formula is L = R * 0.2989 + G * 0.5870 + B * 0.1140. Those are the RGB ratios recommended by the BT.601 standard, based on how the human eye perceives the brightness of each color.
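To make the formula concrete, here is a minimal sketch of that weighted sum applied to a toy image. I use NumPy purely for illustration; the image values and shape are my own assumptions, not taken from torchvision internals:

```python
import numpy as np

# Toy RGB image, shape (H, W, 3), float values in [0, 1] (hypothetical data)
img = np.array([[[1.0, 0.0, 0.0],     # pure red pixel
                 [0.0, 1.0, 0.0]],    # pure green pixel
                [[0.0, 0.0, 1.0],     # pure blue pixel
                 [0.5, 0.5, 0.5]]])   # mid-gray pixel

# BT.601 luma weights, matching the formula quoted above
weights = np.array([0.2989, 0.5870, 0.1140])

# Weighted sum over the channel axis: L = R*0.2989 + G*0.5870 + B*0.1140
gray = img @ weights
print(gray)
# Pure red maps to 0.2989, pure green to 0.5870, pure blue to 0.1140
```

Note how a pure green pixel ends up much brighter than a pure blue one of the same intensity, which is exactly the perceptual weighting the question is asking about.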

But having different ratios for different colors seems pointless if we train the neural network on grayscale images: the weights adapt grayscale images to human perception, yet no human eye is involved when the network analyzes the camera sensor data.

Aren't we losing useful data by converting color images to grayscale with these perceptual ratios instead of weighting the channels equally?
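For comparison, an equal-weight average is just a different choice of coefficients; either way, three channel values collapse into one, so some color information is lost regardless of the weights. A small sketch of the two variants side by side (the toy pixel values are my own illustrative assumption):

```python
import numpy as np

# Two hypothetical pixels that differ only in hue, not total intensity
pixels = np.array([[0.9, 0.1, 0.0],   # reddish pixel
                   [0.0, 0.1, 0.9]])  # bluish pixel

# Perceptual (BT.601) weights vs. a plain equal-weight mean
bt601 = pixels @ np.array([0.2989, 0.5870, 0.1140])
equal = pixels.mean(axis=-1)

print(bt601)  # distinguishes the two pixels (red weighs more than blue)
print(equal)  # maps both pixels to the same gray value
```

This illustrates the trade-off: with equal weights these two distinct colors become indistinguishable in grayscale, while the BT.601 weights happen to separate them. Neither choice preserves the color information; they just discard different parts of it.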



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
