Torchvision transforms.ToTensor shows different range results; does not scale array to [0, 1] as the documentation says
I am converting the numpy array to tensor with the following code:
self.transform_1 = transforms.Compose([transforms.ToTensor()])
source_parsing_np = cv2.imread(source_parsing_path, cv2.IMREAD_GRAYSCALE)  # integer values in the range [0, 14]
source_parsing_tensor = self.transform_1(source_parsing_np)
According to the documentation, the data should be scaled to [0.0, 1.0]. But in my environment I get two different results at different times.
Specifically, in my earlier training/testing code and in the Jupyter notebook I'm testing now, the resulting tensor values remain integers in the range [0, 14], i.e. the wrong range.
When I run the same code again in the test phase, the data really is scaled to [0.0, 1.0], which differs from the earlier training phase. Meanwhile, another numpy array passed through the same transform is left unchanged (still integers in [0, 24]): the same transform produces different results.
Because of this I cannot reproduce my model's test results. I would be very thankful for any information on this.
Solution 1:[1]
I found the reason: the scaling happens only when the numpy.ndarray has dtype = np.uint8, but my dtype was np.long; sorry for my carelessness.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | coldheart |
