.float() in pytorch changes the value of an int [duplicate]
import torch
torch.set_printoptions(precision=1, sci_mode=False)
numeric_seq_id = 2021080918959999952
t = torch.tensor(numeric_seq_id)
tt = torch.tensor(numeric_seq_id).float() # !!!
print(t, tt)
The output is:
tensor(2021080918959999952) tensor(2021080905052848128.)
We can see that tt's value changes after the .float() conversion.
Why is there such a difference in the values?
PS: pytorch version = 1.10.1
python version = 3.8
Solution 1:[1]
This is not pytorch specific, but an artifact of how floating-point numbers are represented in memory (see this question for more details). A float32 carries a 24-bit significand, so it can represent every integer exactly only up to 2^24; a float64 carries 53 bits and is exact only up to 2^53 ≈ 9.0 × 10^15. The value 2021080918959999952 ≈ 2.0 × 10^18 exceeds both limits, so either conversion rounds it to the nearest representable value. We can see the same effect in numpy:
import numpy as np
np_int = np.int64(2021080918959999952)
np_float = np.float32(2021080918959999952)
np_double = np.float64(2021080918959999952)
print(np_int, int(np_float), int(np_double))
Output:
2021080918959999952 2021080905052848128 2021080918960000000
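As a quick sanity check of those limits (a minimal sketch, assuming PyTorch ≥ 1.10 is installed), the same rounding can be observed right at the exact-integer thresholds:
import torch

# float32: integers are exact only up to 2**24
f32_limit = 2 ** 24
print(torch.tensor(f32_limit).float().long() == f32_limit)          # tensor(True)
print(torch.tensor(f32_limit + 1).float().long() == f32_limit + 1)  # tensor(False), rounded

# float64: integers are exact only up to 2**53
f64_limit = 2 ** 53
print(torch.tensor(f64_limit).double().long() == f64_limit)          # tensor(True)
print(torch.tensor(f64_limit + 1).double().long() == f64_limit + 1)  # tensor(False), rounded

# the ID from the question sits far above both limits, so any float cast loses digits
numeric_seq_id = 2021080918959999952
print(torch.tensor(numeric_seq_id).float().long())   # tensor(2021080905052848128)
print(torch.tensor(numeric_seq_id).double().long())  # tensor(2021080918960000000)
If the IDs have to stay exact, keep the tensor in an integer dtype (e.g. torch.int64, which torch.tensor already picks by default for Python ints) instead of casting to any floating-point dtype.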
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Stack Overflow |
