Can tensors on different devices be added together?
I found something curious recently. As far as I know, when you want to perform an operation on two tensors, they should be on the same device. But when I write code like this, it runs without error, which is unexpected:
import torch
a = torch.tensor(1, device='cuda')
print(a.device)
b = torch.tensor(2, device='cpu')
print(b.device)
print(a + b)
cuda:0
cpu
tensor(3, device='cuda:0')
But similar code in my project does not work:
pts_1_tile = torch.tensor([[0], [0]], dtype=torch.float32)
torch.add(pred_4pt_shift, pts_1_tile)
Here pred_4pt_shift is an intermediate result of a sub-network, and it is a tensor on the GPU.
My question is: why does the first snippet work, while the second one raises a device-mismatch error?
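For reference, here is a minimal sketch of the usual workaround I am using for now, assuming pred_4pt_shift lives on cuda:0 (the tensor below is a hypothetical stand-in for the real sub-network output, which is not shown in my project code): move pts_1_tile onto the same device before calling torch.add.
import torch

# Hypothetical stand-in for the sub-network output described above;
# in the real project pred_4pt_shift is produced by a model on the GPU.
pred_4pt_shift = torch.rand(2, 1, device='cuda')

pts_1_tile = torch.tensor([[0], [0]], dtype=torch.float32)  # created on the CPU by default

# Move the CPU tensor to the same device as pred_4pt_shift before adding.
result = torch.add(pred_4pt_shift, pts_1_tile.to(pred_4pt_shift.device))
print(result.device)  # cuda:0
This avoids the error, but it does not explain why the first snippet is allowed to mix devices in the first place.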
Sources
Source: Stack Overflow, licensed under CC BY-SA 3.0 in accordance with Stack Overflow's attribution requirements.