PyTorch preferred way to copy a tensor

There seem to be several ways to create a copy of a tensor in PyTorch, including:

y = x.new_tensor(x) #a

y = x.clone().detach() #b

y = torch.empty_like(x).copy_(x) #c

y = torch.tensor(x) #d

b is explicitly preferred over a and d according to a UserWarning I get if I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable.

Any reasons for/against using c?
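
For reference, here is a minimal snippet that reproduces the warning (the exact wording varies by PyTorch version):

import torch

x = torch.arange(3.)
y = x.new_tensor(x)  # a: emits "UserWarning: To copy construct from a tensor..."
y = torch.tensor(x)  # d: emits the same UserWarning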



Solution 1:[1]

According to the PyTorch documentation, #a and #b are equivalent. It also says that

The equivalents using clone() and detach() are recommended.

So if you want to copy a tensor and detach it from the computation graph, you should use

y = x.clone().detach()

This is the cleanest and most readable option. With all the other versions there is some hidden logic, and it is also not 100% clear what happens to the computation graph and gradient propagation.
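For example, the copy shares no storage with the original and carries no autograd history, so modifying it is safe (a minimal sketch):

import torch

x = torch.ones(2, requires_grad=True)
y = x.clone().detach()
y += 1          # in-place change to the copy...
print(x)        # ...leaves the original untouched: tensor([1., 1.], requires_grad=True)
print(y)        # tensor([2., 2.])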

Regarding #c: it seems a bit too complicated for what it actually does and could also introduce some overhead, but I am not sure about that.
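
If you want to check the overhead claim yourself, here is a rough timing sketch (results will vary by machine and PyTorch version):

import timeit
import torch

x = torch.randn(1000, 1000)
print(timeit.timeit(lambda: x.clone().detach(), number=1000))
print(timeit.timeit(lambda: torch.empty_like(x).copy_(x), number=1000))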

Edit: It was asked in the comments why not just use .clone().

From the PyTorch docs:

Unlike copy_(), this function is recorded in the computation graph. Gradients propagating to the cloned tensor will propagate to the original tensor.

So while .clone() returns a copy of the data, it keeps the computation graph and records the clone operation in it. As mentioned, this means gradients propagated to the cloned tensor will also propagate to the original tensor. This behavior is not obvious and can lead to errors, so a tensor should only be copied via a bare .clone() if this behavior is explicitly wanted. To avoid these side effects, .detach() is added to disconnect the cloned tensor from the computation graph.

Since for a copy operation one generally wants a clean copy that can't lead to unforeseen side effects, the preferred way to copy a tensor is .clone().detach().
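
To make this concrete, here is a small sketch showing that gradients flow through a bare .clone() back to the original tensor, while .clone().detach() is cut off from the graph:

import torch

x = torch.ones(3, requires_grad=True)
y = x.clone()              # recorded in the computation graph
y.sum().backward()
print(x.grad)              # tensor([1., 1., 1.]): the clone propagated gradients to x

x2 = torch.ones(3, requires_grad=True)
z = x2.clone().detach()    # disconnected copy
print(z.requires_grad)     # False: no gradient can flow back to x2 through z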

Solution 2:[2]

PyTorch 1.1.0 now recommends #b and shows a warning for #d.

Solution 3:[3]

An example to check whether a tensor is actually copied:

import torch
def samestorage(x, y):
    # Two tensors share memory if their storages point to the same data.
    if x.storage().data_ptr() == y.storage().data_ptr():
        print("same storage")
    else:
        print("different storage")
a = torch.ones((1,2), requires_grad=True)
print(a)
b = a                             # plain assignment: same tensor object
c = a.data                        # detached view: same storage
d = a.detach()                    # detached: same storage
e = a.data.clone()                # copy: new storage
f = a.clone()                     # copy: new storage, still in the graph
g = a.detach().clone()            # detached copy: new storage
i = torch.empty_like(a).copy_(a)  # copy: new storage
j = torch.tensor(a) # UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).


print("a:",end='');samestorage(a,a)
print("b:",end='');samestorage(a,b)
print("c:",end='');samestorage(a,c)
print("d:",end='');samestorage(a,d)
print("e:",end='');samestorage(a,e)
print("f:",end='');samestorage(a,f)
print("g:",end='');samestorage(a,g)
print("i:",end='');samestorage(a,i)

Out:

tensor([[1., 1.]], requires_grad=True)
a:same storage
b:same storage
c:same storage
d:same storage
e:different storage
f:different storage
g:different storage
i:different storage
j:different storage

The tensor is copied whenever "different storage" shows up. PyTorch has almost 100 different constructors, so you could add many more ways.

If I needed to copy a tensor I would just use .clone(), since this also copies the AD (autograd) related info. If I needed to remove the AD related info, I would use:

y = x.clone().detach()
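
A quick check of that distinction (here requires_grad stands in for the autograd-related info):

import torch

x = torch.ones(2, requires_grad=True)
print(x.clone().requires_grad)           # True: the autograd info is kept
print(x.clone().detach().requires_grad)  # False: the copy is detached from autograd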

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: (no attribution given)
Solution 2: macharya
Solution 3: prosti