PyTorch view vs unsqueeze when adding a dummy dimension?
I have a PyTorch tensor of shape [n1, n2, n3], and I need to make its shape [n1, n2, n3, 1].
I know I can use either unsqueeze or view for this. Is there a difference between what each one would do in this case?
Solution 1:[1]
You can achieve this in four different ways, with slight differences between them. More precisely, you can:
- insert a new singleton dimension with `torch.Tensor.unsqueeze`:

  ```python
  >>> x.unsqueeze(-1)       # grad_fn=<UnsqueezeBackward0>
  ```

- use fancy indexing with `None` to add a new dimension, which is identical:

  ```python
  >>> x[..., None]          # grad_fn=<UnsqueezeBackward0>
  ```

- do similarly with `torch.Tensor.view`:

  ```python
  >>> x.view(*x.shape, 1)   # grad_fn=<ViewBackward0>
  ```

- add a new dimension with `torch.Tensor.reshape`:

  ```python
  >>> x.reshape(*x.shape, 1)  # grad_fn=<ReshapeAliasBackward0>
  ```
I have added the name of the backward gradient function as a comment next to each method. You can see that indexing and unsqueeze produce the same backward function, while view and reshape rely on two different ones.
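For concreteness, the four approaches can be checked side by side; the concrete shape [2, 3, 4] below is an arbitrary stand-in for [n1, n2, n3]:

```python
import torch

x = torch.rand(2, 3, 4)  # stand-in for a tensor of shape [n1, n2, n3]

a = x.unsqueeze(-1)
b = x[..., None]
c = x.view(*x.shape, 1)
d = x.reshape(*x.shape, 1)

# all four produce the same shape [n1, n2, n3, 1] and the same values
print(a.shape)  # torch.Size([2, 3, 4, 1])
print(torch.equal(a, b) and torch.equal(a, c) and torch.equal(a, d))
```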
Three of these methods (indexing, unsqueeze, and view) always return a view of the tensor, while reshape can return a copy if needed (i.e. when the data is not contiguous).
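You can verify the view-vs-copy distinction by comparing `data_ptr()` values: a view shares storage with the original tensor. In this particular case (appending a trailing size-1 dimension to a contiguous tensor) even reshape returns a view; it only falls back to a copy when a view is impossible, as the sketch below shows with a transposed, non-contiguous tensor:

```python
import torch

x = torch.rand(3, 4)

# appending a trailing dim to a contiguous tensor: all four share x's storage
assert x.unsqueeze(-1).data_ptr() == x.data_ptr()
assert x[..., None].data_ptr() == x.data_ptr()
assert x.view(*x.shape, 1).data_ptr() == x.data_ptr()
assert x.reshape(*x.shape, 1).data_ptr() == x.data_ptr()

# a transpose makes the tensor non-contiguous; flattening it cannot be a view
t = x.t()
assert not t.is_contiguous()
flat = t.reshape(-1)  # reshape silently falls back to a copy here
assert flat.data_ptr() != x.data_ptr()
```

(`t.view(-1)` would instead raise a RuntimeError in the non-contiguous case, which is the practical difference between the two.)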
You can read more about the differences between torch.view and torch.reshape on this thread.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Ivan |
