Can PyTorch compute gradients for composite tensors?

Suppose I have a parameter p and a big tensor a initialized as:

a = torch.empty(size=[50] + list(p.shape))

I then fill a as follows:

for i in range(a.shape[0]):
    a[i] = torch.pow(p, i)  # more complex computation here

EDIT

To add clarity, I will make the computation a bit more explicit:

Suppose I have a torch module net; I then compute a as follows:

import torch
import torch.nn as nn

p = torch.ones(10)
net = nn.Linear(p.shape[0], p.shape[0])

a = torch.empty(size=[50] + list(p.shape))
a[0] = p
for i in range(1, a.shape[0]):
    a[i] = net(a[i - 1])

I then use a to compute my loss, for example:

loss = a.sum()
loss.backward()

Can PyTorch compute the gradients through a, even though its subtensors are produced along different computation paths?

And what if I were instead to use torch.stack on a list of the tensors obtained in the loop, roughly as sketched below?
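
For concreteness, the stack variant I have in mind would look roughly like this (collecting the intermediate results in a Python list instead of writing into a pre-allocated tensor):

outs = [p]
for i in range(1, 50):
    outs.append(net(outs[i - 1]))  # same per-step computation as above
a = torch.stack(outs)  # shape [50, 10]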



Solution 1:[1]

As far as I can see from your first example, you can initialize a directly, without creating an intermediate tensor or using a for loop.
You can leverage torch.arange together with broadcasting to do so:

>>> a = p**torch.arange(50).view(-1, 1)
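
As a quick sanity check that gradients flow through this broadcasted expression (a sketch, assuming p is a leaf tensor created with requires_grad=True):

>>> p = torch.ones(10, requires_grad=True)
>>> a = p**torch.arange(50).view(-1, 1)  # row i equals p**i, shape [50, 10]
>>> a.sum().backward()
>>> p.grad.shape
torch.Size([10])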

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: Ivan