Workaround for tf.reshape breaking the flow of the gradient (Jacobian)

I have a program in which I'm trying to calculate the Jacobian of a neural network. To define the Jacobian properly I used tf.reshape to flatten the data into vectors (as far as I know, the Jacobian dy/dx is only defined when y and x are vectors, not matrices or tensors).

This is my code:

import tensorflow as tf

@tf.function
def A_calculator():
    with tf.GradientTape(watch_accessed_variables=False) as gtape:
        noise = tf.random.normal([1000, 100])
        gtape.watch(noise)
        fakenoise = tf.reshape(gen(noise), [1000, -1])
        reshaped_noise = tf.reshape(noise, [1000, -1])
    # calculate the Jacobian
    Jz = gtape.batch_jacobian(fakenoise, reshaped_noise)
    return Jz

where gen is a neural network (a generator) that returns an image.

My problem is that Jz is always a tensor whose elements are all zero. I searched for a solution, and the closest thing was here (this is what made me suspect that the problem is tf.reshape), but the solution there doesn't solve my problem, since I want to reshape after I pass the value to the function gen. Does anybody know how to solve this, or why Jz always comes out as a tensor of zeros?



Solution 1:[1]

Reshaping every tensor is unnecessary: reshaping a (1000, 100) tensor to (1000, -1) yields the same shape. Skip the reshaping altogether, at every stage.
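Below is a minimal sketch of that advice. The real gen is not shown in the question, so a small Dense stand-in is assumed here; the point is to watch the tensor that gen actually consumes and to differentiate with respect to that same tensor. The likely reason Jz comes out as zeros in the original code is that batch_jacobian is taken with respect to reshaped_noise, a sibling copy of noise that fakenoise never flows through, so there is no gradient path between them.

import tensorflow as tf

# Hypothetical stand-in for the question's generator: maps a 100-dim
# noise vector to a flat 784-dim "image".
gen = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(784),
])

@tf.function
def A_calculator():
    noise = tf.random.normal([1000, 100])
    with tf.GradientTape(watch_accessed_variables=False) as gtape:
        gtape.watch(noise)      # watch the tensor gen actually consumes
        fakenoise = gen(noise)  # already [1000, 784]; no reshape needed
    # Differentiate with respect to the watched tensor itself, so the
    # path noise -> gen -> fakenoise stays intact.
    return gtape.batch_jacobian(fakenoise, noise)

Jz = A_calculator()
print(Jz.shape)  # (1000, 784, 100)

If the generator returns a non-flat image (e.g. shape [1000, 28, 28, 1]), flattening its output with tf.reshape(gen(noise), [1000, -1]) before calling batch_jacobian is fine; only the source argument must be the watched tensor itself.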

Separately, check the generator: it could be taking a lot of time to produce the "fakenoise".

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: TFer