"One of the variables needed for gradient computation has been modified by an inplace operation": what is meant by the versions in the error message?

I got the following error while using .backward():

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [84, 1]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
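
From what I can tell, autograd keeps an internal version counter on every tensor that is bumped each time the tensor is modified in place, and it remembers the counter value at the moment a tensor is saved for the backward pass. Here is a small standalone snippet, separate from my training code, that seems to reproduce the same class of error (it peeks at `_version`, which I understand is an internal attribute, so this is only for illustration):

    import torch

    x = torch.ones(3, requires_grad=True)
    a = x + 1            # non-leaf tensor, version counter starts at 0
    b = a * a            # MulBackward0 saves `a` (and its current version) for the backward pass
    print(a._version)    # 0
    a.add_(1)            # in-place op bumps a's version counter
    print(a._version)    # 1
    b.sum().backward()   # RuntimeError: ... is at version 1; expected version 0 instead

If that reading is right, "is at version 2; expected version 1" would mean the saved tensor was modified in place one more time after it was saved for backward.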

My code:

    for epoch in range(start_epoch, epochs):
        for i, data in enumerate(loader, start=epoch_iter):
            total_steps += batch_size
            epoch_iter += batch_size
            source_list, target_list, face_swapped_list = data

            real = target_list

            # Training Discriminator
            fake = gen(source_list, face_swapped_list)
            disc_real = disc(real).reshape(-1)
            lossD_real = criterion(disc_real, torch.ones_like(disc_real))    # loss in wrongly classifying real images
            # disc_fake = disc(fake)
            disc_fake = disc(fake.detach()).reshape(-1)
            lossD_fake = criterion(disc_fake, torch.zeros_like(disc_fake))   # loss in wrongly classifying fake images
            Loss_D = (lossD_real + lossD_fake) / 2

            # Training Generator
            output = disc(fake).reshape(-1)
            lossG = criterion(output, torch.ones_like(output))

            p_loss = []
            f_loss = []
            for i in range(0, batch_size):
                real_image = target_list[i]
                fake_image = fake[i]
                p_loss.append(pixel_level_loss(real_image, fake_image))
                f_loss.append(feature_level_loss(real_image, fake_image))

            pixel_loss = np.mean(p_loss)
            feature_loss = np.mean(f_loss)

            Loss_G = pixel_loss + 10.0 * feature_loss + 1.0 * lossG

            # backward pass
            with torch.autograd.set_detect_anomaly(True):
                disc.zero_grad()
                Loss_D.backward(retain_graph=True)
                opt_disc.step()

            with torch.autograd.set_detect_anomaly(True):
                gen.zero_grad()
                Loss_G.backward()
                opt_gen.step()

The error is raised at Loss_G.backward(). What do "version 2" and "version 1" mean in the error message? Any suggestions on how to find the layer of the network in which the error is occurring, and how to rectify it?
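
My current guess, which may well be wrong and is part of why I am asking, is that opt_disc.step() updates the discriminator's parameters in place, while Loss_G still depends on output = disc(fake), which was computed with the pre-step parameter values. Below is a sketch of the reordering I am considering, reusing fake, pixel_loss, feature_loss and criterion from the loop above; I have not verified it is the intended fix:

    # update the discriminator first, as before
    disc.zero_grad()
    Loss_D.backward(retain_graph=True)
    opt_disc.step()

    # re-run the discriminator on the fake batch *after* its weights were updated,
    # so that Loss_G's graph no longer refers to the old parameter values
    output = disc(fake).reshape(-1)
    lossG = criterion(output, torch.ones_like(output))
    Loss_G = pixel_loss + 10.0 * feature_loss + 1.0 * lossG

    gen.zero_grad()
    Loss_G.backward()
    opt_gen.step()

Is that the right way to think about the version numbers, or is the in-place modification coming from somewhere else in the model?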



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
