How can I use PyTorch to optimize an energy function over 2D arrays?
I want to use PyTorch to minimize an energy function. `psi` is an array containing two 2D arrays of the same size; `img`, `latent`, and `omega` are 2D arrays of that same size as well.
```python
class psiModel(nn.Module):
    def __init__(self, psi, img, latent, omega):
        super().__init__()
        weight_x = psi[0]
        weight_y = psi[1]
        pad_img = padding(img, 1)
        pad_lat = padding(latent, 1)
        self.img = torch.Tensor(pad_img)
        self.latent = torch.Tensor(pad_lat)
        self.omega = torch.Tensor(omega)
        self.sx = torch.Tensor([[1, 0, -1], [2, 0, -2], [1, 0, -1]])
        self.sy = torch.Tensor([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])
        eq_x = torch.zeros(size=img.shape)
        eq_y = torch.zeros(size=img.shape)
        for i in range(len(eq_x)):
            for j in range(len(eq_x[i])):
                eq_x[i][j] = eq(weight_x[i][j])
                eq_y[i][j] = eq(weight_y[i][j])
        self.eq_x = eq_x
        self.eq_y = eq_y
        self.weight_x = nn.Parameter(
            torch.Tensor(weight_x), requires_grad=True)
        self.weight_y = nn.Parameter(
            torch.Tensor(weight_y), requires_grad=True)

    def forward(self):
        psi_x = self.weight_x
        psi_y = self.weight_y
        lam1x = self.eq_x
        lam1x = torch.norm(lam1x, p=1)
        lam1y = self.eq_y
        lam1y = torch.norm(lam1y, p=1)
        lam2x = psi_x - conv2dt(self.img, self.sx)
        lam2x = torch.mul(lam2x, self.omega)
        lam2x = LAMBDA2 * torch.norm(lam2x).item() ** 2
        lam2y = psi_y - conv2dt(self.img, self.sy)
        lam2y = torch.mul(lam2y, self.omega)
        lam2y = LAMBDA2 * torch.norm(lam2y).item() ** 2
        gamx = psi_x - conv2dt(self.latent, self.sx)
        gamx = GAMMA * torch.norm(gamx).item() ** 2
        gamy = psi_y - conv2dt(self.latent, self.sy)
        gamy = GAMMA * torch.norm(gamy).item() ** 2
        ret = lam1x + lam1y + lam2x + lam2y + gamx + gamy
        return ret
```
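The helpers `padding`, `conv2dt`, and `eq` are not shown in the post. For reference, a hypothetical `conv2dt` that applies a 3×3 Sobel kernel to a 2D tensor could be built on `torch.nn.functional.conv2d` (the name and the "same"-size padding are my assumptions, not the post's actual helper):

```python
import torch
import torch.nn.functional as F

def conv2dt(img, kernel):
    """Hypothetical stand-in for the post's undefined conv2dt:
    2D cross-correlation of an (H, W) tensor with a (3, 3) kernel,
    keeping the output the same size via padding=1."""
    return F.conv2d(img[None, None], kernel[None, None], padding=1)[0, 0]

sx = torch.tensor([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])
img = torch.rand(5, 5)
print(conv2dt(img, sx).shape)  # torch.Size([5, 5])
```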
My code may look complex, so let me explain. `psi[0]` and `psi[1]` are what I want to modify. The whole formula I want to minimize (as computed in `forward`) is

E(psi_x, psi_y) = ||eq(psi_x)||_1 + ||eq(psi_y)||_1 + LAMBDA2 * ||omega ∘ (psi_x - sx * img)||^2 + LAMBDA2 * ||omega ∘ (psi_y - sy * img)||^2 + GAMMA * ||psi_x - sx * latent||^2 + GAMMA * ||psi_y - sy * latent||^2

where ∘ is element-wise multiplication and `sx * img` denotes convolving `img` with the Sobel kernel `sx`. `psi_x` and `psi_y` are the variables to be optimized; I think the other terms are not important for the PyTorch optimization. Anyway, I set `self.weight_x` and `self.weight_y` from `psi`, and in `forward` I use them to calculate the formula.
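A side note on the `forward` above (not part of the original post): calling `.item()` on a tensor returns a plain Python float, which detaches the value from the autograd graph, so gradients cannot flow back to `weight_x`/`weight_y` through those terms. A minimal illustration:

```python
import torch

w = torch.ones(3, requires_grad=True)

loss_tensor = torch.norm(w) ** 2        # stays in the autograd graph
loss_float = torch.norm(w).item() ** 2  # plain float: graph is lost here

loss_tensor.backward()
print(w.grad)  # tensor([2., 2., 2.]) -- gradient of ||w||^2 is 2w

# loss_float has no backward(); re-wrapping it as torch.Tensor([loss_float])
# creates a brand-new leaf tensor with no history to backpropagate through.
```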
```python
def training_loop_psi(model, optimizer, n=15):
    loss_fn = nn.MSELoss()
    for param in model.parameters():
        param.requires_grad_(True)
    for i in range(n):
        preds = model()
        loss = loss_fn(torch.Tensor([preds]), torch.Tensor([0]))
        loss.requires_grad_(True)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        loss.requires_grad_(False)
    ret = []
    for param in model.parameters():
        param.requires_grad_(False)
        ret.append(param.data)
    return ret
```
This is the function that runs the model. To minimize the formula, I compare `preds` with 0 using MSE loss. The optimization step needs to be called several times, so I turn off `requires_grad` after optimizing. However, I get the same results every time, so I think the optimization is not working. Is there any way to make my function work?
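For comparison, here is a minimal sketch of the usual pattern (not the original code: a simplified model with only the two LAMBDA2 data terms, random placeholder inputs, and an assumed LAMBDA2 value). The forward pass returns the energy as a tensor, and `backward()` is called on it directly instead of wrapping the result in `nn.MSELoss` against zero:

```python
import torch
import torch.nn as nn

LAMBDA2 = 0.5  # assumed weight; the post does not give its value

class PsiSketch(nn.Module):
    """Simplified stand-in: only the two data-fit terms, no conv helpers."""
    def __init__(self, psi_x, psi_y, grad_x, grad_y, omega):
        super().__init__()
        self.weight_x = nn.Parameter(psi_x.clone())
        self.weight_y = nn.Parameter(psi_y.clone())
        self.grad_x, self.grad_y, self.omega = grad_x, grad_y, omega

    def forward(self):
        # No .item(): each term stays a tensor, so the graph is preserved.
        ex = LAMBDA2 * torch.norm(self.omega * (self.weight_x - self.grad_x)) ** 2
        ey = LAMBDA2 * torch.norm(self.omega * (self.weight_y - self.grad_y)) ** 2
        return ex + ey

torch.manual_seed(0)
model = PsiSketch(torch.rand(4, 4), torch.rand(4, 4),
                  torch.rand(4, 4), torch.rand(4, 4), torch.ones(4, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

losses = []
for _ in range(50):
    optimizer.zero_grad()
    loss = model()   # the model output IS the loss tensor
    loss.backward()  # gradients flow into weight_x / weight_y
    optimizer.step()
    losses.append(loss.item())

print(losses[0], '->', losses[-1])  # the energy decreases
```

The key differences from the posted loop: no `nn.MSELoss`, no `torch.Tensor([preds])` re-wrapping (which creates a fresh tensor with no gradient history), and no `requires_grad_` toggling, since tensors created via `nn.Parameter` already track gradients.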
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
