RuntimeError: Found dtype Double but expected Float with Faster RCNN - PyTorch
I am new to deep learning. When I use Faster RCNN on Kaggle to detect objects in my own dataset, I get this error on the line losses.backward(). I have tried image.to(torch.float32) and targets.to(torch.float32) (a rough sketch of that attempt appears after the training loop below), but it did not work, and I have no idea why. Here is my code; please help, and thanks a lot.
loss_hist = Averager()
itr = 1

for epoch in range(num_epochs):
    loss_hist.reset()

    for images, targets, image_ids in train_data_loader:
        images = list(image.to(device) for image in images)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

        loss_dict = frcnn(images, targets)
        losses = sum(loss for loss in loss_dict.values())
        loss_value = losses.item()

        loss_hist.send(loss_value)

        optimizer.zero_grad()
        losses.backward()
        optimizer.step()

        if itr % 50 == 0:
            print(f"Iteration #{itr} loss: {loss_value}")

        itr += 1

    # update the learning rate
    if lr_scheduler is not None:
        lr_scheduler.step()

    print(f"Epoch #{epoch} loss: {loss_hist.value}")
and here is the error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_1418/2816814501.py in <module>
17
18 optimizer.zero_grad()
---> 19 losses.backward()
20 optimizer.step()
21
/opt/conda/lib/python3.7/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
253 create_graph=create_graph,
254 inputs=inputs)
--> 255 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
256
257 def register_hook(self, hook):
/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
147 Variable._execution_engine.run_backward(
148 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
150
151
RuntimeError: Found dtype Double but expected Float
Here is my input
print(images[1])
tensor([[[0.8353, 0.8392, 0.8431, ..., 0.1569, 0.1608, 0.1569],
[0.8392, 0.8431, 0.8431, ..., 0.1686, 0.1686, 0.1647],
[0.8471, 0.8510, 0.8510, ..., 0.1804, 0.1765, 0.1686],
...,
[0.3804, 0.3843, 0.3922, ..., 0.9137, 0.9137, 0.9137],
[0.3882, 0.3922, 0.4039, ..., 0.9098, 0.9098, 0.9176],
[0.4000, 0.4078, 0.4118, ..., 0.9020, 0.9020, 0.9059]]],
device='cuda:0')
print(targets[1])
{'boxes': tensor([[2668.2204, 1033.8983, 2932.6270, 1247.4576]], device='cuda:0',
dtype=torch.float64), 'labels': tensor([2], device='cuda:0'), 'image_id': tensor([52], device='cuda:0'), 'area': tensor([56466.4922], device='cuda:0'), 'iscrowd': tensor([0], device='cuda:0')}
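Note that the 'boxes' tensor above prints with dtype=torch.float64 (Double), which matches the dtype named in the error. If the target were built with float32 boxes inside the dataset's __getitem__, it might look something like this (a sketch with assumed variable names, since my dataset code is not shown here):

import numpy as np
import torch

# NumPy arrays default to float64, so an explicit dtype is needed when converting
boxes = np.array([[2668.2204, 1033.8983, 2932.6270, 1247.4576]])
target = {
    'boxes': torch.as_tensor(boxes, dtype=torch.float32),   # Float instead of Double
    'labels': torch.as_tensor([2], dtype=torch.int64),      # class labels stay int64
    'image_id': torch.tensor([52]),
    'area': torch.as_tensor([56466.4922], dtype=torch.float32),
    'iscrowd': torch.zeros((1,), dtype=torch.int64),
}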
Here is my Averager class, which I use to track the loss:
class Averager:
    def __init__(self):
        self.current_total = 0.0
        self.iterations = 0.0

    def send(self, value):
        self.current_total += value
        self.iterations += 1

    @property
    def value(self):
        if self.iterations == 0:
            return 0
        else:
            return 1.0 * self.current_total / self.iterations

    def reset(self):
        self.current_total = 0.0
        self.iterations = 0.0
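For reference, Averager just keeps a running mean of the scalar loss values passed to send(), for example:

loss_hist = Averager()
loss_hist.send(2.0)
loss_hist.send(4.0)
print(loss_hist.value)   # 3.0
loss_hist.reset()
print(loss_hist.value)   # 0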
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow