Exception occurs in the for statement itself

I have an exception being raised by a for statement:

for _, data in enumerate(dataloader, 0):

The exception is raised not in the body of the loop, but by the for statement itself (i.e., while fetching the next item). How do I catch it and continue iterating?

Here is the entire error trace:

Traceback (most recent call last):
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/reprex/run_DL.py", line 67, in <module>
    ut.generate_validation_model(cfg)
  File "/panfs/roc/groups/4/miran045/reine097/projects/AlexNet_Abrol2021/reprex/utils.py", line 227, in generate_validation_model
    loss = train(trainloader, net, optimizer, criterion, cfg.cuda_avl)
  File "/panfs/roc/groups/4/miran045/reine097/projects/AlexNet_Abrol2021/reprex/utils.py", line 96, in train
    for _, data in enumerate(dataloader, 0):
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 84, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 84, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 64, in default_collate
    return default_collate([torch.as_tensor(b) for b in batch])
  File "/home/miran045/reine097/projects/AlexNet_Abrol2021/venv/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [1, 208, 300, 320] at entry 0 and [1, 320, 300, 208] at entry 13

The error occurs on this line:

  File "/panfs/roc/groups/4/miran045/reine097/projects/AlexNet_Abrol2021/reprex/utils.py", line 96, in train
    for _, data in enumerate(dataloader, 0):
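A for-loop cannot catch an exception raised while producing the next item, because that happens inside the implicit `next()` call, outside the loop body. One way around this is to iterate manually and wrap `next()` in try/except. The sketch below uses a stand-in iterator class in place of the DataLoader (PyTorch isn't assumed available), and it assumes skipping a failed batch is acceptable; note the underlying error here is tensors with permuted shapes ([1, 208, 300, 320] vs. [1, 320, 300, 208]), so fixing the dataset/transform to emit consistently shaped tensors is the real cure.

```python
class Batches:
    """Stand-in for the DataLoader: yields 1..5, but fails on item 3."""
    def __init__(self):
        self.i = 0
    def __iter__(self):
        return self
    def __next__(self):
        self.i += 1
        if self.i > 5:
            raise StopIteration
        if self.i == 3:
            # simulates: "stack expects each tensor to be equal size"
            raise RuntimeError("bad batch")
        return self.i

good = []
it = iter(Batches())
while True:
    try:
        data = next(it)
    except StopIteration:
        break                       # normal end of iteration
    except RuntimeError as e:
        print(f"skipping batch: {e}")
        continue                    # move on to the next batch
    good.append(data)               # body of the original for-loop goes here

print(good)  # the failed batch (3) was skipped
```

One caveat: this pattern works when the iterator survives the exception, as the class above does. A multi-worker PyTorch DataLoader iterator may be left in an unusable state after a worker error, in which case catching the exception lets you log it and stop cleanly, but not necessarily continue.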


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
