Loaded PyTorch model gives different results than originally trained model

I trained a PyTorch model, saved the testing error, and saved the complete model using torch.save(model, 'model.pt').
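Note that torch.save(model, …) pickles the entire module object, which ties the checkpoint to the exact class definition and environment at save time. A minimal sketch of the commonly recommended alternative, saving only the state_dict (the two-layer model here is just a stand-in for illustration):

```python
import torch
import torch.nn as nn

# Stand-in architecture; substitute your own model definition.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# Save only the learned parameters, not the pickled module object.
torch.save(model.state_dict(), 'model_state.pt')

# To load: rebuild the architecture, then restore the weights into it.
restored = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
restored.load_state_dict(torch.load('model_state.pt'))
restored.eval()  # disable dropout / batch-norm updates before testing
```

Because only tensors are stored, this form survives code refactors and device changes better than a fully pickled model.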

I loaded the model to test it on another dataset and found the error to be higher, so I re-evaluated it on the exact same dataset as before and found the results to be different:

[Image: comparison of predicted values from the original and reloaded model]

The predicted values differ only slightly, which tells me the model is essentially correct, but somehow different.

One difference is that the model was originally trained on GPUs with nn.DataParallel, while after loading I am evaluating it on the CPU.

model = torch.load('model.pt', map_location='cpu')  # map GPU-saved weights onto CPU
model = model.module  # unwrap the nn.DataParallel wrapper
model.eval()
with torch.no_grad():
    x, y = test_loader.dataset.tensors  # inputs and targets, kept on CPU
    pred = model(x)
    mae_loss = torch.nn.L1Loss(reduction='mean')
    mae = mae_loss(pred, y)
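For reference, when a checkpoint holds a state_dict saved from the nn.DataParallel wrapper itself, every parameter key carries a 'module.' prefix, which a plain model will refuse to load. A hedged sketch of loading such weights onto CPU (the helper name strip_module_prefix is my own, and the tiny linear model is only a stand-in):

```python
import torch
import torch.nn as nn

def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to parameter names."""
    return {k.removeprefix('module.'): v for k, v in state_dict.items()}

# Stand-in model wrapped the way it would have been during multi-GPU training.
model = nn.Linear(4, 2)
wrapped = nn.DataParallel(model)
torch.save(wrapped.state_dict(), 'dp_state.pt')

# Load onto CPU: map_location avoids needing CUDA, the prefix strip fixes key names.
plain = nn.Linear(4, 2)
state = torch.load('dp_state.pt', map_location='cpu')
plain.load_state_dict(strip_module_prefix(state))
plain.eval()
```

With identical weights, identical inputs, and model.eval() set, the same device should reproduce the saved predictions exactly; tiny discrepancies between GPU and CPU runs can still come from floating-point differences between the two backends.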

What could be causing this difference in model evaluation? Thank you in advance



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
