Why are the accuracy and loss numbers different from the training phase when I load a trained model?

Normally I use the code torch.save(model.state_dict(), 'test.pth') to save the best model based on its performance on the validation set.
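A minimal sketch of that save-best/reload workflow, assuming a toy `nn.Linear` model and a placeholder validation loss (the real training and validation steps are elided):

```python
import torch
import torch.nn as nn

# Toy model standing in for the real network (hypothetical).
model = nn.Linear(4, 2)

best_val_loss = float('inf')
for epoch in range(3):
    # ... training step would go here ...
    val_loss = 1.0 / (epoch + 1)  # placeholder for the real validation loss
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        # Save only the parameters, as in the question.
        torch.save(model.state_dict(), 'test.pth')

# Later: restore the best checkpoint before evaluating.
model.load_state_dict(torch.load('test.pth'))
model.eval()  # switch off dropout / batch-norm running updates for evaluation
```

Note that the saved checkpoint is the one with the lowest *validation* loss so far, which need not be the model from the last epoch.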

In the training phase, I print the loss and accuracy in the last epoch and get Loss: 0.38703016219139097 and Accuracy: 86.9.

However, when I load the model I just got from the training phase and print the loss and accuracy, I get the same accuracy but a different loss: 0.38702996191978456.

Why does that happen? I have tried different datasets and neural networks, but I get the same result.



Solution 1:[1]

If I've understood correctly, at the end of each epoch you print the training accuracy/loss, and you also save the model if it beats the current best model on the validation set. Is that it?

Because if my understanding of the situation is correct, then this is perfectly normal. Your "best" model with regard to the TRAINING accuracy/loss is under no obligation to also be the best with regard to the VALIDATION accuracy/loss. (One of the clearest examples of this is the overfitting phenomenon.)
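A tiny numeric illustration of that point, using made-up per-epoch loss curves:

```python
# Hypothetical losses: training loss keeps falling, while validation
# loss bottoms out early and then rises (overfitting).
train_losses = [0.90, 0.60, 0.45, 0.38, 0.30]
val_losses   = [0.85, 0.62, 0.55, 0.58, 0.66]

best_train_epoch = min(range(len(train_losses)), key=lambda i: train_losses[i])
best_val_epoch = min(range(len(val_losses)), key=lambda i: val_losses[i])

# The checkpoint saved at best_val_epoch (epoch 2 here) is not the model
# from the last epoch, so its metrics differ from the last printed ones.
```

Here the best training loss occurs in the final epoch, but the checkpoint that gets saved is from epoch 2, so reloading it and re-evaluating naturally reproduces that epoch's numbers, not the last epoch's.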

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 desertnaut