How to save checkpoints for the GPT-2 transformer to continue training?

I am retraining the GPT-2 language model, following this blog:

https://towardsdatascience.com/train-gpt-2-in-your-own-language-fc6ad4d60171

Here, they train a network based on GPT-2, and I am trying to recreate the same setup. However, my dataset is large (250 MB), so I want to train in intervals. In other words, I want to checkpoint the model during training. Any help, or a piece of code I can use to checkpoint and resume training, would be greatly appreciated. Thank you.
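The resume-in-intervals idea boils down to periodically persisting everything needed to continue (the step counter plus the learnable state) and reloading it on the next run. Below is a minimal, library-free sketch of that pattern; the dict-of-floats "weights", the `checkpoint.pkl` file name, and the toy update step are stand-ins I made up for illustration. In an actual GPT-2 run you would save `model.state_dict()` and `optimizer.state_dict()` with `torch.save` instead of `pickle`, but the save/load/resume logic is the same.

```python
import os
import pickle

CKPT_PATH = "checkpoint.pkl"  # stand-in path; any writable location works

def save_checkpoint(step, weights, path=CKPT_PATH):
    """Persist everything needed to resume: step counter and parameters."""
    with open(path, "wb") as f:
        pickle.dump({"step": step, "weights": weights}, f)

def load_checkpoint(path=CKPT_PATH):
    """Return the saved state, or a fresh state if no checkpoint exists yet."""
    if not os.path.exists(path):
        return {"step": 0, "weights": {"w": 0.0}}
    with open(path, "rb") as f:
        return pickle.load(f)

def train(total_steps, save_every=3):
    state = load_checkpoint()            # resume where the last run stopped
    for step in range(state["step"], total_steps):
        state["weights"]["w"] += 0.1     # stand-in for one optimizer step
        state["step"] = step + 1
        if state["step"] % save_every == 0:
            save_checkpoint(state["step"], state["weights"])
    save_checkpoint(state["step"], state["weights"])  # final save
    return state

# Simulate an interrupted run: train for 5 steps, "restart", continue to 10.
first = train(total_steps=5)
resumed = train(total_steps=10)
print(first["step"], resumed["step"])  # 5 10
```

Saving the step counter and the weights in one file means a resumed run can never see a step count that does not match its parameters. The same structure carries over directly to PyTorch: put `model.state_dict()`, `optimizer.state_dict()`, and the epoch/step into one dict, `torch.save` it, and `torch.load` plus `load_state_dict` on restart.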



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow