Restart Colab kernel after each iteration while training neural network

I am building a character-level recurrent sequence-to-sequence model and trying to tune the number of epochs and the batch size:

    for epoch in [50, 100]:
        for batch_size in [8, 32, 64]:
            history = model.fit(
                [encoder_input_data, decoder_input_data],
                decoder_target_data,
                batch_size=batch_size,
                epochs=epoch,
                validation_split=0.2,
                verbose=0,
            )
        print("--------------------------------------------------------------------")
        print(f"Training model with epochs {epoch} and batch size {batch_size}")
        print(f"Validation Accuracy with epochs: {epoch} and batch size {batch_size} is : {np.average(history.history['val_categorical_accuracy'])}")
        plt.plot(history.history['loss'])
        plt.plot(history.history['val_loss'])
        plt.title('Loss vs Epochs')
        plt.ylabel('Loss')
        plt.xlabel('Epoch')
        plt.legend(['Training Loss', 'Validation Loss'], loc='upper left')
        plt.show()

I am getting the plots below for the two iterations that I ran.

[Plot: Loss vs Epochs for the two runs]

I don't understand the second plot; it should look like the first plot, just with different values. Is there some data leakage, or do I have to reset the kernel after every iteration? If so, how do I do that?
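One likely explanation (an assumption, since the model-construction code is not shown above): in Keras, `model.fit` does not reset the weights, so each call in the loop resumes training from wherever the previous call stopped. Rebuilding the model inside the loop, rather than restarting the kernel, would give each hyperparameter combination a fresh start. A minimal sketch, where `build_model()` is a hypothetical stand-in for whatever code constructs and compiles the actual seq2seq model:

```python
import tensorflow as tf

def build_model():
    # Hypothetical stand-in for the code that builds and compiles
    # the character-level seq2seq model; a fresh call returns a
    # model with newly initialized (untrained) weights.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(4, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

for epoch in [50, 100]:
    for batch_size in [8, 32, 64]:
        tf.keras.backend.clear_session()  # release state from the previous run
        model = build_model()             # fresh, randomly initialized weights
        # history = model.fit(...)        # each fit now starts from scratch
```

With this structure, no kernel restart is needed; every (epoch, batch size) combination trains an independent model.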



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow