How to clear GPU memory WITHOUT restarting runtime in Google Colaboratory (TensorFlow)

I want to run hyperparameter tuning for a Neural Style Transfer algorithm, which means I have a for-loop in which my model outputs an image generated with a different set of hyperparameters on each iteration.

It runs in Google Colaboratory using the GPU runtime. At some point I get an error saying that my GPU memory is almost full, and then the program stops.

So I was wondering whether there is a way to clear or reset the GPU memory after some specific number of iterations, so that the program can terminate normally (going through all the iterations in the for-loop, not just e.g. 1500 of 3000 before the GPU memory fills up).

I already tried this piece of code, which I found somewhere online:

# Reset Keras Session
import gc
import tensorflow as tf
# On TensorFlow 2.x use tf.compat.v1.ConfigProto, tf.compat.v1.Session and
# tf.compat.v1.keras.backend for the session helpers imported below.
from keras.backend.tensorflow_backend import get_session, set_session, clear_session

def reset_keras():
    sess = get_session()   # grab the TF session currently backing Keras
    clear_session()        # drop the Keras graph and state
    sess.close()           # close the old session so its GPU memory can be released
    sess = get_session()

    try:
        global classifier  # the model variable lives in global space - change the name as you need
        del classifier
    except NameError:
        pass

    #print(gc.collect()) # if it's done something you should see a number being outputted

    # use the same config as you used to create the session
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 1
    config.gpu_options.visible_device_list = "0"
    set_session(tf.Session(config=config))
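
For context, here is a minimal sketch of how such a periodic reset could be wired into the tuning loop. The run_style_transfer function and the hyperparameter grid are hypothetical stand-ins for the actual Neural Style Transfer code, and the interval of 100 iterations is arbitrary:

def run_style_transfer(style_weight, content_weight):
    # placeholder for the actual NST optimization that returns a generated image
    ...

hyperparameter_grid = [
    {"style_weight": w, "content_weight": 1e3}
    for w in (1e-2, 1e-1, 1e0)
]

for i, params in enumerate(hyperparameter_grid):
    image = run_style_transfer(**params)
    # save or display `image` here, before it goes out of scope

    # periodically tear down the session so GPU memory does not keep accumulating
    if (i + 1) % 100 == 0:
        reset_keras()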


Solution 1:[1]

You can run the command "!nvidia-smi" inside a notebook cell to find the process id of the process occupying the GPU, and then kill it with "!kill process_id". Also try using simpler data structures, like dictionaries and vectors.

If you are using PyTorch, call torch.cuda.empty_cache().
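
As a rough sketch of that workflow in notebook cells (the PID 1234 below is a placeholder for whatever nvidia-smi actually reports):

# Colab cell: list GPU processes and how much memory each one holds
!nvidia-smi

# Colab cell: kill the process occupying the GPU (replace 1234 with the PID shown above);
# if the PID belongs to the notebook's own Python kernel, killing it effectively restarts the runtime
!kill 1234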

Solution 2:[2]

If you're using torch, torch.cuda.empty_cache() would be the way to go; run nvidia-smi first to check whether you can simply kill the offending process directly instead.
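
For completeness, a minimal PyTorch sketch of that cleanup (the generated_image tensor is just a stand-in for whatever large objects the loop produces). Note that empty_cache() can only release memory that is no longer referenced from Python, which is why the del and gc.collect() come first:

import gc
import torch

# stand-in for a large intermediate result, e.g. a generated image
generated_image = torch.randn(3, 512, 512, device="cuda")

del generated_image       # drop the Python reference first
gc.collect()              # make sure the object is actually collected
torch.cuda.empty_cache()  # return PyTorch's cached GPU blocks to the driver

After this, nvidia-smi should report the cached memory as freed.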

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: M.Innat
Solution 2: Ari