'CUDA error: device-side assert triggered' on Colab

I am trying to initialize a tensor on Google Colab with GPU enabled.

import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

t = torch.tensor([1,2], device=device)

But I am getting this strange error.

RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1

Even setting that environment variable to 1 does not seem to show any further details.
Has anyone ever had this issue?



Solution 1:[1]

I tried your code and it did not give me an error, but in my experience the best practice for debugging a CUDA runtime error like device-side assert triggered is to switch Colab to the CPU and recreate the error there. It will give you a much more useful traceback.

Most of the time, these CUDA runtime errors are caused by some kind of index mismatch, for example training a network with 10 output nodes on a dataset with 15 labels. The tricky thing about this CUDA error is that once you get it once, you will receive it for every subsequent operation on torch tensors, which forces you to restart your notebook.

I suggest you restart your notebook, get a more accurate traceback by moving to the CPU, and check the rest of your code, especially anywhere you train a model on a set of targets.
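
For illustration, here is a minimal sketch of the kind of index mismatch described above, run on the CPU so the failure message stays readable (the layer sizes and labels are made up for this example):

import torch
import torch.nn as nn

# A network with 10 output classes, but the batch contains label 14:
# on the GPU this surfaces as "device-side assert triggered"; on the CPU
# the same code raises a readable error naming the offending target.
model = nn.Linear(8, 10)                      # 10 output nodes
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(4, 8)
targets = torch.tensor([1, 3, 14, 2])         # 14 is out of range for 10 classes

try:
    loss = criterion(model(inputs), targets)
except Exception as e:
    print(type(e).__name__, e)                # clear message instead of a CUDA assert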

Solution 2:[2]

As the other respondents indicated, running it on the CPU reveals the error. My target labels were {1, 2}; I changed them to {0, 1} and that solved it for me.
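
A minimal sketch of that remapping (the variable names are just illustrative):

import torch

# Labels stored as {1, 2}; a 2-class CrossEntropyLoss expects {0, 1},
# so shift them down before training.
targets = torch.tensor([1, 2, 2, 1])
targets = targets - 1
print(targets)    # tensor([0, 1, 1, 0])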

Solution 3:[3]

1st time:

I got the same error while using the simpletransformers library to fine-tune a transformer-based model for a multi-class classification problem. simpletransformers is a library written on top of the transformers library.

I changed my labels from string representations to numbers and it worked.
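
Roughly along these lines, assuming the labels start out as plain strings (the label names here are invented):

# Map string class names to integer ids before handing the data to the model.
labels = ["positive", "negative", "neutral", "positive"]

label2id = {name: i for i, name in enumerate(sorted(set(labels)))}
numeric_labels = [label2id[name] for name in labels]

print(label2id)         # {'negative': 0, 'neutral': 1, 'positive': 2}
print(numeric_labels)   # [2, 0, 1, 2]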

2nd time:

I faced the same error again while training another transformer-based model with the transformers library, this time for text classification. I had 4 labels in the dataset, named 0, 1, 2, and 3, but the last layer (a Linear layer) of my model class had only two neurons, nn.Linear(*, 2), which I had to replace with nn.Linear(*, 4) because I had four labels in total.
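
Roughly like this, where the hidden size is just an assumed value for illustration:

import torch.nn as nn

num_labels = 4        # labels 0, 1, 2, 3 in the dataset
hidden_size = 768     # assumed size of the features feeding the classifier

# Wrong: only 2 output neurons, so labels 2 and 3 trigger the device-side assert
# classifier = nn.Linear(hidden_size, 2)

# Right: one output neuron per label
classifier = nn.Linear(hidden_size, num_labels)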

Solution 4:[4]

I am a filthy casual coming from the VQGAN+CLIP "AI art" community. I get this error when I already have a session running in another tab. Killing all sessions from the session manager clears it up and lets you connect in the new tab, which is nice if you have fiddled with a lot of settings you don't want to lose.

Solution 5:[5]

I also encountered this problem and found the reason: my vocabulary size was 8000, but the embedding layer in my model was sized for only 5000 tokens.
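
In other words, the embedding table needs at least as many rows as there are token ids; a rough sketch (the embedding dimension here is an arbitrary choice):

import torch
import torch.nn as nn

vocab_size = 8000

# Wrong: token ids up to 7999 index past a 5000-row embedding table,
# which triggers the device-side assert on the GPU.
# embedding = nn.Embedding(num_embeddings=5000, embedding_dim=256)

# Right: cover every token id in the vocabulary.
embedding = nn.Embedding(num_embeddings=vocab_size, embedding_dim=256)

token_ids = torch.tensor([[0, 42, 7999]])    # highest id must be < num_embeddings
print(embedding(token_ids).shape)            # torch.Size([1, 3, 256])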

Solution 6:[6]

In some cases, it may be due to forgetting to add a sigmoid activation before you send the logits to BCE loss.
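
A small sketch of the two options (apply a sigmoid before nn.BCELoss, or pass the raw logits to nn.BCEWithLogitsLoss instead):

import torch
import torch.nn as nn

logits = torch.tensor([2.3, -1.7, 0.4])      # raw model outputs
targets = torch.tensor([1.0, 0.0, 1.0])

# nn.BCELoss expects probabilities in [0, 1], so apply a sigmoid first...
loss = nn.BCELoss()(torch.sigmoid(logits), targets)

# ...or use BCEWithLogitsLoss, which takes logits directly and is more stable.
loss = nn.BCEWithLogitsLoss()(logits, targets)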

Hope it can help :P

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 thepurpleowl
Solution 2 tschomacker
Solution 3
Solution 4 Wilson Westbrook
Solution 5 yun li
Solution 6 Tiffany Zhao