CUDA error when using a spaCy 3 GPU model, but CUDA works fine with PyTorch on the same system
I'm running a Docker container with GPU support enabled (nvidia-docker), with CUDA 11.3 and CuPy installed.
I can use PyTorch models with no issue, and I can see that my GPU is being used as it should be.
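For example, a quick check like this works as expected (a hypothetical session, the output is what I see on my setup):

>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.zeros(3).cuda()
tensor([0., 0., 0.], device='cuda:0')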
In Python, spaCy loads fine, the GPU seems to be activated, and I can load models with no problem:
>>> import spacy
>>> spacy.require_gpu()
True
>>> spacy.load("en_core_web_trf")
<spacy.lang.en.English object at 0x7efb7ddd6fd0>
But when I try to actually use these models to analyze some text, I get this error:
CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
From what I've read in other posts, this error is supposed to appear when you run out of memory, but in this case it happens immediately, and the GPU stays at 0% usage as far as I can tell.
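In case it helps, the failing call is as simple as this (hypothetical input text, any string fails the same way):

>>> nlp = spacy.load("en_core_web_trf")
>>> nlp("Some text to analyze")

The error is raised on that last line.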
Here is the full exception I get: https://pastebin.com/x8kECw7p.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow