Model returns only NaN values on RTX A5000 but not on GTX 1080 Ti
I have replaced a GTX 1080 Ti graphics card with an RTX A5000 in a desktop machine and reinstalled Ubuntu, upgrading from 16.04 to 20.04 to meet the card's requirements. But now I can't retrain or predict with our current model: when loading the model, Keras hangs for a very long time, and all predicted results are NaN values. We use Keras 2.2.4 with TensorFlow 2.1.0 and CUDA 10.1.243, which I installed using Conda, and I have tried different drivers.
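For reference, here is a quick sanity check I can run (a minimal sketch using TF 2.1's public API; `tf.config.list_physical_devices` is non-experimental as of 2.1) to confirm that this TensorFlow build was compiled with CUDA and actually sees the new card:

```python
# Sanity check: was this TensorFlow build compiled with CUDA, and does it
# detect the new GPU at all? (Sketch assuming TF 2.1's public API.)
import tensorflow as tf

print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```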
If I put the old GTX 1080 Ti card back into the machine, the code works fine.
Any idea what could be wrong? Could it be that the A5000 does not support the same models as the older 1080 Ti card?
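To rule out the model itself, a minimal repro along these lines (a sketch; the device string `/GPU:0` assumes a single visible GPU) would show whether even a trivial op already produces NaN on the new card:

```python
# Minimal repro: run a trivial matmul on the GPU and check for NaN.
# If this already yields NaN, the problem lies in the CUDA toolchain or
# driver rather than in the saved model. (Sketch assuming one GPU.)
import numpy as np
import tensorflow as tf

with tf.device("/GPU:0"):
    x = tf.random.normal((1024, 1024))
    y = tf.matmul(x, x)

print("NaN in plain matmul output:", bool(np.isnan(y.numpy()).any()))
```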
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow

