ONNX to TensorRT on Jetson Nano with CUDA 10.0

I'm using a Jetson Nano to run the tensorrt_demos repo mentioned earlier.

But when I run python3 onnx_to_tensorrt.py -m yolov4-416, the following error comes up:

python3: ../builder/cudnnBuilderGraphNodes.cpp:229: virtual nvinfer1::builder::Format::Type nvinfer1::builder::PluginV2Node::getUniformFormats() const: Assertion f != Format::kNONE && "PluginNode supported formats do not match any expected format."' failed.

BTW, I'm using the Jetson Nano Developer Kit image, which already ships with TensorRT (libnvinfer.so.5.1.6), and my CUDA version is 10.0. Could it be that this CUDA version isn't supported by the repo?

The JetPack image itself installed CUDA and TensorRT at these versions, so I don't think the versions alone should be the origin of the error message.
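For what it's worth, this plugin-format assertion usually points at a TensorRT version mismatch rather than at CUDA: the yolov4 conversion in tensorrt_demos reportedly expects a newer TensorRT than the 5.1.6 bundled with this JetPack. A minimal sketch of a version gate you could add before building the engine (the function name and the 6.0 minimum are assumptions, not part of the repo):

```python
# Hedged sketch: gate the ONNX-to-TensorRT conversion on the installed
# TensorRT version string (e.g. tensorrt.__version__, which on this
# JetPack reports something like "5.1.6").

def tensorrt_supports_yolo_plugins(version: str, minimum=(6, 0)) -> bool:
    """Return True if the installed TensorRT is at least `minimum`
    (assumed threshold for the yolo plugin used by tensorrt_demos)."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

print(tensorrt_supports_yolo_plugins("5.1.6"))  # False: likely too old
print(tensorrt_supports_yolo_plugins("6.0.1"))  # True
```

If the check fails, upgrading to a newer JetPack release (which bundles a newer TensorRT) is the usual fix, since TensorRT cannot be upgraded independently on Jetson.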



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
