Load ONNX model in OpenCV DNN
I get the following error while loading an ONNX model in C++ using the code below. What am I doing wrong? Please help me solve this problem. (OpenCV version: 4.5.3 in both C++ and Python)
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.1.0) /home/Downloads/opencv-4.1.0/modules/dnn/src/dnn.cpp:524: error: (-2:Unspecified error) Can't create layer "StatefulPartitionedCall/inception_resnet_v1/AvgPool/Mean_Squeeze__1416:0" of type "Squeeze" in function 'getLayerInstance'
Aborted (core dumped)
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/dnn/dnn.hpp>
using namespace std;
using namespace cv;
using namespace dnn;
int main()
{
    // load the neural network model
    cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");
}
I got model.onnx using the following conversion command: python3 -m tf2onnx.convert --saved-model /savedmodel/model --output model.onnx
Using the following code in Python, I can load the model:
import onnx
onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model)
But using cv2.dnn in Python, I get the following error:
#net = cv2.dnn.readNet('model.onnx')
net = cv2.dnn.readNetFromONNX('model.onnx')
[ERROR:0] global /tmp/pip-req-build-tjxnaiom/opencv/modules/dnn/src/onnx/onnx_importer.cpp (2125) handleNode DNN/ONNX: ERROR during processing node with 3 inputs and 1 outputs: [Concat]:(StatefulPartitionedCall/inception_resnet_v1/Mixed_6a/concat:0)
Traceback (most recent call last):
  File "./use_onnx_in_cv2.py", line 36, in <module>
    net = cv2.dnn.readNetFromONNX('model.onnx')
cv2.error: OpenCV(4.5.3) /tmp/pip-req-build-tjxnaiom/opencv/modules/dnn/src/onnx/onnx_importer.cpp:2146: error: (-2:Unspecified error) in function 'handleNode'
Node [Concat]:(StatefulPartitionedCall/inception_resnet_v1/Mixed_6a/concat:0) parse error: OpenCV(4.5.3) /tmp/pip-req-build-tjxnaiom/opencv/modules/dnn/src/layers/concat_layer.cpp:102: error: (-201:Incorrect size of input array) Inconsistent shape for ConcatLayer in function 'getMemoryShapes'
Solution 1:[1]
If you're using TF2 and your weights are in .h5 form, you can get rid of the ONNX troubles altogether: generate a .pb (frozen graph) from your .h5 and use that in your C++ program instead. To generate the .pb, use the following code; after that you will be able to import the model with OpenCV (a loading sketch follows the code below) and enjoy!
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import load_model
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Path of the directory where you want to save your frozen model
frozen_out_path = '/path.../freez/'
# Name of the .pb file
frozen_graph_filename = "freez_graph_6cls_try1"  # e.g. "frozen_graph"

# Load your Keras model
model = load_model("/cls_vgg16_6_cl.h5")
# model = ""  # Your model

# Convert the Keras model to a ConcreteFunction
full_model = tf.function(lambda x: model(x))
full_model = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Get the frozen ConcreteFunction
frozen_func = convert_variables_to_constants_v2(full_model)
frozen_func.graph.as_graph_def()

# Inspect the frozen graph
layers = [op.name for op in frozen_func.graph.get_operations()]
print("-" * 60)
print("Frozen model layers: ")
for layer in layers:
    print(layer)
print("-" * 60)
print("Frozen model inputs: ")
print(frozen_func.inputs)
print("Frozen model outputs: ")
print(frozen_func.outputs)

# Save the frozen graph to disk
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir=frozen_out_path,
                  name=f"{frozen_graph_filename}.pb",
                  as_text=False)

# Save its text representation
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir=frozen_out_path,
                  name=f"{frozen_graph_filename}.pbtxt",
                  as_text=True)
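For completeness, here is a minimal C++ sketch of loading the resulting frozen graph with OpenCV. It assumes the .pb is named freez_graph_6cls_try1.pb as in the script above; the test image name test.jpg, the 224x224 input size, and the VGG-style BGR mean subtraction are placeholders you should adapt to your own model's preprocessing. The Python equivalent is cv2.dnn.readNetFromTensorflow.

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn/dnn.hpp>

int main()
{
    // Load the frozen TensorFlow graph written by tf.io.write_graph above
    cv::dnn::Net net = cv::dnn::readNetFromTensorflow("freez_graph_6cls_try1.pb");

    // Preprocessing (224x224, VGG-style BGR mean) is an assumption here;
    // match it to whatever your Keras model expects
    cv::Mat img = cv::imread("test.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224),
                                          cv::Scalar(103.939, 116.779, 123.68),
                                          /*swapRB=*/false, /*crop=*/false);

    net.setInput(blob);
    cv::Mat out = net.forward();
    std::cout << "Output elements: " << out.total() << std::endl;
    return 0;
}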
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
