Dimension mismatch during Keras to ONNX conversion (2D output)
I am seeing a dimension mismatch when converting a Keras model to ONNX. I saved the model as a .h5 file, and it can be saved and loaded again without problems. However, after converting it to ONNX, the reported output dimensions differ.
I suspect this happens because of the 2D (per-sample) output, since one of my output dimensions has simply disappeared.
Loading Keras model...
>>> keras_model = load_model('model_checkpoints/DGCNN_modelbest_with_noise.h5')
>>> keras_output = keras_model.output
>>> keras_output
<tf.Tensor 'dense_2/truediv_5:0' shape=(None, 432, 5) dtype=float32>
Converting Keras model to ONNX...
>>> input_keras_model = 'model_checkpoints/DGCNN_modelbest_with_noise.h5'
>>> output_onnx_model = 'model_checkpoints/DGCNN_modelbest_with_noise.onnx'
>>> keras_model = load_model(input_keras_model)
>>> onnx_model = onnxmltools.convert_keras(keras_model)
>>> onnxmltools.utils.save_model(onnx_model, output_onnx_model)
Loading ONNX model...
>>> model = onnx.load("model_checkpoints/DGCNN_modelbest_with_noise.onnx")
>>> from google.protobuf.json_format import MessageToDict
>>> for _output in model.graph.output:
...     m_dict = MessageToDict(_output)
...     dim_info = m_dict.get("type").get("tensorType").get("shape").get("dim")
...     output_shape = [d.get("dimValue") for d in dim_info]
...     print(m_dict["name"])
...     print(output_shape)
...
dense_2
[None, None, '5']
Any suggestions? What am I doing wrong? I don't see many examples with multidimensional output layers. Is that the reason?
Thank you for your time.
Solution 1:[1]
I have no problem following the example; after loading and running the converted model I still get the same results. I use the SavedModel (.pb) format, which stores the full model structure, produced by model.save( ... ).
( 1 ) : Save and convert
import tensorflow as tf
import tf2onnx
import onnx

model = tf.keras.Sequential()
# model.add(tf.keras.layers.InputLayer(input_shape=(1, 100, 100, 3), name='DekDee Input'))
model.add(tf.keras.layers.Dense(4, activation="relu", name='output1'))
# Note: the specified names and types are significant
input_signature = [tf.TensorSpec([3, 3], tf.float32, name='input1')]
#Use from_function for tf functions
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=13)
onnx.save(onnx_model, r"F:\models\onnx\model.onnx")
OR
model.save(r"F:\models\onnx\modelpb")
Command:
python -m tf2onnx.convert --saved-model "F:\models\onnx\modelpb" --output "F:\models\onnx\model_2.onnx" --opset 13
( 2 ) : Load and run
import onnxruntime as ort
import numpy as np
import tensorflow as tf
# Change shapes and types to match the model
input1 = np.zeros((3, 3), np.float32)
sess = ort.InferenceSession(r"F:\models\onnx\model.onnx", providers=["CUDAExecutionProvider"])
results_ort = sess.run(["output1"], {"input1": input1})
F:\temp\Python\tf_onnx>python onnx_verification_test_2.py
[array([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]], dtype=float32)]
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Jirayu Kaewprateep |

