TFLite with tflite_model_maker: RuntimeError: Given shapes, [1, 2, 6, 224] and [1, 10, 10, 224], are not broadcastable
I have trained a model using tflite_model_maker as shown in the official tutorial.
All in all, the code is very compact:
MODEL_SPEC_STR = 'efficientdet_lite4'
NUM_EPOCHS = 50
[...]
logger.info('Getting spec={}'.format(MODEL_SPEC_STR))
spec = model_spec.get(MODEL_SPEC_STR, verbose=1, debug=False, tf_random_seed=1)
logger.info('Constructing dataloader')
train_data, validation_data, test_data = object_detector.DataLoader.from_csv(PATH_TO_CSV)
logger.info('Starting training..')
model = object_detector.create(
    train_data, model_spec=spec, batch_size=BATCH_SIZE, train_whole_model=True,
    validation_data=validation_data, epochs=NUM_EPOCHS
)
logger.info(model.summary())
logger.info('Starting test..')
res = model.evaluate(test_data, batch_size=1)
logger.info('Test res={}'.format(res))
logger.info('Saving tflite model..')
model.export(
    export_dir=MODEL_SAVE_FOLDER, tflite_filename='efficientDet4FP16T1Epoch.tflite',
    export_format=[ExportFormat.TFLITE]
)
logger.info('Finished.')
with a BATCH_SIZE > 1, and everything went well, as indicated by the log entry:
Test res={'AP': 0.92035127, [...]
Now I want to use the detector to run inference on single images. Therefore, I followed the tutorial; the code looks like:
interpreter = tf.lite.Interpreter(model_path=model_path)
[...]
for image_path in image_paths:
    preprocessed_image, original_image, width, height = preprocess_image(image_path)
    interpreter.resize_tensor_input(0, [1, width, height, 3], strict=True)
    interpreter.allocate_tensors()
    [...]
Now I get an error calling interpreter.resize_tensor_input.
The error is (obviously) connected to the parameters:
1. interpreter.resize_tensor_input(0, [1, width, height, 3], strict=False): RuntimeError: Given shapes, [1, 6, 2, 224] and [1, 10, 10, 224], are not broadcastable. Node number 314 (ADD) failed to prepare.
2. interpreter.resize_tensor_input(0, [1, height, width, 3], strict=False): RuntimeError: Given shapes, [1, 2, 6, 224] and [1, 10, 10, 224], are not broadcastable. Node number 314 (ADD) failed to prepare.
3. interpreter.resize_tensor_input(0, [1, width, height, 3], strict=True): RuntimeError: Attempting to resize dimension 1 of tensor 0 with value 640 to 376. ResizeInputTensorStrict only allows mutating unknown dimensions identified by -1.
4. interpreter.resize_tensor_input(0, [1, height, width, 3], strict=True): RuntimeError: Attempting to resize dimension 1 of tensor 0 with value 640 to 114. ResizeInputTensorStrict only allows mutating unknown dimensions identified by -1.
Obviously, 1. and 2. as well as 3. and 4. only differ in the specific dimension values.
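For reference, the failing shapes can be reproduced with plain NumPy, whose broadcasting rules match what TFLite applies when preparing the ADD node (a sketch for illustration, not code from the original post):

```python
import numpy as np

# Broadcasting aligns shapes from the right; each dimension pair must be
# equal, or one of the two must be 1.
a = np.zeros((1, 2, 6, 224))    # shape produced after resizing the input
b = np.zeros((1, 10, 10, 224))  # shape the graph was built for

try:
    _ = a + b  # mirrors the ADD node that fails to prepare
except ValueError as e:
    print('not broadcastable:', e)

# A compatible counter-example: a dimension of 1 broadcasts to any size.
c = np.zeros((1, 1, 1, 224))
print((b + c).shape)  # (1, 10, 10, 224)
```

Since 2 != 10 and 6 != 10 (and neither is 1), the addition cannot be broadcast, which is exactly what the RuntimeError reports.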
What I do not get is why the model.evaluate(test_data, batch_size=1) call works while the manual iteration described above doesn't, even though the images are the same ones.
Does anyone have a clue why this is happening?
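Errors 3. and 4. suggest the exported graph has a fixed input size (the "value 640" in the messages), so one alternative to resizing the tensor is resizing the image to that shape before invoking the interpreter. Below is a minimal sketch of that idea, assuming a 640x640 NHWC input ([batch, height, width, channels]) as implied by the error messages; the nearest-neighbor resize helper and the example image dimensions are hypothetical stand-ins for preprocess_image():

```python
import numpy as np

INPUT_SIZE = 640  # assumed from "value 640" in the error messages

def resize_nearest(image, out_h, out_w):
    """Minimal nearest-neighbor resize for an HWC image array."""
    in_h, in_w = image.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row index per output row
    cols = np.arange(out_w) * in_w // out_w  # source col index per output col
    return image[rows[:, None], cols[None, :]]

# Fake 376x114 image standing in for the real preprocessed image.
image = np.zeros((376, 114, 3), dtype=np.uint8)

# Resize to the model's fixed shape and add the batch dimension; this way
# there is no resize_tensor_input/allocate_tensors call to fail.
batch = resize_nearest(image, INPUT_SIZE, INPUT_SIZE)[None, ...]
print(batch.shape)  # (1, 640, 640, 3)
```

In a real pipeline one would of course use a proper image-resize routine (and match the model's normalization), but the point of the sketch is the NHWC order and the fixed input shape.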
Update
I changed the save format to ExportFormat.SAVED_MODEL and used a converter:
converter = tf.lite.TFLiteConverter.from_saved_model(model_path)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
tflite_model = converter.convert()
interpreter = tf.lite.Interpreter(model_content=tflite_model)
With this config, I'm getting another error:
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
Which is probably caused by:
tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1892] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s): Flex ops: FlexConv2D, FlexTensorListConcatV2, FlexTensorListReserve, FlexTensorListSetItem
This is what I thought I was already doing via converter.target_spec.supported_ops.
Does anyone know how to fix this? (Or the previous one ;) )
Update 2
The error above seems to be caused by operations the TFLite library does not support: the mentioned ops (e.g. FlexConv2D) are not listed in the supported operations list. This is confusing, because I initially created the model using tflite_model_maker.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow