TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32)

I was building a mobile app that recognizes images using Flutter, and I got this runtime error while testing the app.

Caused by: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F (which is compatible with the TensorFlowLite type FLOAT32).

I trained my custom model using TensorFlow Lite image classification, running it on Google Colab.
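The `[[F` in the error message is the JVM's runtime name for a two-dimensional `float` array; a two-dimensional `byte` array would be `[[B`. The interpreter is reporting that the Java output buffer's element type (FLOAT32) does not match the model's quantized UINT8 output tensor. A small sketch confirming what those array type names mean:

```java
// Demonstrates the JVM runtime names behind the error message:
// "[[F" is float[][], "[[B" is byte[][].
public class ArrayTypeNames {
    public static String floatArrayName() {
        return new float[1][3].getClass().getName(); // "[[F"
    }

    public static String byteArrayName() {
        return new byte[1][3].getClass().getName();  // "[[B"
    }

    public static void main(String[] args) {
        System.out.println(floatArrayName()); // prints "[[F"
        System.out.println(byteArrayName());  // prints "[[B"
    }
}
```

So the fix is either to re-export the model with a float type (Solution 1) or to give the interpreter a byte buffer that matches the UINT8 tensor (Solution 2).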



Solution 1:[1]

After reading some documentation, I found that the TensorFlow Lite model should have post-training quantization applied. See Image Classification with TensorFlow Lite.

Run this to define the quantization configuration:

config = QuantizationConfig.for_float16()

Then export the TensorFlow Lite model with that configuration:

model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)

Finally, export the labels with the same configuration:

model.export(export_dir='.', quantization_config=config, export_format=ExportFormat.LABEL)

Solution 2:[2]

In your code, change the output buffer from float to byte, then recover the float value from the byte data.

before:

float[][] labelProb = new float[1][labels.size()];
for (int i = 0; i < labels.size(); ++i) {
    float confidence = labelProb[0][i];
}

after:

byte[][] labelProb = new byte[1][labels.size()];
for (int i = 0; i < labels.size(); ++i) {
    // Java bytes are signed: mask with 0xFF to recover the unsigned 0..255
    // value, then scale it by the output tensor's quantization parameters
    // (commonly 1/255 with a zero point of 0 for classifier outputs).
    float confidence = (labelProb[0][i] & 0xFF) / 255.0f;
}
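A plain `(float)` cast on the byte is a trap: Java bytes are signed, so any quantized value above 127 comes out negative. Below is a minimal, self-contained sketch of dequantizing a UINT8 output byte. The scale (1/255) and zero point (0) are assumptions for a typical classifier output; read the real values from your model's output tensor quantization parameters.

```java
// Minimal sketch: dequantize a UINT8 byte via real = scale * (q - zeroPoint).
// SCALE and ZERO_POINT are assumed values; take the real ones from the
// output tensor's quantization parameters in your interpreter.
public class Dequantize {
    static final float SCALE = 1.0f / 255.0f; // assumed quantization scale
    static final int ZERO_POINT = 0;          // assumed zero point

    // Mask with 0xFF first: Java bytes are signed, and the raw value
    // represents an unsigned 0..255 quantized number.
    public static float dequantize(byte q) {
        return SCALE * ((q & 0xFF) - ZERO_POINT);
    }

    public static void main(String[] args) {
        byte raw = (byte) 255;               // max confidence as a raw byte
        System.out.println(dequantize(raw)); // ~1.0 (correct)
        System.out.println((float) raw);     // -1.0: a plain cast is wrong
    }
}
```

This is why the loop above masks with `& 0xFF` instead of casting directly.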

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1
Solution 2: Hassan El khalifte