How to run inference with these Coral AI models on a computer CPU rather than on the TPU?

I have the Coral AI USB TPU and I have successfully run the Getting Started example, deploying the already compiled and trained example model (image classification) and running inference on the TPU with a parrot image:

python3 examples/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels test_data/inat_bird_labels.txt \
--input test_data/parrot.jpg

However, I would like to run inference with this same model on a computer CPU (say, my laptop, or a Raspberry Pi) to compare how long inference takes on an accelerator like the Coral vs. a general-purpose CPU.
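
For context, this is roughly the CPU-side timing harness I have in mind (a minimal sketch, assuming the tflite_runtime package is installed and that some CPU-runnable .tflite file exists; "cpu_model.tflite" below is just a placeholder name):

# Hypothetical CPU timing harness; "cpu_model.tflite" is a placeholder
# for whatever the uncompiled model turns out to be.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="cpu_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Dummy input matching the model's expected shape and dtype.
data = np.zeros(inp["shape"], dtype=inp["dtype"])

start = time.perf_counter()
interpreter.set_tensor(inp["index"], data)
interpreter.invoke()
print(f"CPU inference took {(time.perf_counter() - start) * 1000:.1f} ms")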

If my understanding is correct, the example mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite is a file which contains a TF Lite model that is quantized and compiled for the Edge TPU (which, I recall reading, only supports 8-bit models, or something along those lines).
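
Based on that understanding, my guess is that loading the Edge-TPU-compiled file with the plain TF Lite interpreter on a CPU would fail, presumably on the Edge TPU custom op, though I have not verified this:

# Sketch: try to load the Edge-TPU-compiled model with the plain CPU
# interpreter. My assumption is that allocate_tensors() fails on the
# Edge TPU custom op; I have not verified this.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(
    model_path="test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite")
interpreter.allocate_tensors()  # presumably raises on the custom op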

Where can I find this model in uncompiled form? How can I compile it for my PC (Ubuntu Linux, or Raspbian on the Raspberry Pi) and run inference on the CPU?

All I could find were the files at https://github.com/google-coral/edgetpu/tree/master/test_data, specifically mobilenet_v2_1.0_224_inat_bird_quant.tflite, which seems to be the same model but not compiled for the Coral. Is this model (which also seems to be quantized) suitable to be run on the CPU? How would I do so?
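
To at least confirm what that file contains, this is how I would inspect its tensor types with the plain interpreter (a sketch, again assuming tflite_runtime; I would expect uint8 tensors if it really is an 8-bit quantized model):

# Sketch: inspect the (apparently uncompiled) model's input/output
# tensors; uint8 dtypes would suggest it is 8-bit quantized.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(
    model_path="test_data/mobilenet_v2_1.0_224_inat_bird_quant.tflite")
interpreter.allocate_tensors()

for d in interpreter.get_input_details() + interpreter.get_output_details():
    print(d["name"], d["dtype"], d["shape"])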

Thank you


