Best way to feed data to a tflite interpreter on ARM Linux
I have a Python application running on an ARM Linux device. The application continuously acquires images and feeds them to a TensorFlow Lite model. This is the relevant portion of the code:
```python
import tflite_runtime.interpreter as tflite
import numpy as np

interpreter = tflite.Interpreter(model_path)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']

while True:
    img = acquire_img()                 # 3D numpy array
    img = np.expand_dims(img, 0)        # add the batch dimension
    interpreter.set_tensor(input_details[0]['index'], img)
    interpreter.invoke()
    output = interpreter.get_tensor(output_details[0]['index'])
```
I was reading the Python API documentation to check for possible optimizations. The description of the `set_tensor` function is:
> Sets the value of the input tensor. Note this copies data in `value`. If you want to avoid copying, you can use the `tensor()` function to get a numpy buffer pointing to the input buffer in the tflite interpreter.
The description of the `tensor` function is:
> Returns a function that gives a numpy view of the current tensor buffer. This allows reading and writing to these tensors without copies. This more closely mirrors the C++ Interpreter class interface's `tensor()` member, hence the name. Be careful not to hold these output references through calls to `allocate_tensors()` and `invoke()`. This function cannot be used to read intermediate results.
>
> Notice how this function avoids making a numpy array directly. This is because it is important not to hold actual numpy views to the data longer than necessary. If you do, then the interpreter can no longer be invoked, because it is possible the interpreter would resize and invalidate the referenced tensors. The NumPy API doesn't allow any mutability of the underlying buffers.
>
> Returns: a function that can return a new numpy array pointing to the internal TFLite tensor state at any point. It is safe to hold the function forever, but it is not safe to hold the numpy array forever.
I am a bit confused: would I gain any advantage by switching from `set_tensor()` to `tensor()`? Is it feasible in my case?
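If I understand the documentation correctly, the zero-copy variant would look roughly like the sketch below. This is untested and assumes a single input tensor and that `acquire_img()` returns an array whose dtype and shape already match the model's input (minus the batch dimension):

```python
import tflite_runtime.interpreter as tflite
import numpy as np

interpreter = tflite.Interpreter(model_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Holding the functions returned by tensor() is safe; holding the arrays
# they return across allocate_tensors()/invoke() is not.
input_fn = interpreter.tensor(input_details[0]['index'])
output_fn = interpreter.tensor(output_details[0]['index'])

while True:
    img = acquire_img()          # 3D numpy array, dtype assumed to match the input tensor
    # Write the frame into the interpreter's input buffer in place
    # (the leading index 0 is the batch dimension).
    input_fn()[0] = img
    interpreter.invoke()
    # Copy the result out so no view into the internal buffer is kept around.
    output = output_fn().copy()
```

My main doubt is whether the in-place assignment really saves a copy of the frame compared to `set_tensor()`, or whether it just moves the same copy into numpy.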