TFLite inference using C++
I am trying to perform inference on an image using a TFLite model in C++. The resources I have found so far deal with classification problems, and I am unable to apply their logic to my problem.
My problem statement: given an image, the model fills in proper lighting for it. The model can be found here or here. It is a TFLite model which accepts an image with dimensions [1, 160, 160, 3]. I am able to run the script properly in Python, but since I am new to C++, I am not able to reproduce the code in C++.
Here's the Python snippet:
import tensorflow as tf

# Interpreter setup (model_path is the path to the .tflite file)
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Read the image and convert it to float32 RGB in [0, 1]
image_raw = tf.io.read_file(img_path)
original_image = tf.io.decode_image(image_raw, channels=3)
original_image = tf.cast(original_image, dtype=tf.float32)
original_image = original_image / 255.0

# Resize to the model input size and add the batch dimension -> [1, img_h, img_w, 3]
resized_image = tf.image.resize(original_image, [img_h, img_w])
resized_image = tf.expand_dims(resized_image, axis=0)

interpreter.set_tensor(input_details[0]['index'], resized_image.numpy())
interpreter.invoke()

# Output 1 holds the alpha maps, output 0 the enhanced image at model resolution
# (the enhanced output is not used below; the enhancement is re-applied at full resolution)
a_maps = tf.cast(interpreter.get_tensor(output_details[1]['index']), tf.float32)
enhanced_img = tf.cast(interpreter.get_tensor(output_details[0]['index']), tf.float32)

if original_image.shape.rank == 4:
    original_image = tf.squeeze(original_image, axis=0)
if a_maps.shape.rank == 4:
    a_maps = tf.squeeze(a_maps, axis=0)

# Get the original image height and width
h, w, _ = original_image.shape
# Resize the alpha maps to the original image size
a_maps = tf.image.resize(a_maps, [h, w], method=tf.image.ResizeMethod.BICUBIC)
# a_maps = (a_maps - 1) / 2

# Iteratively apply the enhancement curve: I = I + a * (I^2 - I)
for _ in range(iteration):
    original_image = original_image + a_maps * (tf.square(original_image) - original_image)

# Clip to [0, 255] and convert back to uint8
enhanced_original_image = tf.clip_by_value(original_image * 255.0, 0, 255)
enhanced_original_image = tf.cast(enhanced_original_image, dtype=tf.uint8)
So far I have been able to read in the model and feed in the input image with resizing, and I think I have been able to get the image into the required 4 dimensions, though I am not sure. Below is the C++ snippet:
// Required headers:
#include <cstdio>
#include <cstdlib>
#include <memory>
#include <opencv2/opencv.hpp>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// `model` was created earlier with tflite::FlatBufferModel::BuildFromFile(<path to .tflite>)
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model.get(), resolver)(&interpreter);
if (interpreter == nullptr)
{
    fprintf(stderr, "Failed to initiate the interpreter\n");
    exit(-1);
}
printf("INTERPRETER LOADED SUCCESSFULLY\n");
if (interpreter->AllocateTensors() != kTfLiteOk)
{
    fprintf(stderr, "Failed to allocate tensors\n");
    exit(-1);
}

// Read the image (OpenCV loads 8-bit BGR) and convert it to float RGB in [0, 1],
// matching the Python preprocessing.
cv::Mat frame = cv::imread(imageFile);
cv::cvtColor(frame, frame, cv::COLOR_BGR2RGB);
cv::Mat original_f;
frame.convertTo(original_f, CV_32FC3, 1.0 / 255.0);

// Resize to the model input size (width = height = 160); the leading batch
// dimension of [1, 160, 160, 3] is implicit in the tensor's memory layout.
cv::Mat img;
cv::resize(original_f, img, cv::Size(width, height), 0, 0, cv::INTER_NEAREST);

// Copy the float data into the input tensor and run inference.
memcpy(interpreter->typed_input_tensor<float>(0), img.data, img.total() * img.elemSize());
if (interpreter->Invoke() != kTfLiteOk)
{
    fprintf(stderr, "Failed to invoke the interpreter\n");
    exit(-1);
}
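For debugging, a minimal sketch of how the number of output tensors and their shapes can be printed (assuming the interpreter built above; the indices should correspond to output_details[0] and output_details[1] in the Python script):

// Print each output tensor's index, name, and shape for sanity-checking.
for (size_t i = 0; i < interpreter->outputs().size(); ++i)
{
    const TfLiteTensor* out = interpreter->tensor(interpreter->outputs()[i]);
    printf("output %zu (%s): [", i, out->name ? out->name : "unnamed");
    for (int d = 0; d < out->dims->size; ++d)
        printf(d ? ", %d" : "%d", out->dims->data[d]);
    printf("]\n");
}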
Now, I am unable to extract my image from the output tensors. Please help me out; I have been stuck on this for the last week.
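A hedged sketch of how the outputs could be copied back into cv::Mat and the Python post-processing reproduced with OpenCV, assuming output 0 is the enhanced image and output 1 the alpha maps, both shaped [1, 160, 160, 3]; original_f, width, and height are the variables from the snippet above, and the value of iteration is a placeholder for whatever the Python script uses:

// Assumption: output 0 = enhanced image, output 1 = alpha maps, both [1, 160, 160, 3].
float* maps_data = interpreter->typed_output_tensor<float>(1);

// Wrap the raw output buffer (HWC float) in a cv::Mat, then clone so it owns its data.
cv::Mat a_maps(height, width, CV_32FC3, maps_data);
a_maps = a_maps.clone();

// Resize the alpha maps to the original image size (bicubic, as in the Python script).
cv::Mat a_maps_full;
cv::resize(a_maps, a_maps_full, original_f.size(), 0, 0, cv::INTER_CUBIC);

// Iteratively apply the enhancement curve I = I + a * (I^2 - I) at full resolution.
const int iteration = 8;  // placeholder: use the same value as `iteration` in the Python script
cv::Mat enhanced = original_f.clone();
for (int i = 0; i < iteration; ++i)
{
    cv::Mat diff = enhanced.mul(enhanced) - enhanced;
    enhanced = enhanced + a_maps_full.mul(diff);
}

// Back to 8-bit (convertTo saturates to [0, 255]) and to BGR before saving.
cv::Mat out8u;
enhanced.convertTo(out8u, CV_8UC3, 255.0);
cv::cvtColor(out8u, out8u, cv::COLOR_RGB2BGR);
cv::imwrite("enhanced.png", out8u);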
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow