How to handle receiving real-time signals and passing them to an inference engine on Jetson platforms
I am looking for advice on how to handle receiving a large number of signals in real time, storing them in a buffer, and then passing that buffer to an inference engine/model on Jetson (L4T) platforms.
Currently I have something along the lines of:

```python
while True:
    for _ in range(read_x_times):
        # Read a chunk of samples from the input stream
        temp_buffer = read_subset_of_samples_from_input_stream()
        # Preprocess and copy into the model's input buffer
        model_buffer[start:stop] = process(temp_buffer)
    if model_buffer_is_full:
        # Blocks here; samples arriving during inference are lost
        inference_engine.execute_v2(model_buffers)
```
The issue I have is that when model_buffer is full and I call the inference engine, I lose some samples from the input stream. model_buffer lives in memory shared between the GPU and CPU.
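One direction I've sketched is double (ping-pong) buffering, so the read loop can fill one buffer while the engine processes the other. This is only a rough sketch; `allocate_model_buffer` and `fill_from_stream` are placeholders for my actual allocation and read/preprocess code, not real APIs:

```python
import threading

# Two model buffers so the read loop can fill one while the engine
# processes the other (allocate_model_buffer and fill_from_stream
# are placeholders for the real setup).
buffers = [allocate_model_buffer(), allocate_model_buffer()]
worker = None
active = 0

while True:
    fill_from_stream(buffers[active])   # read + preprocess samples
    if worker is not None:
        worker.join()                   # previous inference must finish
    worker = threading.Thread(
        target=inference_engine.execute_v2, args=([buffers[active]],)
    )
    worker.start()                      # inference runs off the read loop
    active ^= 1                         # switch to the other buffer
```

This only helps if filling a buffer takes at least as long as inference; otherwise the `join()` still stalls the reads and I drop samples again.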
What would be the best way to continue receiving samples while the model is processing model_buffer?
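I've also considered fully decoupling the stream reads from inference with a producer thread and a bounded queue. Again a sketch with assumed names: `MODEL_BUFFER_SAMPLES`, the queue size, and the chunk handling are placeholders for my real values:

```python
import threading
import queue

# Bounded queue so memory stays fixed; put() blocks (backpressure)
# if inference falls behind and the queue fills up.
sample_queue = queue.Queue(maxsize=64)

def reader():
    # Producer: keeps draining the input stream even while inference runs
    while True:
        sample_queue.put(read_subset_of_samples_from_input_stream())

def inference_loop():
    # Consumer: assembles one full model buffer, then runs inference
    while True:
        start = 0
        while start < MODEL_BUFFER_SAMPLES:   # placeholder buffer size
            chunk = process(sample_queue.get())
            model_buffer[start:start + len(chunk)] = chunk
            start += len(chunk)
        inference_engine.execute_v2(model_buffers)

threading.Thread(target=reader, daemon=True).start()
inference_loop()
```

I'm not sure how well this holds up under the GIL on L4T if the stream read is CPU-bound rather than I/O-bound, which is part of why I'm asking.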