serial port: control, predict or at least optimize read() latency
I am designing an application running on an ARM9, communicating over a serial port using Modbus. You may know that the Modbus protocol is based on timing.
Originally I opened the port in non-blocking mode and used polling through read(). Then I learned that, while it seems to work, this is not the best, or even a good, solution for this environment. I have seen "holes" in my thread's execution of up to 60 ms (yes, milliseconds), and that is too much. I do not know if my measurements are correct; this is what I see on the screen, and it is not actually the question here.
I have learned there are a number of ways to do the "high-level" reading differently:
- use another way of polling, e.g. epoll/epoll_wait;
- open the serial port in blocking mode and, in another thread, measure the time while read() is waiting for the data (e.g. a timer somehow connected to a signal).
However, as I was told, Linux is by default not a real-time system, and nothing is guaranteed.
I am looking for advice and information on whether there are any hacks to make read() deliver the characters received from the UART through all the software layers as quickly as possible. Controlled delays of up to 1 ms would be acceptable (9600 baud).
For example, if I write the code in a specific way, the compiler and target CPU could arrange the timing so that, if a code loop is waiting for some condition, the CPU turns away to other threads, but as soon as this thread's condition is met (no idea how: an interrupt? a watcher?) it switches back to this thread as soon as it can and proceeds with it.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
