What is the right way to measure server/client latency (TCP & UDP)?

I am sending and receiving packets between two boards (a Jetson and a Pi). I tried using TCP and then UDP; theoretically UDP is faster, but I want to verify this with numbers. I want to be able to run my scripts, send and receive my packets, and calculate the latency at the same time. Later I will study the effect on latency of using RF modules instead of direct cables between the two boards (this is another reason I want the numbers).

What is the right way to tackle this?

I tried sending timestamps and taking the difference, but the clocks on the two boards are not synchronized. I read about NTP and iperf, but I am not sure how they can be run within my scripts. iperf measures synthetic traffic, but how can that be accurate if my real TCP or UDP application is not running with real packets being exchanged?



Solution 1:[1]

It is provably impossible to measure one-way latency with 100% accuracy, since there is no global clock. NTP estimates it by assuming the upstream and downstream delays are equal (in practice, upstream buffer delay/jitter is often greater).
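The NTP estimate above can be written out explicitly. A minimal sketch, assuming four timestamps in seconds: t1 (client sends), t2 (server receives, on the server's clock), t3 (server replies, server's clock), t4 (client receives, client's clock). The function name and example values are illustrative, not from the original answer:

```python
def ntp_estimate(t1, t2, t3, t4):
    """NTP-style clock offset and round-trip delay from four timestamps."""
    offset = ((t2 - t1) + (t3 - t4)) / 2  # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)         # total time spent on the wire
    return offset, delay

# Example: server clock runs 5 ms ahead, links are symmetric at 2 ms each way,
# server spends 1 ms processing.
offset, delay = ntp_estimate(0.000, 0.007, 0.008, 0.005)
print(offset, delay)  # 0.005 s offset, 0.004 s round-trip delay
```

Note that `offset` is only correct if the two one-way delays really are equal; any asymmetry goes straight into the offset error, which is exactly the limitation described above.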

UDP is only "faster" because it does not use ACKs and has lower overhead. This "faster" is not latency. Datacom "speed" is a combination of latency, bandwidth, serialization delay (the time to "clock out" the data), buffer delay, packet overhead, and sometimes processing delay and/or protocol overhead.
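One practical way to sidestep the clock-synchronization problem entirely is to measure round-trip time (RTT) on a single clock: the sender timestamps a packet, the peer echoes it back, and the sender divides the RTT by two for a one-way estimate (again assuming symmetric links). A minimal sketch over UDP, run here against a local echo server for illustration; in practice the server side would run on the other board, and the host/port values are assumptions:

```python
import socket
import statistics
import threading
import time

HOST = "127.0.0.1"  # hypothetical; use the other board's IP in practice

# UDP echo server: send every datagram straight back to its sender.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind((HOST, 0))
PORT = srv.getsockname()[1]  # let the OS pick a free port

def echo_server():
    while True:
        data, addr = srv.recvfrom(2048)
        srv.sendto(data, addr)

threading.Thread(target=echo_server, daemon=True).start()

def measure_rtts(n=100):
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.settimeout(1.0)
    rtts = []
    for i in range(n):
        payload = str(i).encode()
        t0 = time.perf_counter()       # one clock: the sender's
        cli.sendto(payload, (HOST, PORT))
        cli.recvfrom(2048)             # block until the echo comes back
        rtts.append(time.perf_counter() - t0)
    return rtts

rtts = measure_rtts()
print(f"median RTT: {statistics.median(rtts) * 1e6:.0f} us, "
      f"one-way estimate: {statistics.median(rtts) / 2 * 1e6:.0f} us")
```

Because both timestamps come from the same clock, no synchronization is needed; the cost is that RTT/2 inherits the same symmetric-path assumption as NTP. The same pattern works over TCP (echo the bytes back over the connected socket), which lets you compare the two protocols with your real packet sizes rather than iperf's synthetic traffic.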

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Andrew