
What is "interarrival jitter"?

I've noticed it's available in mtr, but the calculated values don't make much sense to me: they often seem much larger than even the maximum jitter value. What would they be useful for?

Cns# mtr --report{,-cycles=100} --order "SRL BAWV JMXI" … ; date
HOST: Cns???????                    Snt   Rcv Loss%   Best   Avg  Wrst StDev  Jttr Javg Jmax Jint
  1.|-- cnst?????????????????????   100   100  0.0%    8.0   8.7  28.3   2.2   0.3  0.7 19.6  9.4
  2.|-- v320.core1.fra1.he.net      100   100  0.0%    5.3   9.2  22.9   4.5   0.4  4.8 17.6 73.4
  3.|-- 100ge5-2.core1.par2.he.ne   100   100  0.0%   15.0  18.7  41.0   4.6   7.3  4.2 26.0 47.8
  4.|-- 10ge15-1.core1.ash1.he.ne   100    99  1.0%   93.0  99.4 133.3   6.7   0.9  6.0 40.2 125.9
  5.|-- abilene-as11537.gigabitet   100   100  0.0%   93.7  98.0 159.5  11.7   0.0  7.8 65.7 149.4
cnst

1 Answer


(Fun fact: I used to work with the guy that originally wrote mtr in '97)

If you check out the source on GitHub, specifically in net.c, there's this tidbit:

  int jinta;        /* estimated variance,? rfc1889's "Interarrival Jitter" */

RFC 1889 is the RFC for RTP, later superseded by RFC 3550. Here's the excerpt detailing how interarrival jitter is calculated (taken from RFC 3550):

An estimate of the statistical variance of the RTP data packet interarrival time, measured in timestamp units and expressed as an unsigned integer. The interarrival jitter J is defined to be the mean deviation (smoothed absolute value) of the difference D in packet spacing at the receiver compared to the sender for a pair of packets.

As shown in the equation below, this is equivalent to the difference in the "relative transit time" for the two packets; the relative transit time is the difference between a packet's RTP timestamp and the receiver's clock at the time of arrival, measured in the same units.

If Si is the RTP timestamp from packet i, and Ri is the time of arrival in RTP timestamp units for packet i, then for two packets i and j, D may be expressed as

D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)

The interarrival jitter SHOULD be calculated continuously as each data packet i is received from source SSRC_n, using this difference D for that packet and the previous packet i-1 in order of arrival (not necessarily in sequence), according to the formula

J(i) = J(i-1) + (|D(i-1,i)| - J(i-1))/16

Whenever a reception report is issued, the current value of J is sampled.

The jitter calculation MUST conform to the formula specified here in order to allow profile-independent monitors to make valid interpretations of reports coming from different implementations. This algorithm is the optimal first-order estimator and the gain parameter 1/16 gives a good noise reduction ratio while maintaining a reasonable rate of convergence [22]. A sample implementation is shown in Appendix A.8. See Section 6.4.4 for a discussion of the effects of varying packet duration and delay before transmission.
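The estimator the RFC describes can be sketched in a few lines. This is a hypothetical illustration of the formula, not mtr's actual implementation; the function name and the example timestamps are made up, and both timestamp lists are assumed to be in the same units:

```python
def interarrival_jitter(send_times, recv_times):
    """Run the RFC 3550 jitter estimator over a packet trace.

    send_times[i] plays the role of Si (sender timestamp) and
    recv_times[i] the role of Ri (arrival time), in the same units.
    Returns the final running estimate J.
    """
    j = 0.0
    for i in range(1, len(send_times)):
        # D(i-1, i) = (Ri - R(i-1)) - (Si - S(i-1))
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        # First-order estimator with gain 1/16, per the formula above
        j += (abs(d) - j) / 16.0
    return j

# Packets sent every 20 ms; the third arrives 5 ms late, the fourth
# recovers 5 ms, so |D| is 0, 5, 5 for the three intervals.
sent = [0, 20, 40, 60]
recv = [10, 30, 55, 70]
print(interarrival_jitter(sent, recv))  # → 0.60546875
```

Note how slowly J converges: after two 5 ms deviations the estimate is still only ~0.6 ms, which is why a per-hop `Jint` sampled at the end of a run can look disconnected from the raw min/max jitter columns.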

John Jensen
  • So, what does it actually mean? Why would I want to see such a statistic, how does it benefit an average person in judging what's up with the connection? – cnst Dec 03 '13 at 02:58
  • Jitter is simply the measurement of delay variation between packets in packet streams. If you're testing the connection for applications that are jitter-sensitive (e.g. VoIP), this would be useful to you. For average troubleshooting purposes, however (especially over the Internet), the usefulness of this metric is debatable. – John Jensen Dec 03 '13 at 03:13
  • Well, my interest is primarily in packet loss and congestion, and jitter is an indicator, although [my own definition](http://networkengineering.stackexchange.com/q/5127/296) of jitter seems to include the increase in latency over the lowest possible time that a packet travels a given route. So, basically, for my own purposes, it would seem that both `Javg` and `Jint` are not very useful? Especially `Jint` in a `--report`-style output, since the value would not reflect the whole count, as it is mostly based on the newest packets? – cnst Dec 03 '13 at 04:00