Cause and effect of jitter in an operational IP video network

Author: Paul Robinson

Published 7th December 2016

What are the advantages of moving to an IP infrastructure?

The most commonly cited advantage of deploying IP video networks in production and other operational applications is the ability to use Commercial Off-The-Shelf (COTS) IT-based infrastructure, which takes advantage of the economies of scale of the IT industry when compared with the relatively small broadcast industry. Additional advantages include reduced cabling cost and weight, along with much greater routing flexibility that enables more flexible production options. These advantages mean that in many parts of the world, trials, proofs of concept and early deployments of IP video networks are already in place.

What are the biggest challenges when moving to an IP infrastructure?

IP brings both technical and skills challenges. The technical challenges include jitter, latency, the risk of dropped packets and network asymmetry, which results in different path delays upstream and downstream.

Deploying IP for video production applications is effectively the collision of the two worlds of video engineering and network engineering. Video engineers are comfortable with the use of SDI, coax, patch panels, black burst and tri-level sync for timing and, above all, monitoring signal quality. The challenge for the video engineer is to understand IT technology and the impact of an IT infrastructure on the video.

On the other hand, network engineers are familiar and comfortable with IP flows, protocols, network traffic, router configuration, and Precision Time Protocol (PTP) and Network Time Protocol (NTP) for timing. The biggest difference, however, is that in most data center applications lost data can be re-sent; this is not the case with high-bitrate video. The challenge for the network engineer is in understanding video technology and its impact on IT infrastructure.

What causes IP packet jitter?

In any digital system, jitter is any deviation from the regular periodicity of the signal. In IP networks, jitter is the variation of the packet arrival interval at a receiver. If the network routers and switches are all configured and operating correctly, the most common cause of jitter is network congestion at router/switch interfaces.

The application within a network element will typically require the data to be received in a non-bursty form. As a result, receiving devices adopt a de-jitter buffer, with the application receiving packets from the output of this buffer rather than directly. Packets flow out of the buffer at a regular rate, smoothing out the variations in the timing of the packets flowing into the buffer.
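As a rough illustration of the principle, the Python sketch below models a simple de-jitter buffer: packets are queued as they arrive, however irregularly, and are released to the application one per fixed drain interval. The class name, method names and parameters are hypothetical and the logic is deliberately simplified.

```python
import collections

class DejitterBuffer:
    """Minimal de-jitter buffer sketch (illustrative only): irregular
    arrivals in, packets released at a fixed drain interval out."""

    def __init__(self, drain_interval_us):
        self.queue = collections.deque()
        self.drain_interval_us = drain_interval_us
        self.next_release_us = None

    def on_packet_arrival(self, packet, arrival_time_us):
        # Packets may arrive in bursts or with gaps; simply enqueue them.
        self.queue.append(packet)
        if self.next_release_us is None:
            # Delay the first release by one interval to build headroom.
            self.next_release_us = arrival_time_us + self.drain_interval_us

    def on_tick(self, now_us):
        # Called periodically; emit at most one packet per drain interval,
        # smoothing out the irregular arrival timing.
        if self.next_release_us is not None and now_us >= self.next_release_us:
            self.next_release_us += self.drain_interval_us
            if self.queue:
                return self.queue.popleft()  # regular, smoothed output
        return None  # nothing available to emit yet
```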

What can be the impact of excessive jitter?

The rate at which packets flow out of the de-jitter buffer is known as the "drain rate". The rate at which the buffer receives data is known as the "fill rate". If the buffer size is too small then, should the drain rate exceed the fill rate, the buffer will eventually underflow, resulting in a stalled packet flow; should the fill rate exceed the drain rate, the buffer will eventually overflow, resulting in packet loss. However, if the buffer size is too large, then the network element will introduce excessive latency.
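The underflow and overflow behaviour can be illustrated with a small simulation. The Python sketch below uses illustrative names and parameters: it feeds a list of packet arrival times into a fixed-capacity buffer that drains one packet per interval, and counts overflow drops and underflow stalls.

```python
def simulate_dejitter(arrival_times_us, drain_interval_us, capacity):
    """Toy event-driven sketch (not a real implementation): arrivals fill
    the buffer, a fixed-rate drain empties it. Returns the number of
    dropped packets (overflow) and empty drain ticks (underflow)."""
    if not arrival_times_us:
        return 0, 0
    occupancy, dropped, stalls = 0, 0, 0
    arrivals = iter(sorted(arrival_times_us))
    next_arrival = next(arrivals, None)
    # Start draining one interval after the first packet arrives.
    next_drain = arrival_times_us[0] + drain_interval_us

    while next_arrival is not None or occupancy > 0:
        if next_arrival is not None and next_arrival <= next_drain:
            if occupancy < capacity:
                occupancy += 1   # packet buffered
            else:
                dropped += 1     # overflow: fill rate exceeded drain rate for too long
            next_arrival = next(arrivals, None)
        else:
            if occupancy > 0:
                occupancy -= 1   # packet released to the application
            else:
                stalls += 1      # underflow: drain rate exceeded fill rate
            next_drain += drain_interval_us
    return dropped, stalls
```

For example, a burst of closely spaced arrivals into a small buffer produces drops, while a long gap between arrivals produces stalls.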

How do you measure IP packet Jitter?

Jitter is measured by plotting packet inter-arrival times against time.

This is useful for identifying variations in jitter over time, but it is also useful to plot the distribution of inter-arrival intervals versus frequency of occurrence as a histogram. If the jitter is so large that packets arrive outside the range of the de-jitter buffer, the out-of-range packets are dropped. Being able to identify outliers helps to determine whether the network's jitter performance is likely to cause, or is already causing, packet loss.
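A simple way to produce both views is sketched below, assuming NumPy and matplotlib are available; the function name and inputs are illustrative. It plots the inter-arrival interval against time alongside a histogram of the same intervals, making outliers easy to spot.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_inter_arrival(arrival_times_us):
    """Illustrative sketch: plot packet inter-arrival intervals over time
    and as a histogram, to expose both trends and outliers."""
    intervals = np.diff(np.asarray(arrival_times_us))

    fig, (ax_time, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))

    # Inter-arrival interval versus time: shows how jitter varies.
    ax_time.plot(arrival_times_us[1:], intervals)
    ax_time.set_xlabel("arrival time (µs)")
    ax_time.set_ylabel("inter-arrival interval (µs)")

    # Distribution of intervals: outliers here indicate packets that
    # risk falling outside the range of the de-jitter buffer.
    ax_hist.hist(intervals, bins=50)
    ax_hist.set_xlabel("inter-arrival interval (µs)")
    ax_hist.set_ylabel("frequency of occurrence")

    plt.tight_layout()
    plt.show()
```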

A series of packets with long inter-arrival intervals will inevitably be followed by a corresponding burst of packets with short inter-arrival intervals. It is this burst of traffic that can result in buffer overflow and lost packets. This occurs when the fill rate exceeds the drain rate for a period of time that exceeds the remaining buffer headroom, expressed in microseconds.

How do you establish the de-jitter buffer size?

To establish the necessary de-jitter buffer size, an alternative form of jitter measurement known as Delay Factor (DF) is used. This is a time-based measurement indicating the buffer size, expressed in units of time, needed to de-jitter the traffic.

In IP video networks, the media payload is transported over RTP (Real-time Transport Protocol). One form of DF measurement takes advantage of the fact that the RTP header carries timestamp information reflecting the sampling instant of the RTP data packet. This is known as the Time-Stamped Delay Factor, or TS-DF (as defined by EBU Tech 3337).

The TS-DF measurement is based on the relative transit time, which is the difference between a packet's RTP timestamp and the receiver's clock at the time of arrival, measured in microseconds. The measurement period is 1 second; the first packet at the start of the measurement period is considered to have no jitter and is used as the reference packet.

For each subsequent packet, the relative transit time between that packet and the reference packet is calculated. At the end of the measurement period, the maximum and minimum values are extracted and the Time-Stamped Delay Factor is calculated as:

TS-DF = D(Max) - D(Min)

The maximum value of TS-DF over a given period is indicative of the de-jitter buffer size required during that period, for a receiving device at that network node.
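For illustration, a minimal TS-DF calculation in the spirit of EBU Tech 3337 might look like the Python sketch below. The input names are hypothetical, RTP timestamp wrap-around is ignored, and the RTP clock rate must be supplied to convert timestamps to microseconds.

```python
def ts_df(arrival_times_us, rtp_timestamps, rtp_clock_hz, period_us=1_000_000):
    """Sketch of a Time-Stamped Delay Factor (TS-DF) calculation over
    successive 1-second windows. Inputs are illustrative: receiver
    arrival times in microseconds and the RTP timestamps of the same
    packets, in RTP clock ticks."""
    results = []
    window_start = 0
    n = len(arrival_times_us)
    while window_start < n:
        # The first packet of each window is the reference packet and is
        # treated as having no jitter (relative transit time = 0).
        ref_arrival = arrival_times_us[window_start]
        ref_rtp_us = rtp_timestamps[window_start] * 1_000_000 / rtp_clock_hz
        d_values = [0.0]
        i = window_start + 1
        while i < n and arrival_times_us[i] - ref_arrival < period_us:
            rtp_us = rtp_timestamps[i] * 1_000_000 / rtp_clock_hz
            # Relative transit time: arrival offset minus sampling offset.
            d = (arrival_times_us[i] - ref_arrival) - (rtp_us - ref_rtp_us)
            d_values.append(d)
            i += 1
        # TS-DF for this window: spread between the maximum and minimum
        # relative transit times, i.e. D(Max) - D(Min).
        results.append(max(d_values) - min(d_values))
        window_start = i
    return results  # one TS-DF value (in microseconds) per measurement window
```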

