Three big questions on OTT

by Simen Frostad
Published 1st January 2014
Issue 84 - December 2013
Every media operator wants to deliver OTT services, or is already doing so. Growth over the past 18 months has been phenomenal, and a media operation without OTT is almost inconceivable now. But for many coming from a broadcast background, the technical side of OTT is terra incognita, and a mixture of fear and misconceived ideas can inhibit even grizzled veterans of broadcast technology. Likewise, those from an IT background often approach the challenge of OTT with little or no understanding of the broadcast world and its technologies, and struggle to grasp that technical issues with media delivery can't always be solved by throwing more IT at the problem.
So the questions people ask us about OTT centre on how to plan and operate a reliable infrastructure, and how to guarantee service quality. Here are a few that come up frequently:
How do we integrate the OTT model with our other media delivery models?
This is a key challenge, because OTT and broadcast (not to mention cable, IPTV, and satellite) involve different technologies, and trying to build and operate different islands of technology in parallel can be a nightmare. In fact, a poorly planned and inefficient approach can generate so much drag on an organisation's performance as to pose a real threat to its survival. Aside from the heterogeneous infrastructures, with all their associated investment and maintenance costs, different technologies require different skills and expertise during planning, build and operation. Since there isn't a ready stock of staff with skill sets that span both broadcast and media-via-IP, getting an OTT operation off the ground requires the recruitment of a new team of IT engineers, and that has implications for the payroll.

But what's rather more difficult for the would-be OTT provider to assess is the cumulative impact on costs and performance of many small inefficiencies added together. For example: it's easy to forget that encoders are fairly complex, and if you have encoders from several vendors you need more people to cover the expertise necessary to manage and operate them, whereas if the same encoder is used throughout, the staffing demand is lower. The same applies to multiplexers, or to any computers used for transcoding, caching, or as origin servers for OTT. So in any media organisation, widespread deployment of heterogeneous technologies actually becomes a challenge to operational efficiency and a threat to business efficiency: the technologies can become part of the problem, not part of the solution.

The same is true for the monitoring solution: if an operator has one set of tools for monitoring the RF signal from satellite, another set of tools for IP, and yet another for OTT, it's very difficult for engineering staff to span all of these domains and get a consistent understanding of what is happening and why. Silos of expertise tend to develop around each set of tools, at the expense of the big picture necessary to trace and rectify faults. We could draw an analogy here: imagine a medical profession in which there are only specialists and no GPs. It would be much more difficult to treat the patient rather than the symptoms; someone experiencing suddenly deteriorating eyesight might just be prescribed bifocals and never be tested for diabetes, for example.
So the more integrated the OTT services are with the overall operation, the better. And the more integrated and silo-free the monitoring system is, the better it will help operators to identify the symptoms, pinpoint the real cause (even if it's quite remote from where the symptoms are occurring), and resolve the problem. This brings us to a related question, which concerns the way in which the heterogeneous technologies involved in delivering OTT can be made to seem integrated and transparent to the operator's staff, who may be from either a broadcast or an IT background:
How can our staff monitor technologies they aren't fully familiar with?
The last thing an operator needs is an engineering staff divided into separate camps huddled around separate racks of gear, one muttering "it's a broadcast problem" while the other says "it's an IT problem". This is more likely to happen when the operator tries to build a monitoring system from a collection of broadcast-specific tools and IT-specific tools.
Even if a motley collection of tools is gathered together under the umbrella of a central NMS collating all the alarms, whenever a problem occurs engineers have to dive deep into the specific tool silo where the problem has surfaced in order to get a more detailed picture, and this detail can't easily be related to any other part of the delivery chain. The overview is lost, and with it the ability to understand and relate symptoms to root causes (which may lie elsewhere, in a different technology).
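
To make the contrast concrete, here is a minimal sketch in Python of the kind of cross-domain correlation an integrated monitoring system performs. The alarm fields and the simple time-window rule are illustrative assumptions for this article, not any particular vendor's data model.

from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative alarm model: a real NMS record would carry far more metadata.
@dataclass
class Alarm:
    domain: str       # e.g. "RF", "IP", "OTT"
    source: str       # probe or device that raised the alarm
    channel: str      # service the alarm relates to
    time: datetime
    message: str

def correlate(alarms, window_seconds=30):
    """Group alarms on the same channel that occur close together in time,
    so a downstream OTT symptom is shown alongside the upstream RF or IP
    event that preceded it, rather than in three separate tool silos."""
    window = timedelta(seconds=window_seconds)
    incidents = []
    for alarm in sorted(alarms, key=lambda a: a.time):
        for incident in incidents:
            if (incident[0].channel == alarm.channel
                    and alarm.time - incident[-1].time <= window):
                incident.append(alarm)
                break
        else:
            incidents.append([alarm])
    return incidents

now = datetime.now()
for group in correlate([
        Alarm("OTT", "edge-probe", "News HD", now + timedelta(seconds=12),
              "HLS segments returning 404"),
        Alarm("RF", "sat-rx-1", "News HD", now,
              "Low carrier-to-noise on transponder"),
        Alarm("IP", "core-probe", "News HD", now + timedelta(seconds=5),
              "Continuity count errors"),
]):
    print("Incident on", group[0].channel,
          "- probable root cause:", group[0].message)

Grouped this way, the earliest alarm in an incident (here the RF event) points towards the probable root cause, while the later IP and OTT alarms are revealed as its symptoms.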
The main challenge for manufacturers in the test and monitoring sector is to provide toolsets that offer homogeneity, allowing users to navigate across the boundaries of the different technologies without having to learn new skills when they do so. To draw another analogy, we are used to driving across borders now with hardly any inconvenience: we may not speak the local language, but we don't have to learn a whole new set of road signs or exchange our steering wheel for a joystick. Our satnav guides us to the destination we are aiming for, even if it's in Athens or Ingolstadt and not our home town. We are travelling in a different territory, but we still have enough familiar tools to arrive safely. A monitoring system should do something similar, and allow staff from both broadcast and IT backgrounds to cross borders with a degree of confidence, knowing that they can follow the same road and interpret the landscape with familiar conventions.

It's extremely important, therefore, to provide a very visual monitoring interface that helps users grasp the overview, identify trends, and understand easily what the different chunks of data are and what they represent. Instead of a very abstract window on the data (screens full of numbers and one-line errors), the interface should give the user a clear and easy sense of how the delivery chain behaves, almost helping the user to develop a feel for how it works, without being overwhelming or baffling. Even when it's necessary to delve deep into detail, this is much easier to interpret when it's presented in a consistent, familiar and graphical form, and always correlated to the overview, so that symptoms and cause can be quickly linked.
Where should we monitor our OTT operation?
This is a question to which there is an apparently simple answer: monitoring data should be gathered from every point in the chain where the condition of the stream could potentially be affected. The answer is only apparently simple, because not all monitoring solutions are able to do this. To provide complete assurance of service quality, a monitoring system should be able to give you data from your content ingest point, your origin servers, before entry to the CDN and after the CDN, and from all of the end-user devices. An advanced, highly integrated monitoring solution tells the operator what's happening at the origin servers, how well each CDN is performing, what the 3G or 4G performance is like, and correlates all the data into a coherent whole, so that QoS and QoE are all part of the picture.
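
As a rough illustration of multi-point measurement, the following Python sketch probes the same HLS manifest at two hypothetical points in the chain and records availability and response time. The URLs and probe points are placeholders, not a real operator's infrastructure; a production system would also probe ingest, multiple CDN edges, and player-side agents on end-user devices.

import time
import urllib.request

# Hypothetical probe points: placeholder URLs for illustration only.
PROBE_POINTS = {
    "origin":   "https://origin.example.com/live/channel1/index.m3u8",
    "cdn-edge": "https://edge.example-cdn.com/live/channel1/index.m3u8",
}

def probe(name, url, timeout=5):
    """Fetch an HLS manifest and record availability and response time."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            elapsed_ms = (time.monotonic() - start) * 1000
            # A healthy HLS manifest always begins with the #EXTM3U tag.
            return {"point": name,
                    "ok": body.startswith("#EXTM3U"),
                    "ms": round(elapsed_ms, 1)}
    except Exception as exc:
        return {"point": name, "ok": False, "error": str(exc)}

for name, url in PROBE_POINTS.items():
    print(probe(name, url))

Comparing results for the same channel at origin and edge immediately shows whether a fault lies inside the CDN or further upstream; it is this kind of correlation across measurement points that turns isolated readings into a coherent picture of QoS and QoE.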
