QoS - It's time to take a more objective approach

Author: Simen Frostad

Published 1st December 2013
Issue 83 - November 2013

Any business that fails to satisfy its customers is almost certain to fail where there is significant competition. Digital media businesses are very definitely competing in a market where the customer can switch quickly if not satisfied, so it's vital for providers to know when their customers are getting a satisfactory service and when they aren't.
According to conventional wisdom, QoS (Quality of Service) monitoring provides diagnostic information on the devices in the delivery chain and the transport streams, while the QoE (Quality of Experience) element is intended to provide a user-centric evaluation of the service, pointing to any faults that have an impact on the user's experience.
The reality, though, is that the conventional approach to QoE monitoring is based on some questionable assumptions, and has its origins outside the television and digital media industry. To monitor the experience of the user, so the thinking goes, you have to ask users to provide a subjective assessment. To gather the data from subjective assessments into a usable form, a methodology called MOS (Mean Opinion Score) was developed, specified in recommendation P.800 of the ITU-T (the International Telecommunication Union's standardisation sector). It was deployed when telcos were developing new infrastructure or, more recently, services such as VoIP. A panel of expert assessors would listen to the service and note their evaluation of quality while key phrases such as "You will have to be very quiet", "There was nothing to be seen", and "They worshipped wooden idols" were transmitted over the line. The experts would record their scores, rating any impairments on a scale of 1 to 5, from "imperceptible" to "very annoying". The scores would then be averaged and weighted using the kind of statistical manipulations common in the social sciences and market research.

The same basic concept of QoE monitoring later took hold in digital media monitoring, and MOS spawned VideoMOS and AudioMOS criteria, bringing opinion-scoring methodologies into the media arena. But media providers need to know what real users are experiencing, rather than panels of expert assessors. So MOS evaluation became robotized, and algorithms were developed to simulate those subjective reactions from "imperceptible" to "very annoying". The robots watch the service and the data is fed into the algorithmic engine, which attempts to simulate the subjective reaction of the viewer, with scores for factors such as jerkiness, blurriness and blockiness.
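To make the arithmetic concrete, here is a minimal sketch of how a Mean Opinion Score is derived from panel ratings. The ratings below are invented for illustration; real P.800 trials also apply weightings and statistical screening that are omitted here.

```python
# Illustrative sketch of a Mean Opinion Score calculation.
# ITU-T P.800 uses a 5-point scale; these panel ratings are invented.

from statistics import mean

# Hypothetical impairment ratings from a panel of assessors:
# 5 = imperceptible ... 1 = very annoying
panel_ratings = [5, 4, 4, 3, 5, 4, 2, 4]

# The MOS is simply the mean of the individual opinion scores.
mos = mean(panel_ratings)
print(f"MOS: {mos:.2f}")  # 3.88 on the 1-5 scale
```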
This might appear sophisticated, but it's really a tortuous evolution of an opinion-gathering methodology developed for an entirely different industry, and, as the word "opinion" in the term MOS suggests, one based on the subjective reactions of human assessors. Subjectivity is complicated: a human viewer watching a top-quality 1080i transmission of the Super Bowl, followed on the same channel by the 1940s B&W movie Casablanca, might be fully satisfied. Yet in a robotized QoE subjective assessment based on MOS criteria, the Super Bowl would score highly, while Casablanca would be marked way down for blurriness, scratches and other artifacts, and for its lack of resolution and color. Distorted by results like this, the data from this kind of QoE system becomes less reliable for the provider.
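The distortion is easy to reproduce with a toy scorer. The sketch below is not any vendor's actual algorithm; the penalty weights and metric values are invented purely to show how artifact-driven scoring marks down content the viewer may be entirely happy with.

```python
# Toy illustration of artifact-based "robotized" MOS scoring.
# Weights and metric values are invented for illustration only.

def naive_video_mos(blurriness, blockiness, jerkiness):
    """Score 1-5: start from a perfect 5 and subtract artifact penalties."""
    return max(1.0, 5.0 - 2.0 * blurriness - 1.5 * blockiness - 1.5 * jerkiness)

# Pristine 1080i sports broadcast: few measurable artifacts, high score.
print(naive_video_mos(blurriness=0.1, blockiness=0.05, jerkiness=0.0))  # ~4.7

# 1940s B&W film: soft focus and film grain read as "blur", so the
# algorithm marks it down even though the viewer is fully satisfied.
print(naive_video_mos(blurriness=1.2, blockiness=0.1, jerkiness=0.3))   # 2.0
```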
It's time to take a more objective approach. And it can be done, because the technology now exists to monitor the quality of the service continuously and directly from the user's device, whether connected TV, tablet or smartphone. The Objective QoE solution launched by Bridge Technologies at IBC 2013 will collect data from the entire delivery chain, from the head end right up to and including the viewer's devices. So, rather than have a robot watching the service and simulating the response of a mythical average viewer, media providers will be able to get real data direct from the viewing devices. And what makes this new approach objective is that the QoE evaluation is made from empirical factors that lower the quality of digital media services, such as lost packets, timeouts, and buffering. If you can accurately and continuously detect that a viewer is experiencing any of these errors, you no longer need to confect an algorithmic opinion about it, because you have a completely trustworthy, objective set of data on which to assess, in real time, the quality of each user's experience.
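As an illustration of what device-side reporting might look like, here is a minimal sketch of a client that counts empirical impairment events and posts them to a collector. The class name, endpoint and field names are hypothetical and do not represent Bridge Technologies' actual API.

```python
# Hypothetical sketch of device-side objective QoE reporting: the
# player reports empirical impairment events (lost packets, timeouts,
# buffering) rather than a simulated opinion. All names and the
# collector endpoint are assumptions for illustration.

import json
import time
from urllib import request

class QoeReporter:
    """Counts impairment events on the viewing device and posts them."""

    def __init__(self, collector_url, device_id):
        self.collector_url = collector_url
        self.device_id = device_id
        self.counters = {"lost_packets": 0, "timeouts": 0, "buffering_events": 0}

    def record(self, event):
        # Called by the player whenever an impairment is detected.
        self.counters[event] += 1

    def flush(self):
        # Send the raw, objective counters; no opinion model is needed.
        payload = json.dumps({
            "device_id": self.device_id,
            "timestamp": int(time.time()),
            "metrics": self.counters,
        }).encode("utf-8")
        req = request.Request(self.collector_url, data=payload,
                              headers={"Content-Type": "application/json"})
        request.urlopen(req)
        self.counters = dict.fromkeys(self.counters, 0)  # reset after upload

# Usage (hypothetical endpoint and device id):
#   reporter = QoeReporter("https://collector.example/qoe/report", "stb-0042")
#   reporter.record("buffering_events")
#   reporter.flush()
```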

