CLASS - LEAN Mean Versioning Machine

Author: Bruce Devlin

Published 1st August 2015 (Issue 103, July 2015)

Today there are numerous ways to access and consume media, whether through UHD TVs, smartphones, tablets or other screens, devices and services. With so many outlets to feed, content owners and broadcasters are working incessantly to deliver top-quality content at record speed. By reversioning, repackaging and repurposing media, they can meet, and stay ahead of, consumer demand.
The problem is redistributing content without degrading its quality. Media is transcoded at a number of touch points in the production and distribution chain, and content typically passes through more encode/decode generations than most broadcast codecs were designed to tolerate. From origination to final destination, the number of transcodes can be as high as twenty, and media cannot withstand that many transformations without visible loss.
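
The generational loss described above can be illustrated with a toy model. The sketch below is purely illustrative (no real codec works this simply): each "transcode" is modeled as quantization to a step size, and chaining twenty generations through two slightly mismatched quantizers accumulates more error than a single pass.

```python
import random

def lossy_transcode(samples, step):
    # Model one encode/decode generation as quantization to a step size.
    # Real codecs are far more complex; this only illustrates how loss
    # accumulates when successive generations use mismatched quantizers.
    return [round(s / step) * step for s in samples]

random.seed(1)
source = [random.uniform(0.0, 255.0) for _ in range(10_000)]

# One generation of loss.
single = lossy_transcode(source, 4.0)
single_mae = sum(abs(a - b) for a, b in zip(source, single)) / len(source)

# Twenty generations, alternating between two slightly different "codecs".
signal = source
for generation in range(20):
    signal = lossy_transcode(signal, 4.0 if generation % 2 == 0 else 5.0)
chained_mae = sum(abs(a - b) for a, b in zip(source, signal)) / len(source)

print(f"1 generation:   mean absolute error {single_mae:.2f}")
print(f"20 generations: mean absolute error {chained_mae:.2f}")
```

The point is not the specific numbers but the trend: every additional generation through a mismatched quantizer moves the output further from the source, which is why cutting the number of transcodes matters.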
When it comes down to it, the companies who shoot or produce content aren't necessarily those who aggregate it, and those who aggregate content are not always the ones who create the various accompanying assets (trailers, promos and so on). At every step, the file may be encoded, decoded and re-encoded several times. Add in further factors, such as content destined for overseas distribution or arriving from foreign producers and broadcasters, and it may go through still more transcode steps before final output. The cumulative impact on the technical and subjective quality of what the end user eventually sees is significant. Media processing is also CPU (or GPU) intensive, so brute-forcing the problem is expensive.

To improve quality while minimizing expense, content owners are looking for ways to reduce the number of times media is processed, while ensuring that any processing that is necessary is of the highest quality. One method is to track where all of the raw components are, so that packages and versions can be assembled virtually and stored as metadata. The source media stays in its original state with no loss of quality, and files are only re-encoded at the point of delivery. This eliminates wasted processing and ensures that effort is spent only where it adds value: the LEAN, or just-in-time, methodology.
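
A version assembled virtually and stored as metadata might look like the following minimal sketch. The names here (`Segment`, `render_at_delivery`, the `.mxf` filenames) are hypothetical, not taken from any real MAM product; the idea is simply that a "version" is an ordered list of pointers into untouched source files, and media is only touched at delivery time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    # Hypothetical structure: a pointer into an untouched master file.
    source_file: str   # mezzanine/master file, never modified upstream
    start_s: float     # in point, seconds
    end_s: float       # out point, seconds

# A "version" is pure metadata: an ordered list of references, not media.
promo_version = [
    Segment("master_episode.mxf", 120.0, 150.0),
    Segment("master_titles.mxf", 0.0, 5.0),
]

def render_at_delivery(version, codec):
    # Only here, at the point of delivery, would media be re-encoded.
    # Everything upstream stays metadata, so the sources remain pristine.
    duration = sum(seg.end_s - seg.start_s for seg in version)
    return {
        "codec": codec,
        "duration_s": duration,
        "sources": [seg.source_file for seg in version],
    }

package = render_at_delivery(promo_version, codec="h264")
print(package)
```

Because the version is just metadata until delivery, any number of packages can be described, reviewed and revised without a single intermediate encode, which is the just-in-time behavior the article describes.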
This process insulates and protects operators from media manipulation and processing complexities, increasing the time for creative decisions and putting the human, rather than the machine, back into the business process.
Tracking the metadata, and knowing its origin, is another key factor in automating media processing and retaining a MAM-driven workflow. As new resolutions, frame rates and codecs emerge with the developing market, the demand for content drives up the number and variety of acquisition devices that take advantage of them. To preserve the best end quality, content owners should understand their media in as much depth as possible so they can adapt to new technology. Only then can the correct media-processing path be chosen, because there is no one-size-fits-all approach to pushing out content.
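
Choosing a processing path from source and target metadata could be sketched as below. The rules and field names (`codec`, `height`, `fps`) are illustrative placeholders, not a real product's logic; the point is that the right path falls out of comparing what the source actually is against what the delivery target needs.

```python
def choose_processing_path(source, target):
    # Illustrative decision logic: build the minimal list of processing
    # steps needed to turn the source metadata into the target metadata.
    steps = []
    if source["codec"] != target["codec"]:
        steps.append(f"transcode {source['codec']} -> {target['codec']}")
    if source["height"] > target["height"]:
        steps.append(f"downscale {source['height']}p -> {target['height']}p")
    elif source["height"] < target["height"]:
        steps.append(f"upscale {source['height']}p -> {target['height']}p")
    if source["fps"] != target["fps"]:
        steps.append(f"frame-rate convert {source['fps']} -> {target['fps']}")
    # Identical source and target means no processing at all.
    return steps or ["passthrough"]

uhd_camera = {"codec": "prores", "height": 2160, "fps": 50}
hd_delivery = {"codec": "h264", "height": 1080, "fps": 25}
path = choose_processing_path(uhd_camera, hd_delivery)
print(path)
```

Note that a source already matching its target gets a passthrough, i.e. zero generations of loss, which is exactly the outcome metadata-driven automation is trying to maximize.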
As acquisition formats are developed and technology offerings continue to expand, it is more important than ever to have a good asset management workflow. This is the only way for content owners to stay on top of the trends and manage any obstacles behind the scenes without adding additional stress to a tricky system.


© KitPlus (tv-bay limited). All trademarks recognised. Reproduction of this content is strictly prohibited without written consent.