Parting the Cloud

Author: Bob Pank

Published 1st October 2012


Every so often a trend looms over the broadcast horizon that gathers pace so quickly it soon becomes the buzzword at every launch and trade show. A few years ago it was MAM and more recently stereoscopic; now the big buzz in town is the cloud. But despite its recent rise to fame in the broadcast sector, there is still a lot of confusion about how manufacturers can develop and manage cloud computing technologies. The change in workflow appears to be driven by metadata and graphics, but as more content is created and consumed, the requirement for longer-term, more generic repositories increases.
Segmenting the archive in the cloud
Not so long ago, a broadcast archive was bespoke. It could only hold certain types of files, and the archive and restore processes were hard coded between each system connected to it. It was not uncommon for only one system (e.g. playout automation) to be connected to the archive. More recently, a distributed approach has been adopted. Those building an archive for playout are over-specifying the hardware required, not just to cope with normal organic growth, but also with a view to opening up the archive to other departments. One reason for this shift is the improved content-organisation mechanisms now available within archive management software. Virtual partitions can be created within different tiers of storage, each assigned to different systems. For example, it is now relatively common for one archive to service playout transmission, production, news and graphics. Each department has its own specific workflows and tools, but with careful planning and versatile interfaces all of this can be achieved under the same private cloud.
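As a rough illustration of that segmentation, the sketch below models archive groups as virtual partitions mapped to storage tiers and to the departmental systems allowed to use them. The class names, tier labels and quotas are hypothetical and not taken from any particular archive management product.

```python
# A minimal sketch of virtual partitions ("archive groups") inside one archive.
# All names, tiers and quotas are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ArchiveGroup:
    name: str            # department or workflow the partition serves
    tier: str            # storage tier backing the group, e.g. "disk" or "lto"
    quota_tb: float      # soft quota used for capacity planning
    systems: list = field(default_factory=list)  # systems allowed to read/write

# One physical archive, segmented per department under a single private cloud.
archive_groups = [
    ArchiveGroup("playout",    tier="disk", quota_tb=200, systems=["playout-automation"]),
    ArchiveGroup("production", tier="lto",  quota_tb=800, systems=["mam", "edit-suites"]),
    ArchiveGroup("news",       tier="disk", quota_tb=50,  systems=["news-rundown"]),
    ArchiveGroup("graphics",   tier="disk", quota_tb=20,  systems=["graphics-workstations"]),
]

def groups_for_system(system: str) -> list[str]:
    """Return the archive groups a given connected system may use."""
    return [g.name for g in archive_groups if system in g.systems]

print(groups_for_system("mam"))  # ['production']
```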
Physical storage media have also become more diverse. Today’s choice of disk storage is massive, and tape libraries are highly scalable. However, none of these individual storage solutions alone can form a true broadcast archive that can grow and be widely shared. This remit falls firmly on the shoulders of the archive management solution, commonly referred to as middleware. Without this layer a cloud for long-term storage simply cannot form.
Why form an archive cloud?
There are significant advantages to adding a tape library to archive disk storage, whether locally or remotely, to provide system resilience. However, this only works when heterogeneous storage can be presented to the outside world as one virtual storage entity. This is where cloud storage comes in. With the application of simple web-style APIs, content can be moved into a generic ‘archive’ without the controlling system having to know whether it is going to disk or tape, locally or remotely. It is still important that the API supports the concept of specific storage locations within the archive, but these archive groups can be used as virtual containers to segregate different content, and different migration rules can be applied to them. The value of the archive management system lies in the automatic migration of content between storage tiers.
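A minimal sketch of what such a web-style call might look like from the controlling system's side is shown below. The endpoint, JSON field names and archive-group parameter are assumptions for illustration; a real middleware product defines its own API.

```python
# Sketch of a controlling system pushing content into the archive cloud.
# The URL and payload fields are hypothetical, not a real product's API.
import requests

def archive_asset(asset_path: str, asset_id: str, archive_group: str) -> str:
    """Ask the archive middleware to ingest an asset into a named archive group.

    The caller never states whether the content lands on disk or tape, locally
    or remotely; the middleware's migration rules decide that per group.
    """
    response = requests.post(
        "https://archive.example.com/api/v1/assets",   # hypothetical endpoint
        json={
            "asset_id": asset_id,
            "source_path": asset_path,
            "archive_group": archive_group,  # virtual container, not a physical location
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["job_id"]         # migration is handled asynchronously

# Example: playout automation archives a finished programme.
# job = archive_asset("/mnt/playout/PRG12345.mxf", "PRG12345", "playout")
```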
Bandwidth
Not all facilities own dark fibre capable of faster-than-real-time transfers right up to the doorstep. Obviously, for hosted services (the remote cloud), video tapes can be transferred. However, this still requires the service provider to have the infrastructure and skillset to ingest and tag the content, and it introduces another level of complexity in ensuring that remotely ingested content can be accessed through its corresponding metadata.
From a disaster recovery perspective this approach duplicates effort. However, the technology does exist to move bulk quantities of content plus metadata as files relatively inexpensively. One such technology is LTFS (Linear Tape File System). This standardised approach to writing data to LTO-5 (and later) tape provides a viable method of transferring bulk content into and out of a storage cloud. For example, a playout facility migrates content into the archive on site as normal, i.e. ingesting from video tape and performing QC before moving it to online playout storage or copying it directly to the archive. Once in the conventional archive domain, migration rules copy the content to an LTFS ‘export’ tape (or group of tapes, where required). Any metadata is archived along with the content. Alternatively, MXF wrappers like AS03 or AS11 (AS02 etc. for production-related workflows) can be used to encapsulate the content and its metadata, ensuring safe passage of all necessary information. Once full, the LTFS export tape is removed from the library and posted to the remote cloud storage facility.
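Assuming the export tape is mounted as an ordinary filesystem (which is what LTFS provides), the migration step could be as simple as the sketch below. The mount point, directory layout and sidecar naming are hypothetical and only illustrate the idea of keeping essence and metadata together on the same tape.

```python
# Sketch of an LTFS "export" step, assuming the tape is mounted at a path
# such as /mnt/ltfs_export. Paths and naming conventions are illustrative.
import shutil
from pathlib import Path

LTFS_MOUNT = Path("/mnt/ltfs_export")   # hypothetical LTFS mount point

def export_to_ltfs(essence_file: Path, metadata_file: Path, asset_id: str) -> Path:
    """Copy an asset's essence (e.g. MXF) plus its metadata sidecar onto the
    LTFS export tape so both travel together to the remote cloud facility."""
    dest_dir = LTFS_MOUNT / asset_id
    dest_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(essence_file, dest_dir / essence_file.name)
    shutil.copy2(metadata_file, dest_dir / metadata_file.name)
    return dest_dir

# Example: a migration rule copies a programme and its sidecar to the export tape.
# export_to_ltfs(Path("/archive/PRG12345.mxf"), Path("/archive/PRG12345.xml"), "PRG12345")
```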
On arrival in the ‘cloud’, the LTFS export tape is inserted into the library. The archive management system scans the index at the header of the tape and populates its database with the necessary information about the content. Now that the content is available, any system connected to the archive cloud can query the archive database to find out what is new (or receive ‘push’ notifications of new arrivals) and access the content. This approach ensures that the systems closer to the business process rules remain in control of the content.
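The registration step on arrival might look something like the sketch below: walking the mounted tape and recording what it holds in a database that connected systems can query or subscribe to. The SQLite schema and status field are assumptions for illustration, not the behaviour of any specific archive management product.

```python
# Sketch of registering a received LTFS export tape in the archive database.
# Mount point and schema are illustrative assumptions.
import sqlite3
from pathlib import Path

LTFS_MOUNT = Path("/mnt/ltfs_import")    # hypothetical mount of the received tape

def register_tape_contents(db_path: str = "archive.db") -> int:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS assets "
        "(asset_id TEXT, file_name TEXT, size_bytes INTEGER, status TEXT)"
    )
    new_rows = 0
    # Each directory on the tape is assumed to hold one asset plus its metadata.
    for asset_dir in LTFS_MOUNT.iterdir():
        if not asset_dir.is_dir():
            continue
        for f in asset_dir.iterdir():
            conn.execute(
                "INSERT INTO assets VALUES (?, ?, ?, ?)",
                (asset_dir.name, f.name, f.stat().st_size, "new"),
            )
            new_rows += 1
    conn.commit()
    conn.close()
    return new_rows   # downstream systems can poll for rows with status = 'new'

# print(f"Registered {register_tape_contents()} new files from the export tape")
```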

