Powerful New Levels of Discovery

by Drew Lanham

Issue 90 - June 2014

For media creators, owners, and distributors, the amount of digital media in their libraries never stops growing as they continue to add new content every day. Those who don't organize and manage their data well are not getting the maximum value out of their archives. That's why they need easy, accurate, and cost-effective tools to discover, repurpose, and monetize their media.
MAM and production asset management systems hold a lot of file-based metadata, with attributes such as the date the footage was shot, but there's usually very little descriptive metadata about the content itself, making it difficult to find an asset quickly and accurately. To make matters worse, manual logging and transcription are not only time-consuming but prohibitively expensive, and they yield limited detail. Transcripts are yet another asset to manage and are not directly tied to the media they describe. There are software applications available that can semi-automatically tag assets, but they can be expensive and may not provide enough detail to be useful. Image recognition might reveal who's in the media, but not what they're talking about. And speech-to-text lacks the performance and accuracy to be useful, even on the clearest speech.
Fortunately, there's a better way: automated phonetic-based dialogue search. Dialogue is the most abundant source of metadata in media. It is present in almost all program types and often provides a more detailed, precise description of the content than any other metadata. As a result, phonetic-based dialogue search delivers the richest, most relevant results at a fraction of the cost and time the other methods require to reach the same level of metadata granularity. It's akin to Google for audio: users enter keywords and phrases that locate the specific moments in the media, allowing them to take further action.

How It Works
In the phonetic-based search method, an application scans all of the audio in a media library and creates an index of the constituent sounds, called phonemes, hundreds of times faster than real time. Once the assets are indexed, they are instantly searchable, based not on the file properties or the information that has been typed into the metadata fields, but on what is actually spoken on the audio tracks. That means traditional metadata, descriptive or otherwise, is optional once the index is created. (Even so, the system can still leverage any existing metadata, in combination with the dialogue, to further refine and improve the search.) As a result, the phonetic-based search method dramatically reduces logging and transcription costs, speeds production, and, within hours of new material being added to the management system, uncovers valuable assets that traditional metadata could never expose.
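To make the idea concrete, here is a minimal, hypothetical sketch of how a phoneme index might be queried. The phoneme sequences, timecodes, and query lookup table are invented example data standing in for a real recogniser's output, and commercial products use far more sophisticated matching; this only illustrates the principle of searching on sounds rather than typed metadata.

```python
# Illustrative sketch of phonetic indexing and search (not a real product's method).
# Assumes an upstream recogniser has already turned each clip's audio into a
# time-stamped phoneme sequence; that output is faked here with example data
# so the sketch runs on its own.

from dataclasses import dataclass

@dataclass
class Phoneme:
    symbol: str   # e.g. "B", "AH", "JH"
    start: float  # seconds into the clip

# Fake recogniser output for two clips.
INDEX = {
    "interview_01.mov": [Phoneme("B", 12.0), Phoneme("AH", 12.1), Phoneme("JH", 12.2),
                         Phoneme("IH", 12.3), Phoneme("T", 12.4)],   # "budget"
    "press_conf.mov":   [Phoneme("M", 3.0), Phoneme("ER", 3.1), Phoneme("JH", 3.2),
                         Phoneme("ER", 3.3)],                        # "merger"
}

# Hypothetical lookup table standing in for a real grapheme-to-phoneme model.
QUERY_PHONEMES = {"budget": ["B", "AH", "JH", "IH", "T"],
                  "merger": ["M", "ER", "JH", "ER"]}

def search(query: str):
    """Return (clip, start time) pairs where the query's phoneme sequence occurs."""
    target = QUERY_PHONEMES[query.lower()]
    hits = []
    for clip, phones in INDEX.items():
        symbols = [p.symbol for p in phones]
        for i in range(len(symbols) - len(target) + 1):
            if symbols[i:i + len(target)] == target:
                hits.append((clip, phones[i].start))
    return hits

print(search("merger"))   # -> [('press_conf.mov', 3.0)]
```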
Phonetic search solutions integrate directly with MAMs, file systems, and editing applications, and no training is required. Users simply type any combination of words or phrases into the search interface, and it quickly finds every media clip in the system where those words or phrases are spoken. Users can then audition the hits within each clip in an integrated media player, without having to scroll through numerous clips to find a specific asset. When the correct results are found, they are exported as timecoded markers to MAMs and video-editing applications.
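As a rough illustration of that export step, the following sketch turns search hits into a simple timecoded marker list. The CSV layout is invented for this example; real products export to each MAM's or editing application's native marker format.

```python
# Sketch: converting search hits into timecoded markers (hypothetical CSV layout).

import csv

def to_timecode(seconds: float, fps: int = 25) -> str:
    """Convert seconds to an HH:MM:SS:FF timecode string."""
    total_frames = int(round(seconds * fps))
    frames = total_frames % fps
    s = total_frames // fps
    return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}:{frames:02d}"

def export_markers(hits, term, path="markers.csv", fps=25):
    """Write one marker row per hit: clip name, timecode, and the search term."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["clip", "timecode", "search term"])
        for clip, start in hits:
            writer.writerow([clip, to_timecode(start, fps), term])

export_markers([("press_conf.mov", 3.0)], "merger")
```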
Users can also deploy phonetic search technology from within their own production toolset via APIs. For example, Adobe Premiere Pro users can access phonetic-based search functionality directly from Premiere Pro's interface without having to leave the application. By simply adding an extension to their workspace, users can search, preview the results in a video player, and then drag and drop a clip into their project, with the search terms automatically displayed as markers on their timelines.
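How a panel extension or script might talk to such a service is sketched below. The HTTP endpoint, query parameter, and response fields are entirely hypothetical; a real integration would use the vendor's documented API and the host application's extension framework.

```python
# Hypothetical sketch of calling a phonetic search service over HTTP from a
# script or panel inside an editing tool. Endpoint and response shape are invented.

import json
import urllib.parse
import urllib.request

def phonetic_search(host: str, query: str):
    """Query a (hypothetical) search endpoint and return its parsed JSON hit list."""
    url = f"http://{host}/search?" + urllib.parse.urlencode({"q": query})
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

# Example (assumes a server exists at search-server.local):
# for hit in phonetic_search("search-server.local", "merger"):
#     print(hit["clip"], hit["timecode"])   # ready to attach as sequence markers
```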
Intended Applications
Versatile, affordable, accurate, and lightning-fast, phonetic-based search tools can change the way media operations discover and use their assets. They can be applied broadly to any market that creates, owns, or distributes content, including film and entertainment, sports, news, education, corporate, government, financial, houses of worship, and nonprofits. As media-driven organizations build ever larger stores of video content, or mine their archives, the ability to locate specific pieces quickly and efficiently will play a huge role in an organization's competitive advantage and bottom line. After all, if you can't find it, you can't monetize it. Creating a searchable index that does not rely on traditional logging or transcription metadata, such as the one produced by phonetic-based searching, can be the key to unlocking a media archive's potential.

