A couple of years ago, Hermann Popp from ARRI came to me to talk about putting MXF inside ST 2110. My initial takeaway was that his proposal felt like exactly the right way to solve the problem he presented, but that it would seem completely counter-intuitive to IP engineering purists, who would push back on the concept of double-wrapping the content.
It took me a couple of years to figure out that the problem being solved was not actually the problem initially described to me. So let's wind back to 2017, when the topic was transporting camera metadata over IP and getting it ready for use in live and offline systems, using standard products. In other words: let's not invent anything new, let's just use what's there. We already had SMPTE RDD 18, which is a way to map camera metadata into an MXF file. MXF was designed to be stream-able from day one, so it can easily be time-stamped, streamed over any IP network and recovered at the far end. The SMPTE ST 2110 concrete mapping isn't there yet, but for a proof of concept that doesn't matter.
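To make the double-wrap idea concrete, here is a minimal sketch of the transport step: a metadata payload (standing in for an MXF-wrapped RDD 18 blob) is prefixed with a timestamp and length before being sent as an IP datagram, and the original bytes are recovered unchanged at the far end. The field layout and function names are invented for illustration; they are not from any SMPTE mapping.

```python
import json
import struct

def wrap_for_ip(payload: bytes, timestamp_ns: int) -> bytes:
    """Prefix the wrapped metadata with a 64-bit timestamp and a length."""
    return struct.pack("!QI", timestamp_ns, len(payload)) + payload

def unwrap_from_ip(datagram: bytes) -> tuple:
    """Recover the timestamp and the original wrapped payload."""
    timestamp_ns, length = struct.unpack("!QI", datagram[:12])
    return timestamp_ns, datagram[12:12 + length]

# Round trip: the payload that comes out is byte-identical to what went in,
# which is the whole point -- the inner wrapper survives the IP hop untouched.
metadata = json.dumps({"lens_focal_mm": 32.0, "iris_t_stop": 2.8}).encode()
datagram = wrap_for_ip(metadata, 1_500_000_000)
ts, recovered = unwrap_from_ip(datagram)
assert recovered == metadata and ts == 1_500_000_000
```

The inner payload is treated as opaque bytes, which is what lets the same wrapped metadata pass through the IP leg and into a file system without re-interpretation.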
So we had a plan, and we built a test system and showed it at IBC. We took an ARRI camera that was able to create the real-time metadata. We took a Trackmen system that generated real-time camera position and tracking data. Nablet handled all the MXF wrapping and unwrapping. As you can see from the photos, we managed to do real-time graphics rendering after the metadata was extracted from the IP and from the MXF. We also managed to pull the data into an Avid Media Composer via an MXF plugin. The key takeaway here is that it's exactly the same MXF.
When you consider the bigger picture of generic time-based metadata, you realise that there is a lot of it and very little of it is standardised. I was talking to a company that specialised in taking metadata from the gimbals of trucks towing trailers and transforming that data into something that could be used in a VFX system. Life would be a lot easier if there were a standard way to get that data into IP and then into a file system.
The major cost in the system is actually the engineering involved in mapping the broad range of specialist metadata types into something standard. If you only have to do that once, and not multiple times, then you've reduced the barrier to handling this sort of metadata at scale.
Metadata mapped as JSON or XML can use existing MXF standards to be carried in a time-accurate fashion. Mapping the MXF in a generic way into IP means there are relatively few mappings to test, further reducing system cost. In the middle, the code that unwraps the metadata from the MXF can be written as plugins around a live / offline framework. Finally, having the metadata in MXF with deterministic mappings allows it to be carried all the way to distribution if needed.
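The "map once, then reuse everywhere" idea can be sketched as a small plugin registry: each specialist metadata source registers a single mapping function that converts its native records into a common, timestamped JSON shape, ready for wrapping. The registry, the source names and the common schema here are all invented for illustration, not taken from any existing framework.

```python
import json

# Registry of per-source mapping plugins. Each source type is mapped
# into the common shape exactly once; everything downstream (MXF
# wrapping, IP transport, file storage) sees the same record format.
MAPPERS = {}

def register_mapper(source_type):
    """Decorator that registers a one-time mapping for a metadata source."""
    def deco(fn):
        MAPPERS[source_type] = fn
        return fn
    return deco

@register_mapper("camera")
def map_camera(raw):
    # Native camera fields -> common, time-stamped record (hypothetical fields).
    return {"t": raw["frame_time"], "kind": "camera",
            "data": {"focal_mm": raw["focal"], "t_stop": raw["iris"]}}

@register_mapper("tracking")
def map_tracking(raw):
    # Native tracking fields -> the same common record shape.
    return {"t": raw["ts"], "kind": "tracking",
            "data": {"pan": raw["pan"], "tilt": raw["tilt"]}}

def to_common_json(source_type, raw):
    """Run the registered mapper and serialise the result for wrapping."""
    return json.dumps(MAPPERS[source_type](raw))

record = to_common_json("camera", {"frame_time": 0.04, "focal": 32.0, "iris": 2.8})
```

Adding a new metadata source (the truck-gimbal data, say) then means writing one mapper function, rather than re-engineering the transport and storage path each time.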
So while double-wrapping the metadata (in MXF and in IP) seems crazy, it's actually quite an elegant way to get the metadata across the maximum number of processes in the value chain for the minimum amount of work. If you'd like more information, there is a small website at https://mrmxf.com/metastresm.