I’m writing these thoughts on the morning of 20 December. From my desk in Tunbridge Wells I am normally gently aware of air traffic at five thousand feet, lining up to land at Gatwick Airport. Today it is silent: since 9.00 last night some idiot has been flying drones across the runway.
An airport is a finely tuned machine, with all the interconnecting parts – pilots to fuellers, baggage handlers to baristas, immigration officers to wheelchair pushers – working more or less smoothly together to provide a reasonably painless experience. Today a chump with a grudge and a thousand-dollar toy is breaking that system – and ruining the travel plans of 100,000 or so people.
Not that drones are a bad thing, of course. Indeed, has any addition to the kit ever been so affordable yet had such a dramatic impact on production values? Even the lowest-budget documentary can now afford soaring shots, thanks to drones – flown according to the rules by professional pilots.
In other news, it has recently been announced that Crossrail, the new east-west train service across London, is now likely to be a year late in starting up. There has been much bluster in the press and of course endless political point scoring, but the fundamental problems come down to project management and software integration.
New trains, new signalling systems, new automatic control so the carriage doors line up with the doors on the station platform: these and many more need to work together perfectly. Managing those integrations is proving a challenge.
You do not have to be a genius to see the parallels in our daily lives. Say what you like about the inefficiencies of traditional broadcast technology, but having dumb, single-function black boxes meant that systems engineering was relatively straightforward, and you could add new bits of hardware as you needed them.
Please do not get me wrong: I certainly appreciate that software-defined systems offer huge potential for much more capable, more agile, more cost-effective installations. But now you have to ensure that one application plays nicely with all the other applications around it, under all conditions. And that sort of rigorous testing demands a new set of skills.
We cannot turn our back on the software-defined future, because that is what our audiences demand. Recent research by Mindshare identifies the three trends for coming years as more consumption, more fragmentation and more personalisation.
The prediction is that by 2022 we will be watching 10% more content. True, we will watch 30 minutes a day less of proper television, but that is balanced by 62 minutes more video on demand. Some of that may be cat videos on YouTube; most of it will be professional programming, just when we want to watch it. That shift is where the fragmentation comes in: we will pick up the content from wherever we can find it.
I think personalisation is the interesting area for growth. Addressable advertising will become hugely important because it is smart. The winner of one of the IBC Innovation Awards in 2018 was a system at Medialaan in Belgium which allowed the broadcaster to show fewer commercials. That sounds counter-intuitive, but the project, powered by Yospace, allows viewers on catch-up to literally catch up by shortening the breaks.
For this not to be commercial suicide, the spots that are shown have to be genuinely interesting to the individual viewer, and must not have been repeated too often. Fewer commercials are shown, but they command a premium price because audiences will actually watch them.
But it is not just advertising that will be personalised. One of the big talking points of 2018 was Bandersnatch, a new episode of cult series Black Mirror. This actually implemented the interactivity idea that has been proposed for 30 years: throughout the narrative there are decision points at which the viewer determines what happens next.
That means the producers had to create around five hours of material for a narrative of around 90 minutes. Commentators with varying levels of mathematical ability claim “billions” of paths through the story. There are five different endings. It has proved very popular.
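Whether “billions” is plausible is simple arithmetic: each independent binary decision point doubles the number of distinct viewings. A toy sketch, counting paths through an invented branching graph (this is not Bandersnatch's actual structure):

```python
# Counting distinct paths through a branching narrative.
# Each binary decision point doubles the path count, so even a
# modest number of choices explodes quickly. The graph below is a
# hypothetical chain of doublings, not the programme's real map.

def count_paths(graph, node):
    """Count distinct paths from `node` to any ending (a leaf)."""
    choices = graph.get(node, [])
    if not choices:                  # no further choices: an ending
        return 1
    return sum(count_paths(graph, nxt) for nxt in choices)

# Ten binary decision points in a row; both choices at each point
# happen to rejoin at the same next scene, but each route taken
# still counts as a distinct viewing.
graph = {f"d{i}": [f"d{i+1}", f"d{i+1}"] for i in range(10)}
print(count_paths(graph, "d0"))  # 2**10 = 1024
```

Thirty-odd largely independent decision points would already exceed a billion paths (2**30 is about 1.07bn), so the commentators may not be as innumerate as they sound.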
And yet… One of the challenges of fragmentation is that the content deliverer has to provide a common experience across multiple platforms. In the wake of Bandersnatch there are viral memes about the fact that it does not work terribly well (if at all) on Apple devices, which surely is a problem.
Which brings us back to where we started: the challenges of system integration and of hardware used in unpredictable combinations. With few standards out there, how can you possibly check that your interactivity – whether for advertising or for programmes – works on every device? And on every software release of every generation of every device? And that some handcrafted hardware/software combination is not going to bring the whole network down?
I may be going out on a limb here, but I wonder if this is the sort of area where artificial intelligence might prove really useful. Could we get machines to check if our stuff is going to work, whatever we throw at it?
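As a crude illustration of what “machines checking our stuff” might look like, here is a minimal adversarial-testing sketch: one component does the work, while a random fuzzer plays the adversary and hunts for inputs that break it. Everything here – the deliberately buggy timecode parser, the function names – is invented for illustration.

```python
import random

def parse_timecode(tc):
    """A deliberately buggy hh:mm:ss parser: it rejects
    single-digit seconds fields instead of padding them."""
    h, m, s = tc.split(":")
    if len(s) != 2:
        raise ValueError("bad seconds field: " + s)
    return int(h) * 3600 + int(m) * 60 + int(s)

def adversary(fn, trials=10000):
    """Throw plausible-looking random inputs at fn and return the
    first one that makes it blow up, or None if all pass."""
    rng = random.Random(2019)           # fixed seed: reproducible runs
    for _ in range(trials):
        # Unpadded fields, as a sloppy upstream system might emit them.
        tc = "%d:%d:%d" % (rng.randint(0, 23),
                           rng.randint(0, 59),
                           rng.randint(0, 59))
        try:
            fn(tc)
        except ValueError:
            return tc                   # found a breaking input
    return None

print(adversary(parse_timecode))        # prints a timecode that crashes the parser
```

A real system would need far smarter input generation than a seeded random loop, which is exactly where the learned, adversarial approaches come in.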
There is a newish AI technique called generative adversarial networks (first described in 2014, according to Wikipedia). In essence, one neural network works its way through a problem, while a second – the adversary – tries to spot what is wrong with the solution. Together, they should produce a better final result.
This approach has already been applied to image sizing. Topaz Labs, for example, offers resizing and image processing software based on a GAN approach, primarily for stills photographers looking to output poster-sized prints.
The challenge for all AI approaches, though, is that they are massively processor-intensive. And, if they are to make reliable decisions, they need to compare huge amounts of data. The cloud is the usual solution, but that brings its own limitations.
Specialists are now looking at dedicated AI processors. UK company Graphcore has just raised $200 million from investors including Microsoft and BMW to develop chips specifically for machine learning. Just as we choose the GPU today, in future we may choose to add an AI card to our workstation.
So the prediction for 2019 is that it is going to be another exciting year in broadcast and media. As is usually the case, the hype that is clattering around us is not where the real action is going to take place.
Data is going to get ever bigger. We are shooting at higher resolutions, with more cameras, and want to keep everything to make decisions later. Storage specialist Caringo says that modern production workflows now need five layers of storage: fast solid-state memory, a fast storage area network, a NAS for working stores, a local archive and a cloud archive.
You may feel that is over the top. In the US, the Apple retail store is now offering specialist video servers from LumaForge, along with plug-in applications for editing software, as a seemingly simple plug-and-play solution. You would probably still want at least some of those layers of archiving and protection, though.
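The tiering decision itself is easy to automate. A sketch of a placement policy across those five layers; the tier names follow the article, but the day thresholds are my own illustrative assumptions, not Caringo's numbers:

```python
# Map an asset to one of the five storage layers by how recently
# it was touched. Thresholds are invented for illustration.

TIERS = [
    ("solid-state memory", 1),       # material in active use
    ("storage area network", 7),     # current projects
    ("NAS working store", 30),       # recent material
    ("local archive", 365),          # this year's rushes
    ("cloud archive", float("inf")), # keep everything, forever
]

def place(days_since_last_access):
    """Return the shallowest tier whose age ceiling covers the asset."""
    for name, max_age_days in TIERS:
        if days_since_last_access <= max_age_days:
            return name

print(place(0))    # solid-state memory
print(place(90))   # local archive
```

In practice the policy would weigh access frequency and cost as well as age, but the principle – data migrating outwards as it cools – is the same.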
Artificial intelligence is going to become more important, although possibly not in the places that we think it might. Video analysis to spot faces and locations sounds great, but it is not sufficiently reliable or practical just yet. It could, though, be beavering away in the background doing stuff that we have not even thought about yet.
5G will continue to be talked about, but will still be searching for an application. BT recently ran a very high-profile demonstration of 5G coverage of football at Wembley, but I am still unconvinced. Getting sufficient bandwidth exclusively reserved for multiple cameras is going to remain a challenge. Like the whole internet of things industry, it has promised much but is yet to deliver real benefits.
Fragmentation is going to stretch the minds of the industry, because it has moved from simply ensuring the right codec goes to the right device to something much more interactive and – from the viewpoint of the deliverer – invasive. When everyone is encouraged to develop their own Alexa Skills, then ensuring that your playback system can deliver the right elements of interactive content in the right order on the right commands becomes a challenge.
Finally, our technology platforms will be software, running on standard computers aided by GPUs and possibly, in time, AI boards. Already the clever people around the industry are using GPUs to do things for which they were not originally conceived – see how Comprimato uses a GPU as a massive parallel processing device for fast encoding and decoding, for instance.
But the structure of the standard computer, and the interfaces to GPUs and other hardware plug-ins, are not ours to control. They belong to the much larger IT industry. If one of the heavy-hitting users of IT – finance, say – needs it to change, then be in no doubt: it will change.
Building and maintaining those software infrastructures, then, will require a new set of project management and maintenance skills. There are some great jobs out there for those who are up for a challenge. I am definitely looking forward to 2019 and beyond.