State of the Nation - The chances of anything coming from Mars

Dick Hobbs

Published 27th January 2020

Among the autumn dramas the old-fashioned broadcasters have offered us this year was a new adaptation of the HG Wells classic The War of the Worlds. Writer Peter Harness stayed pretty faithful to the book, retaining its setting of leafy Surrey and focusing on the human stories.

But any mention of The War of the Worlds inevitably brings up the CBS Radio production of Hallowe’en 1938, in which Orson Welles and his company allegedly created panic across the USA. CBS was apparently unconvinced by the very idea of adapting a cosy English novel for primetime radio, so Welles relocated it to Grover’s Mill, New Jersey, and used the trick of breaking news bulletins to reveal the scale of the invasion.

Orson Welles was a very clever man – much cleverer, one suspects, than the CBS management – and he was meticulous in his planning to create “panic”. In particular, he knew that at the same time rival broadcaster NBC was airing its popular Sunday night variety show, The Chase and Sanborn Hour. He also knew that, at a pretty predictable time in the show, there would be a song from a guest singer, which was the cue for the audience to reach for the tuning dial.

On this particular Sunday night, it was the otherwise blameless Dorothy Lamour who caused some of the audience to switch to CBS, and Welles ensured they would join the drama at the moment of one of the most shocking breaking (fake) news bulletins. In simple terms, he set out to create panic, even though he claimed he had not.

We were much less worldly-wise in 1938, you may say. But the autumn of 2019 in the UK saw an election campaign which was marked – many would argue dominated – by fake news, by carefully placed stories which were not what they seemed to be. Politicians set the agenda, inventing facts where necessary.

Developments in technology over the 80 years since Welles and The War of the Worlds mean we can now make those fake stories – enhanced information, the spin doctors would argue – extremely convincing. That technology in turn leads to what is called the “deepfake” – something so convincing that only the context makes you realise it is indeed fake. There are deepfakes of Trump in Breaking Bad, for instance.

Deepfake software is readily available. FakeApp is a free download, although it is a pretty labour-intensive solution. Zao, developed in China, is also free, but contains a nasty little sting in the tail that uploads every clip to a backend cloud, so even if you delete your harmless little joke, it will be on the internet forever.

FaceApp (note the similarity to FakeApp) is the Russian equivalent, also storing your work and quite possibly all the rest of your data, too. Tech website The Register went so far as to translate the Zao software licence (otherwise only available in Mandarin) and it explicitly states that “personal information will be collected without consent if the data is relevant to issues of national security…”.

What all deepfake applications have in common, though, is that they are based on artificial intelligence, or more correctly machine learning. The software learns the facial movements of the target person and recreates those movements as he or she says something they never actually said or did.
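To make that concrete, the classic deepfake layout is a single shared encoder (which learns pose and expression) paired with one decoder per identity (which learns appearance); swapping a face means encoding one person's frame and decoding it with the other person's decoder. The toy sketch below shows only that data flow – the random matrices stand in for trained networks, and the dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # A random projection standing in for a trained neural network layer.
    return rng.standard_normal((n_in, n_out)) * 0.01

# One shared encoder, one decoder per identity (illustrative sizes).
FACE_DIM, LATENT_DIM = 64 * 64, 128
encoder = layer(FACE_DIM, LATENT_DIM)
decoder_a = layer(LATENT_DIM, FACE_DIM)  # trained to reconstruct person A
decoder_b = layer(LATENT_DIM, FACE_DIM)  # trained to reconstruct person B

def swap_face(face_of_a):
    """Encode A's pose and expression, then render it as B's face."""
    latent = face_of_a @ encoder       # identity-neutral representation
    return latent @ decoder_b          # B's appearance, A's movements

frame = rng.standard_normal(FACE_DIM)  # stand-in for a cropped face image
fake = swap_face(frame)
print(fake.shape)  # (4096,)
```

Because both decoders are trained against the same latent space, the swap is just a matter of routing the encoding through the "wrong" decoder – which is why a single training run yields convincing results in both directions.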

That is one of the reasons that artificial intelligence is one of those phrases I hope we will be hearing less of in 2020. Like cloud. And virtualisation. And even IP. Because they are simply enabling technologies, not anything useful in themselves. There is always the risk that such technologies will spawn applications which, like the deepfake, may not be to our benefit.

When I was a lad, learning to write marketing copy for technology, the one fact that was drummed into me day after day was the difference between features and benefits. Features may be great – the latest widget might have a cloud-based, fully virtualised, machine learning core. But nobody cares.

People only care – and more importantly only spend money – if there is a real benefit. Does the widget help you make better programmes? Or make them quicker? Or cost less?

Take, for example, the first of the IBC Innovation Awards to be presented back in September in Amsterdam. The winning project used IP architectures and clever streaming technology. Masses of content passed through the cloud. Graphics on everything from crew weights to wind direction were supported by artificial intelligence.

But that was not what impressed. The prize went to Sail GP for introducing a brand new sport: one which takes place in locations around the world. Given the difficulty of launching a new sport, the only way to create exciting and engaging content and make it affordable for television and online was to go for remote production on a massive scale. That’s the benefit: you make great television for a sensible budget.

Artificial intelligence is a feature. It is a way of doing something. But that something has to be worth doing, it has to deliver a real benefit.

A couple of months back I wrote in this column about how the internet is becoming a major environmental concern: it already has a bigger carbon footprint than air travel, because of all those server farms and their associated air conditioning. Online porn consumes more energy than Belgium. In just a couple of years, video streaming will represent 80% of all internet traffic.

What if you could reduce that figure by making the data streams smaller? Codecs continue to develop, of course, but standardisation and ratification are a slow process and, in any case, the more powerful the codec, the more processing grunt it requires to encode.

I have recently come across a company called iSize Technologies, which is developing a video pre-processor. iSize has reverse engineered human visual perception, to understand what we actually see. It can then use artificial intelligence to minimise the parts of the picture we are never going to look at.

They currently estimate they can reduce the size of the video stream by 20% to 40% before it reaches the codec. Once it gets to the consumer device it is decoded as normal, and should look as good as normal, but the data rate is significantly lower.
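The principle behind that kind of perceptual pre-processing can be illustrated simply: if you know which regions of a frame viewers will actually look at, you can coarsely quantise everything else, lowering the entropy the codec has to encode. This is a toy sketch of the idea, not iSize's actual method – the saliency mask and the entropy measure are both invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def crude_entropy(img):
    # Rough proxy for encoded size: Shannon entropy of the pixel values.
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

frame = rng.integers(0, 256, size=(64, 64))

# Hypothetical saliency mask: assume viewers watch the centre of the frame.
saliency = np.zeros((64, 64), dtype=bool)
saliency[16:48, 16:48] = True

processed = frame.copy()
# Coarsely quantise the low-attention regions before the codec sees them.
processed[~saliency] = (processed[~saliency] // 32) * 32

print(crude_entropy(processed) < crude_entropy(frame))  # True
```

The codec downstream is untouched: it simply receives a frame with less information in the regions nobody looks at, so the same encoder at the same quality setting emits fewer bits.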

I have also recently been talking to Dominic Harland of storage specialist GB Labs. He was the one who sowed the seed of this column by suggesting we stop talking about virtualisation, the cloud and AI. He would rather we talked about flexibility.

We are still in thrall to technology. Small to medium post houses, Dominic suggests, have productivity bottlenecks because, while all the software tools run on standard hardware, to get the best performance each function needs a different combination of speed, scale and security in its server networks. There are also the dull but necessary tasks – like rendering, backing up to LTO and the cloud, and creating multi-format deliverables – that take resources away from making paying clients happy.

Artificial intelligence should be handling all this stuff, including fine-tuning the storage performance on the fly to meet user requirements as they change. You shouldn’t have to support multiple storage networks, just as you shouldn’t have to worry about background tasks – computers should be clever enough to sort that out for you.

That is my wish for 2020: that we let the people who care about artificial intelligence, virtualisation and all the other buzzwords work quietly away in the background. We need them to come up with real solutions we can actually use, not to lecture us on the limitations because it doesn’t quite work like that.

IP infrastructure: of course we can. But there are multiple SMPTE papers on IP video timing and we still have no agreed best practice. That makes IP connectivity a feature not a benefit, and we know which one we want.

Artificial intelligence: please stop trotting out the obvious applications, especially since no-one has actually made them work. Facial recognition takes so much processing power it is cheaper to have someone watch the monitor. Tracking action in sports requires a really subtle and dynamic knowledge of where the action is. Hint: it is not always the man with the ball.

Cloud: what is it for? What benefits do we gain from adding to our environmental footprint by streaming vast amounts of data off to some remote site and back again? Is it going to make us more productive? Is it going to save us money? Is it going to make for better content?

In short: why should we care about features? Are they the ultimate saviours of our industry? Or is it all just fake news?


© KitPlus (tv-bay limited). All trademarks recognised. Reproduction of this content is strictly prohibited without written consent.