The human element in digital media monitoring

By Simen K. Frostad, Chairman

Published in Broadcast Bridge, May 2016

If anyone ever writes a history of work, it will no doubt have a prominent theme: the way machines have allowed people to do progressively more and be more productive. The human race isn’t the only species to use tools, but we are the only species that puts so much effort into developing and evolving them so that we can be ever more efficient in our work.

Sophisticated tools, and the power they give us to alter the conditions of our own environment, have enabled us to become the dominant species on the planet. And if we only wanted to till the earth and reproduce ourselves in comfort and plenty, we might have reached the point by now where there was no necessity to keep evolving and inventing new tools. But of course, we have greater ambitions and a seemingly endless desire for more.

This is the dynamic that drives industry, drives our behaviour as consumers, and our scientific enquiry. And although there are more humans on the planet than ever before, our impulse to know more, create more and have more means that we keep inventing new kinds of work – so much of it that there aren’t enough people to do it all.

The work that we now call ‘digital media monitoring’ didn’t exist until quite recently. Digital media delivery itself didn’t exist until a few years ago. But this field has expanded explosively in a short space of time, creating a massive amount of new work that has to be done. And the volume of this work, and the rate of its expansion, is so great that it could not be done without the development of extremely powerful new tools.

And in this situation, much of the power of these tools comes from the gearing between the worker and the work done. It’s different from, for example, an accounting firm that takes on more auditing work; to get the audits completed, the firm has to hire more auditors, and each can only do roughly the same amount of work as the other. There’s no other way to get the job done. But in digital media monitoring, the sheer amount of data to be analysed and understood every second of the day makes it impossible to take this approach. The tools have to provide enormous gearing, so that a single engineer can do an otherwise inconceivable amount of work, and do it in real time on live services.

In the short space of time since digital media delivery began, monitoring solutions have evolved from the makeshift, sticking-plaster type, cobbled together from heterogeneous kit designed for a previous era of media operations. Now, the only viable systems are those designed from the ground up for the digital media monitoring era, with fully integrated end-to-end capability. These systems give a coherent view from the initial ingest right through to the viewer’s device. But while it’s essential to have this coherent, omniscient capability, it does create a truly staggering amount of data to deal with. And the engineering control room of a digital media operation is not staffed by thousands of toiling technicians like some Orwellian Ministry of Data. Even in the largest operations, a handful of engineers have to be able to cope.

And in fact a handful of highly qualified engineers is about all any digital media operator can hope to get hold of, because the expertise in this relatively new field is in high demand and short supply. So it’s vital that the monitoring systems don’t demand too much of the staff who use them. A system that spews out unmediated data and fills screens with dense numeric tables is not an efficient one, because it sucks too much of the engineer’s mental bandwidth, and that means that the gearing between worker and work done is too low to be economically viable.

In the early years of digital media, it was just about possible for a skilled engineer to keep on top of a few channels just by eyeballing the monitoring displays. Now, that same control room is probably having to deal with hundreds of channels, and that simply defeats the capabilities of a few pairs of human eyes, unless the monitoring system can do a lot of the work under the hood. Instead of a dense flow of numerical data, the hard-pressed engineer needs a highly evolved data presentation that aids at-a-glance recognition of anomalies. If the system can itself point up anomalies, so much the better, but here the problem is that some anomalies are not considered errors in the strict terms of the standards defining correct behaviour. Basing a monitoring strategy on simply detecting and flagging errors as defined in the ETR290 standard is not enough, because errors that may be considered acceptable within the standard can be the cause of problems that degrade the service. Some errors that cause critical failures aren’t even tested for by ETR290, so when one of these pops up, the engineer has no assistance from the monitoring system in identifying it.

So a dumb approach to monitoring – limited to testing for conditions that breach parameters defined in the standard – is not the answer to the shortage of expert eyeballs. It makes far more sense to tailor the monitoring system to the realities of a digital media business today, where hard-pressed, time-poor engineers have to stay on top of constantly increasing numbers of services. In this context, smart monitoring technology makes it possible to quickly set up the correct parameters for a new service (otherwise a very time-consuming and error-prone task in itself), and embraces not only the ETR290 tests but also tests for conditions outside the scope of the standard, such as CAS errors and unintended language switches.
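The idea of combining standard-defined tests with service-level checks can be sketched in code. The checks, field names and thresholds below are illustrative assumptions only – simplified stand-ins, not the actual ETR290 test definitions or any vendor’s implementation:

```python
# Hypothetical sketch: a rule set mixing ETR290-style transport checks
# with service-level checks that fall outside the standard's scope.
# All names, fields and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    scope: str                      # "ETR290-style" or "service-level"
    passes: Callable[[dict], bool]  # True when the condition is healthy

CHECKS = [
    # Simplified stand-ins for standard transport-stream tests
    Check("continuity_counter", "ETR290-style",
          lambda s: s.get("cc_errors", 0) == 0),
    Check("pat_interval_ms", "ETR290-style",
          lambda s: s.get("pat_interval_ms", 0) <= 500),
    # Service-level conditions the standard does not test for
    Check("cas_descrambling", "service-level",
          lambda s: not s.get("cas_error", False)),
    Check("audio_language", "service-level",
          lambda s: s.get("audio_lang") == s.get("expected_lang")),
]

def evaluate(sample: dict) -> list[str]:
    """Return the names of all checks that fail for one measurement sample."""
    return [c.name for c in CHECKS if not c.passes(sample)]

# A stream that is fully compliant in ETR290 terms, yet carries the
# wrong audio language -- exactly the kind of fault a dumb,
# standards-only monitor would miss.
sample = {"cc_errors": 0, "pat_interval_ms": 120,
          "cas_error": False, "audio_lang": "nor", "expected_lang": "eng"}
print(evaluate(sample))  # ['audio_language']
```

The point of the sketch is that the service-level checks live in the same rule set as the standard ones, so the engineer sees a single flag rather than having to correlate viewer complaints with a compliant-looking monitor.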

The smartest monitoring technology also lets engineers easily bring together a customised flight deck of monitoring instruments to best communicate the unique data required in each operational situation. This display of virtual data instruments can be reconfigured at the drop of a hat when requirements change, for example when a major live event is scheduled. And, crucially, the data wall can be viewed not only in the engineering control room, but from any location in a standard internet browser. So the data can follow the eyeballs, wherever they are.
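One way such a reconfigurable data wall can work is to describe the layout as plain data, which a browser client renders and which can be rewritten on the fly. The schema, instrument names and service IDs below are hypothetical, invented for illustration, and do not represent any actual product format:

```python
# Hypothetical sketch: a data-wall layout held as plain data, so it can
# be reconfigured for a live event without touching the saved default.
# Schema, instrument names and service IDs are invented for illustration.
import copy

default_wall = {
    "name": "default-wall",
    "tiles": [
        {"instrument": "etr290_summary", "services": "all", "pos": [0, 0]},
        {"instrument": "bitrate_graph",  "services": "all", "pos": [0, 1]},
    ],
}

def reconfigure_for_event(base: dict, event_services: list[str]) -> dict:
    """Derive an event-focused layout, leaving the default untouched."""
    wall = copy.deepcopy(base)
    wall["name"] = "live-event-wall"
    wall["tiles"].append(
        {"instrument": "audio_levels", "services": event_services, "pos": [1, 0]}
    )
    return wall

event_wall = reconfigure_for_event(default_wall, ["svc-101", "svc-102"])
print(event_wall["name"], len(event_wall["tiles"]))  # live-event-wall 3
```

Because the layout is just data, the same description can be pushed to a wall of screens in the control room or fetched by a browser anywhere, which is what lets the display follow the engineer rather than the other way round.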
