Evolving intercoms

Wednesday, September 19, 2012

Television involves a lot of people working together, which means good communication is vital. To look at what makes a good intercom system today, we need to think about what has happened in the past. We also need to acknowledge that there is no one-size-fits-all solution: different broadcasters and production companies have different requirements.

In the US most studios adopted an intercom based on a two-wire party line system, with all operators wearing headsets to communicate over limited shared channels. In part this route was chosen because it was inexpensive, which was important for the smaller, totally advertising-funded stations which dominate the market.

It caused headaches for the audio engineer, though, who had to interconnect the various operator stations, interface units and breakout boxes. It was time-consuming and had to be done carefully if the right people were to hear each other.

In Europe the picture was rather different, with working practices established by the large national broadcasters who received state funding in one form or another, like BBC, ARD and France Télévision. Here many operators preferred open microphones and loudspeakers rather than headsets, and this required careful planning to ensure clear communications without risking acoustic feedback.

Systems were generally based on a bespoke central switching matrix, connecting to stations as required. While many used microphones and speakers, there was also a need to support headsets and switched microphones, for instance for camera operators, and also wireless connectivity for floor managers and other mobile staff.

Originally these systems were developed in house, but increasingly the broadcasters turned to specialist vendors to develop the technology. That is how many of today’s well-known intercom brands – including Trilogy – came into being. Each system developed over time to add more and more sophisticated functionality to cater for specific requirements from individual broadcasters.

Digital

The early 90s saw the birth of the first digital intercom systems. These retained the architecture of a central matrix, but replaced analogue multi-core cabling with either Cat 5 or co-ax. Systems were configured in software, usually by plugging a PC into the matrix, which meant that new functionality could be added, such as dynamic adjustment of audio levels and alphanumeric displays on user panels.

These systems were also highly scalable, which meant that features once available only to the largest broadcasters became commonplace in any independent facility.

By 2000 the next growing demand was that intercoms had to be networked together, whether to production facilities on the other side of the building or to remote stations on the other side of the planet. Alongside that was the pressure, common across all the technology in the industry, for ease of maintenance and lower installation costs. People everywhere started talking about lifecycle costs, and that applied to intercoms just as much as any other piece of broadcast equipment.

Typically, all but the largest television companies combined the intercom for multiple studios in a single matrix in a central racks area, with cabling for cameras and studio floor facilities running throughout the building. Vendors developed matrix systems which were capable of scaling up to very large sizes, and with trunk interfaces to allow connection to remote systems over the existing infrastructure.

There was a fundamental disadvantage of the central matrix architecture, though: it represented a single point of failure. There was also a cost disadvantage at the time of installation, in that dedicated cables had to be run from each location to the central apparatus room.

This gave birth to a new approach, which minimised the amount of cabling required by using smaller matrices, each serving a specific operations area. It also eliminated the single point of failure: if one area went down it did not affect the rest of the building.

These area-specific matrices were interconnected with audio trunk lines and proprietary data links. These techniques, especially with the increasing use of fibre optic cabling which provided more than enough capacity, solved the interconnection issue within a station. The approach was not readily adaptable to multi-site applications, though.

A new design paradigm was required, which moved away from the idea of connecting each device to a central switch over a dedicated cable, whether analogue or digital.

IP

By this time the internet was ubiquitous, and its core platform, the internet protocol (IP), was being adapted to all sorts of new applications. Among them was the idea of carrying packetised audio, both for telephony (voice over IP, or VoIP) and for programme audio. The attraction was that traditional internal cabling and leased four-wire circuits or ISDN lines could be replaced with IP over any data bearer.

Broadcast engineers tended to dismiss the idea in the early days, saying it was not sufficiently resilient, bandwidth was expensive, and the inherent latency would be unacceptable. It also went to the heart of a growing rivalry in many organisations between broadcast engineering and IT, with the former lacking the network design skills required and the latter reluctant to let bandwidth-hogging realtime circuits onto office computer networks.

What actually happened was that the R&D resources of the vastly bigger IT and telecoms industries solved the problems for us. Whereas once bandwidth might have been an issue, now 100BaseT ethernet is routine and gigabit ethernet is becoming common in enterprise networks. With even a complex intercom needing less than 100kb/s, bandwidth is not an issue.
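
That sub-100kb/s figure is easy to sanity-check. The sketch below estimates the load of one voice channel, assuming (purely for illustration, not from any vendor spec) a 64kb/s wideband codec sending 20ms of audio per packet over RTP/UDP/IPv4:

```python
# Rough estimate of the network load of one intercom voice channel.
# Assumptions (illustrative): a 64 kb/s wideband codec, 20 ms of audio
# per packet, carried over RTP/UDP/IPv4.
CODEC_RATE_BPS = 64_000      # codec payload rate
PACKET_INTERVAL_S = 0.020    # 20 ms per packet -> 50 packets/s
HEADER_BYTES = 12 + 8 + 20   # RTP + UDP + IPv4 headers

packets_per_second = 1 / PACKET_INTERVAL_S
overhead_bps = packets_per_second * HEADER_BYTES * 8
total_bps = CODEC_RATE_BPS + overhead_bps

print(f"{total_bps / 1000:.0f} kb/s per channel")  # 80 kb/s per channel
```

Even with per-packet header overhead included, a single channel stays comfortably under the 100kb/s mark quoted above.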

Enterprise quality ethernet routers now support standards for prioritised packet switching and differentiated services. That ensures a sufficient quality of service for the slice of the network devoted to realtime services like the intercom, which in turn minimises latency. In practice the latency in a modern IP intercom is not an issue.
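
In practice that prioritisation is requested by marking the audio packets themselves. A minimal sketch, assuming a Linux-style sockets API: the DSCP value EF (46) is the conventional marking for realtime voice, though the actual value is a matter of local network policy.

```python
import socket

# Mark intercom audio packets for expedited forwarding (EF) so that
# DiffServ-aware routers prioritise them over ordinary office traffic.
DSCP_EF = 46
TOS = DSCP_EF << 2  # DSCP occupies the top six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

# Any datagram sent on this socket now carries the EF marking.
sock.sendto(b"audio frame", ("127.0.0.1", 5004))
sock.close()
```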

Most important, though, is that the rise in VoIP telephony has led to open standards for communication between the device and the matrix, and between matrices. The session initiation protocol (SIP) has been adopted universally to carry the set-up data for a call. It makes it perfectly practical for multiple matrices to work as one large virtual switch. Just as today we take for granted that we can dial a number on a telephone and be connected to someone anywhere in the world, so the same is now possible for the broadcast intercom, provided the vendor or vendors meet these agreed open standards.
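
To give a flavour of what that set-up data looks like, here is a minimal SIP INVITE, the request that initiates a call between two endpoints. The hostnames, user names and Call-ID are invented for illustration; a real stack would also attach an SDP body describing the audio codecs on offer.

```python
# A minimal, illustrative SIP INVITE between two hypothetical matrices.
call_id = "8f3a2b@matrix-a.example.com"
invite = "\r\n".join([
    "INVITE sip:panel42@matrix-b.example.com SIP/2.0",
    "Via: SIP/2.0/UDP matrix-a.example.com:5060",
    "From: <sip:director@matrix-a.example.com>;tag=1928",
    "To: <sip:panel42@matrix-b.example.com>",
    f"Call-ID: {call_id}",
    "CSeq: 1 INVITE",
    "Contact: <sip:director@matrix-a.example.com:5060>",
    "Content-Type: application/sdp",
    "Content-Length: 0",
    "",
    "",
])
print(invite)
```

Because any SIP-compliant endpoint can parse a message like this, the matrix on the far side does not need to come from the same vendor as the one that sent it.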

Decentralisation

In its Gemini system, Trilogy has taken this simple open connectivity and used it to develop a widely decentralised platform. It is based on 32 port matrices, eight of which can be connected together to build a 256 x 256 network. If you need more ports, then add a second network, with seamless communications between them.

Within each network of eight matrices there are two communications paths, each of which is resilient and each of which provides redundancy for the other, making the system virtually indestructible. Each network is a ring, so if the path is broken at any point then the data is simply routed the other way around the ring.
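
The failover logic of a ring can be sketched in a few lines. This is an illustrative model, not Gemini's actual routing code: eight matrices sit on a ring, traffic prefers the shorter direction, and if a link on that path is down the data goes the long way round instead.

```python
# Sketch of ring failover for eight matrices numbered 0..7.
RING_SIZE = 8

def path(src, dst, clockwise):
    """Links (as unordered node pairs) from src to dst in one direction."""
    step = 1 if clockwise else -1
    links, node = [], src
    while node != dst:
        nxt = (node + step) % RING_SIZE
        links.append(frozenset((node, nxt)))
        node = nxt
    return links

def route(src, dst, broken_links):
    """Prefer the shorter way round; fall back to the other direction."""
    cw, ccw = path(src, dst, True), path(src, dst, False)
    for p in sorted((cw, ccw), key=len):
        if not any(link in broken_links for link in p):
            return p
    return None  # both directions cut: the ring is partitioned

# Link 2-3 broken: traffic from matrix 1 to matrix 4 reverses direction,
# taking 5 hops the long way round instead of the usual 3.
detour = route(1, 4, {frozenset((2, 3))})
print(len(detour))  # 5
```

The key property is that any single link failure leaves every pair of matrices connected; only two simultaneous breaks can partition the ring.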

The first network allows any port to talk to any other simultaneously, so is scaled on the basis of 238 concurrent conversations. It also supports full audio quality, with 20kHz bandwidth and 16 bit 48kHz sampling, so an intercom circuit can carry programme audio if required. This results in a 270Mb/s data stream, which is carried over a dedicated high speed audio ring.
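
The quoted 270Mb/s is consistent with the raw audio payload plus framing. A quick check of the payload side of that figure:

```python
# Sanity check of the audio ring's data rate: 256 channels of
# 16 bit samples at 48 kHz.
channels, bits_per_sample, sample_rate = 256, 16, 48_000

payload_bps = channels * bits_per_sample * sample_rate
print(f"raw payload: {payload_bps / 1e6:.1f} Mb/s")  # raw payload: 196.6 Mb/s
# The remaining ~73 Mb/s of the quoted 270 Mb/s would be taken up by
# framing, addressing and error protection on the ring (an inference;
# the article does not publish a breakdown).
```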

The second network is used for the signalling protocols to establish calls, and for fallback audio paths. It runs over the regular enterprise network, using standard audio codecs. A choice of codecs is provided allowing the user to choose the right balance of bit budget, latency and quality, but a good 7kHz audio signal is easily supported.

Given sufficient data bandwidth, that quality can be maintained over any circuit, as can the ability to connect at the touch of a button, with that button having a user-friendly legend. So networks can be connected over any distance, and temporary circuits can be established to remote locations. Even the standard SNG path includes space for data which can be used for an intercom signal.

Most important, this connectivity uses open standards which are universally recognised. At the end of 2011 the EBU ratified the use of SIP as the standard method of getting intercom audio over IP. When this is followed by standardised control signalling, multi-location, multi-vendor intercom networks will be simple to establish. Communication for broadcasters will finally be open and easy.
