TV Audio Consoles

IP & TDM Networking



Wheatstone for TV Audio: Your audio console needs to be one smart cookie to stay ahead of live production these days. Complicated controls are out. So is networking for the sake of networking. Your audio console needs to understand your board op, and its audio network needs to know its way around a live sportscast, newscast or show. That’s why our new IP networked consoles don’t require console school to learn, and why the WheatNet-IP audio network includes intelligent routing, integrated control and a complete audio toolkit at every node. Basic audio tools like audio processing and mix-minus creation at every node combine with control functions so you can send the right mix-minus to a news anchor’s headset at the last minute, change news sets unexpectedly, set up IP intercoms as needed, and stay flexible, yet productive, during any production scenario.

Wheatstone offers its newer line of consoles with WheatNet-IP networking and its established line of TDM-based consoles with Gibraltar networking. Both are designed specifically for television by a company trusted by broadcasters around the globe.

Click to download our NEW TV PRODUCTS FOR 2015 Brochure

Trends Beyond 5.1. We Take it One Track at a Time.

For those of you who are trying to balance all the latest audio trends and having a hard time adjusting the EQ in your head, we take it one track at a time in this discussion with Roger Charlesworth, the Executive Director of DTV Audio Group.

WS: Where to start? There are so many trends that television broadcasters should be paying attention to.

RC: That’s the thing. This is a period of rapid change. We started the DTV Audio Group about six years ago, as a vehicle for network operations and engineering management to share insights on the practical application of emerging television audio technology. At the time, DTV loudness was a big issue. Since then, the rate at which the technological landscape is evolving has accelerated. Stuff that we thought would take decades is happening in a matter of years. For example, it’s been fairly recent that we have had these experiences with our smart phones and tablets. You can be watching a movie in your living room and get up to go to your bedroom, and you can bring that movie with you. It’s a seamless transition. Manufacturers are working hard on all this and they’ve got some good technology. The transition to mobile is huge, but …


WS: But?

RC: But… what’s behind that is this universal transition to streaming. Whether it is streamed to a fixed device or mobile device, it all starts to look the same. And what we’ve learned is that even the cable companies are transitioning to the stream model inside the cable plant. So linear channels are going away, and they’re able to use more efficient video codecs like MPEG-4 and they’re able to roll out new services like 4K more efficiently when they’re streamed. Everything is being streamed. That means things can move faster, change can happen faster; it’s not hardware bound.

And all of this leverages core IT technology, which is the biggest trend. That is what has accommodated things like mobile, whether it’s streamed to your living room or to your phone.

WS: We’ve heard that this NAB could be the year of IP video switchers. What do you know?

RC: Could be, because when we look at 4K and the economics of packet switching as opposed to a traditional video switcher, it just becomes so compelling. When you get up to these higher data rates the IP switches are an order of magnitude more cost efficient than traditional SDI. I think people that play in that space are going in this direction; those companies see the value. Our member networks tell us they have already started this transition and within five years pure IP video infrastructure will be the norm for the broadcast plant.

On the audio side, IP transport is pretty easy. Radio has utilized IP audio for a long time. It’s comparable to IP telephony. That seemed exotic ten years ago and now it’s ubiquitous and leverages the same sort of standards, protocols and methodologies required for IP audio.

WS: So, where are we with regard to audio over IP in the television operation?

RC: Audio IP is happening fairly fast, as you know, but I think it’s not understood how widely important it is. If you look at the Olympics and the World Cup, we’re seeing it on the contribution side. We’re seeing large scale IP networking there and I think we’ll see it in the plant.

But the thing about IP audio that gets missed is that it’s not just a transport. It’s really about control, and that it’s an easy way to automate processes. So if you want to talk to someone in IFB across the cloud, you can do that using well-established telecommunications protocols and other proven protocols. People are just beginning to pick up on this. If you have an IP routing cloud, then communications, IFB and audio signals can all be in the same routing cloud. Instead of feeding them from one system to another, they can exist in the same environment with the audio itself.

In radio that’s pretty common, as you know. The telephone interfaces are digitally delivered and controlled around the plant digitally. But for us in television, it’s really the metadata, the management and control of processes that we should be thinking about, and then the audio routing that comes along with that.

WS: We’re probably just scratching the surface of what this will mean for live production, right?

RC: Absolutely. In sports especially, we’re seeing this trend of insourcing of production. This is where some of the production is done at the venue, but there’s also the mixing and other production done back at the central studio. So instead of having your best director in the field, he or she can just drive into work and do the Detroit game from a control room in Bristol or Stamford. The economics of that is so compelling and IP is what makes that work because you can bring a whole bunch of different kinds of signals in and mix them virtually. Because the metadata can travel in the same network, you can organically create labels and markers that this is a particular microphone, this is a camera; that metadata helps us automatically create a mix. We’re seeing a little of this in college sports, and it’s probably something you’ll see more of in the universities that have a high degree of connectivity or where the production budgets are lower.

WS: Let’s switch tracks for a minute. We’ve been hearing a lot about immersive or object audio. What’s happening with that?

RC: I think object audio is coming. It’s funny. It seemed so far off, and now we already have consumer (Dolby) Atmos available and we can buy these receivers from Best Buy and Amazon. There is a certain amount of theatrical content becoming available to consumers now and I suspect folks like Amazon, HBO, Netflix, Starz and others will start streaming a wide range of Atmos material in the near future. Still, all in all, I don’t know whether immersive will drive the transition to objects as much as personalization. A big part of that personalization is accessibility and to a certain extent, there’s really a crying need for better video description and multiple language support, and I think that’s where objects used in a modular approach are very useful.

WS: Where are we at with object audio?

RC: Simple, static objects are useful now. For instance, we have to have a better solution for video descriptive and an object approach is very helpful. If we can have a stream of objects in the main program and we want to have video descriptive on top of it, we can still have surround sound in the program, and have the descriptive come on top of the program and that’s really great for people that are visually impaired. Same thing for people who are hearing impaired, to be able to turn up the dialog or turn down the effects, that’s where objects have enormous value. Obviously in immersive audio, being able to fly things around and being able to render in different ways, rather than having linear channels, that’s great. But I think the real value now is in more mundane things like alternate language.

WS: Let’s talk about headphones and immersive audio.

RC: Rendering headphones and virtualized surround headphones, that’s definitely something that makes immersive audio more relevant. I don’t remember the exact stats, but something like 70 percent of the content people consume on mobile devices is heard through headphones.

It’s a cultural phenomenon.

The gamers have had headphone surround for a while. In film post-production and sound effects editing they often employ a very sophisticated surround-simulating headphone system from Smyth Research called the Realiser. They cost a fortune, but these systems are incredibly accurate and track head movement, etc. So the technology to simulate surround in headphones is pretty mature. I think we’ll see these creeping into consumer products, and that’s what will make immersive audio more relevant. It’s not just for high-end home theaters. There’s this idea that we can probably do it in headphones, too, and we can do it in virtualized soundbars -- these surround soundbars are just starting to come out, actually, and they are mind-blowing.

WS: In practical terms, what does that mean to television?

RC: For regular day-to-day television, we’re still struggling with 5.1. So the answer is we will see maybe at first just height channels added, and we’ll see that on premium sports content, for example. Even having just the height helps a little bit. It lets you take some effects off the front so you can get a little more audience or crowd reaction in, or, say, the sound of the PA in a hockey game, and maybe in the rendering for stereo you take a little of that out. I think it’s relevant. We’ll see it ultimately because program distributors and big MVPDs like Comcast always want to offer premium content, and they’re starting to push the envelope. From a production standpoint, it’s hard to take on much more though, which is why, initially, object audio will be based on the linear 5.1 format with some enhanced personalization or immersive elements added in.

WS: Thanks, Roger, you’ve given us a lot to think about.

Roger is the Executive Director for DTV Audio Group, but don’t let his lofty title fool you. He’s a former recording engineer and he’s done more than his share of live music production for entertainment television, including work for the David Letterman, Conan O’Brien, and Jimmy Fallon shows. He shares a passion for digital audio along with other network operators and technology managers who belong to DTV Audio Group, a trade association that spans news, sports and entertainment genres and is primarily focused on practical production issues surrounding real-time programming.

Paul Picard on WheatNet-IP for Television

In this video from our friends at IABM, Wheatstone systems engineer Paul Picard talks about the WheatNet-IP Intelligent Network and its applications in television audio.

IP Audio Networking for TV... It Takes More Than a Network


WheatNet-IP is more than just an IP network that routes audio within a TV facility. It is a full system that combines a complete audio tool kit, an integrated control layer, and a distributed intelligent network that takes full advantage of IP audio.

By combining these three components seamlessly into one system, we can deliver the following:

  • A distributed network of intelligent I/O devices to gather and process audio throughout your facility
  • Control, via both hardware GPI and software logic ports, that can be extended throughout the plant as well
  • Rapid route changes via both salvos and presets to instantly change audio, mic control, and tallies between sets
  • A suite of software apps and third party devices that all communicate via the common gigabit IP interface
  • True plug 'n play scalability - devices are easily added to the IP network
  • Triggered crosspoints to create routable IFB throughout the facility

Just about any complication in post or live production is manageable using the highly routable features of this broadcast-proven IP audio routing and control system.

Click here to learn a LOT more about what we are doing with IP Audio Networking for TV...

Live Production Made Easy Using IP Audio Networking with Integrated Control

Switching between video feeds can be fairly straightforward. But switching audio feeds and creating an intercom between field reporters and the studio, not to mention setting mix-minuses – all of this can be cumbersome and time consuming. And that is where a new breed of digital mixing consoles with IP audio networking and integrated control can make all the difference.




Wheatstone wins TWO NAB TV Tech Best of Show Awards!


At this year's NAB we've introduced a new concept in IP networked audio for live and production TV. We've launched our new flagship TV audio console, the IP-64, along with our Gibraltar IP Mix Engine. Both break new ground by combining intelligent IP networking with an integrated tools and control layer to provide capabilities never before seen in the TV audio world. TV Technology saw fit to give both the IP-64 and the Gibraltar IP Mix Engine their coveted BEST OF SHOW awards!

Wheatstone’s new IP-64 large-format digital mixing console with IP networking is an excellent example of what this leading manufacturer of broadcast studio equipment is known for: a solid, intuitive console that doesn’t require a week of console school to learn how to operate.

The new Gibraltar IP Mix Engine provides Wheatstone’s line of audio consoles with direct connectivity into WheatNet-IP, an AES67 compatible IP audio network with all the necessary broadcast audio tools and controls integrated into one robust, distributed network.



Wheatstone At NAB: Booth C755

Arrived on Friday - some VERY special boxes containing something we've been working on right up to the last minute.
Make sure you visit Wheatstone/Audioarts in Booth C755. We're in Las Vegas. We will be ready and mighty happy to see you walking into the booth!
A few more images from our first day on the show floor:


Want to see more? The full photo galleries are here, updated as the show goes on: NAB 2015 Photos

AES67 ‘Another Arrow in the Quiver’


By Steve Harvey/TV Technology

Wheatstone’s Phil Owens discusses the advantages of audio-over-IP

LOS ANGELES—As computer scientist Andrew Tanenbaum famously observed in the mid-1990s, “The nice thing about standards is that you have so many to choose from.” Fast forward two decades and the audio industry is poised to implement a new standard, AES67, which is intended to allow the exchange of data between disparate IP audio networks.

The AES established a working group in 2010, designated X-192, after recognizing that various incompatible audio networking schemes were already in existence. The eventual standard, AES67-2013 (published in September 2013), enables interoperable high-performance streaming audio-over-IP, or AoIP.

“Our take on AES67 is that we welcome it; we have made it so that our system is compatible with it,” said Phil Owens, head of Eastern U.S. Sales for Wheatstone in New Bern, N.C. But, he added, “We’re thinking of it as ‘another arrow in the quiver.’”

As Owens noted, there are already numerous ways to get in and out of WheatNet-IP, Wheatstone’s Gigabit Ethernet network, and AES67 can now be added to that list. “A person can put together a Wheatstone network that includes I/O from playout devices equipped with our software drivers as well as analog, AES, MADI, HD-SDI and—now— AES67. It’s equivalent to the blue M&M that everyone is talking about, because it’s new. But if you step back, it’s just another way to get in and out of our system.”


WheatNet-IP, as a layer 2 protocol, takes full advantage of the economy of scale and the resources of the massive IT industry, which means that cost and reliability need not be a barrier to adoption. “Our audio will travel through an off-the-shelf switch that you can buy at Staples,” Owens stressed.

Plus, he noted, IT equipment manufacturers have long since developed mechanisms ensuring mission-critical reliability. “Redundancy is built into your network topology in the way that you connect your switches and the network protocols that you implement. So we’re not only taking advantage of the cost effectiveness of standards-based switches, but we’re taking advantage of the IT capability that’s been built up over the years to handle large networks.”

Further, while older audio technologies required special optical network cards to access fiber when distances exceeded the capabilities of Ethernet, now, “Even a smaller Cisco switch has SFPs [small form-factor pluggable] for fiber transceivers. Laying down a switch on each end with fiber between them is child’s play,” said Owens.



AoIP has a significant advantage in capacity over another audio transport standard, AES10 or MADI, which has enjoyed something of a resurgence since its original publication in 1991. “We did a test recently to see how many audio channels we could cram down a Gigabit pipe using our system,” Owens reported. “The number turned out to be 428. You’re moving from 64 channels of MADI to over 400 channels using Gigabit IP audio. That’s quite an increase in possible functionality.”
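Owens’ figure lines up with a back-of-the-envelope calculation. The sketch below is a rough estimate only, not Wheatstone’s actual framing math; it assumes 48 kHz, 24-bit PCM and, hypothetically, about half of the gigabit link lost to packet headers and operating headroom:

```python
SAMPLE_RATE = 48_000        # Hz
BIT_DEPTH = 24              # bits per sample
LINK = 1_000_000_000        # 1 Gbit/s Ethernet

# Raw audio payload per channel, before any packetization overhead.
raw_per_channel = SAMPLE_RATE * BIT_DEPTH   # 1,152,000 bit/s
raw_channels = LINK // raw_per_channel      # ~868 channels, payload only

# Real AoIP streams carry IP/UDP/RTP headers and run well below line
# rate; the 50% usable-throughput figure here is an assumption for
# illustration, not a measured number.
usable = int(raw_channels * 0.5)            # ~434, in the ballpark of 428
```

The point is simply that even with generous allowances for overhead, a single gigabit link dwarfs MADI’s 64-channel ceiling.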

WheatNet-IP I/O devices are known as BLADEs, continued Owens, and enable various combinations of the supported connections to be introduced to the network. But Wheatstone’s network offers much more than just audio transport.

“In addition to getting I/O capability out of a BLADE you get other functions, like background mixing, audio processing, silent sensing and signal processing, the ability to do automated switching based on the status of a certain cross-point or based on silence detection,” he elaborated. “It gives you a Swiss Army knife of tools that you can use. You not only get a distributed audio network where you can deploy these BLADEs wherever audio is needed, but you get a whole toolkit of functions that are very handy in the audio environment.”

That list of functions is very appealing to engineers, whether in television or radio, according to Owens. “We probably have 50 TV stations that are running BLADEs,” he said. “The reason is that they like that list, and the ability to plop a BLADE down in their newsroom and be able to send audio out to it and get audio from it.

“There are two major TV groups in the country that between them probably own more stations than anyone else that have standardized on IP and our BLADE system. They’ve done it for the reasons that I mentioned— they like that distributed control and audio gathering in a system where you can also put a stack of BLADEs in your tech center and have the equivalent of a large mainframe router.”


Perhaps the most important benefit of AES67 will be the “network effect,” otherwise known as Metcalfe’s law. The term, initially applied to the telecommunications industry, recognizes that the value of a network is greater than the sum of its parts once it reaches a critical mass of users.

“There will be peripherals that could add functionality to our system,” Owens acknowledged. “For that reason we feel good about it and that’s why we incorporated it into our latest BLADE release, version 3, which we introduced at the end of last year.”

Yet, as AES67 currently stands, those third-party peripherals can be integrated only so far, since the standard lacks a control protocol, such as that incorporated into WheatNet-IP. “[AES67] is not going to come into its own until the control part gets added,” said Owens. To that end, an AES working group, X-210, was formed in late 2012 to develop a standardized protocol based on Open Control Architecture, the open standard communications protocol.

Ultimately, AES67 offers an insurance policy, said Owens, but it’s no substitute for a comprehensive system such as WheatNet-IP. “People will feel better about implementing a system that they know conforms to standards,” he said. “But it will never be the be-all and end-all of audio connections for the main audio gear, because that’s what we’ve all worked so hard on when we created our individual systems. Wheatstone has a top to bottom integrated system that includes consoles, routers, and peripherals that interoperate seamlessly over IP, but we can also envision a day when we will use a truly boundaryless network of things. AES67 is a move in that direction.”

Reproduced with permission from TV Technology

Loudness Control: 3 Things to Watch

Here are three critical things to watch on the mixing desk and what you need to know about them for effective loudness control.

1. VU indicator. The VU meter (now in digital bar graph form) has been around for 80 years for a reason. It’s predictable, with predictable integration times and predictable release times so you can predictably read volume units. Remember it is an averaging meter and the peaks are far higher than indicated. For this reason you can expect to have about 20dB of audio headroom above 0dBVU to encompass them.

2. Peak level indicator to read the transient peaks of the signal. This indicator tells you if peak levels are in danger of overloading the dynamic headroom limitations of the console. The clipping point is usually at 0dBFS. “Clipping equals distortion, so don’t go there unless you absolutely have to. Stay within a reasonable gain structure that is not going to cause distortion,” says Steve Dove, Wheatstone Minister of Algorithms. Peak signal levels run usually at or above -20dBFS, with transient peaks kicking up to about -6dBFS occasionally.
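Since these are all plain dB figures, the headroom left before digital clip is a simple difference from 0 dBFS. A minimal illustration (the helper function name is ours, not part of any console API):

```python
def headroom_db(peak_dbfs: float) -> float:
    """Headroom remaining before the 0 dBFS digital clipping point."""
    return 0.0 - peak_dbfs

assert headroom_db(-6.0) == 6.0    # occasional transient peak: 6 dB to spare
assert headroom_db(-20.0) == 20.0  # nominal program peak level: plenty of room
```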


3. Loudness indicator for compliance with the ITU BS.1770-3 and similar television loudness standards. This indicator came about initially in response to the need to assess and regulate the loudness of adverts compared to regular programming. The Loudness Unit Full Scale (LUFS) or Loudness K-weighted Full Scale (LKFS) measurement shows the averaged loudness level of audio over time, usually much longer than that of a VU meter. “Ideally, you should measure at a very long integration time (30 seconds), because that would be most accurate. But if you need to know fairly quickly if you’re going to be over the top or too far under, then you might want to go for a shorter integration time of, say, 3 or 10 seconds,” says Dove. The average loudness target level is -24 LKFS or -23 LUFS, depending on your location. By the way, you can’t miss this on a Wheatstone audio console – the LKFS/LUFS numbers are two inches high on the display screen. One LU (loudness unit) is equivalent to 1dB, so there's a direct correlation between how far the meter says you're over/under and how far you move a fader to compensate.
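Because one LU equals 1 dB, translating a loudness reading into a fader move is simple arithmetic. A small sketch (a hypothetical helper, defaulting to the US -24 LKFS target):

```python
def fader_correction_db(measured_lkfs: float, target_lkfs: float = -24.0) -> float:
    """Return the gain change, in dB, needed to bring the measured
    average loudness to the target. 1 LU corresponds to 1 dB, so the
    correction is just the difference."""
    return target_lkfs - measured_lkfs

# A segment measuring -21 LKFS is 3 LU hot against a -24 LKFS target:
assert fader_correction_db(-21.0) == -3.0   # pull the fader down 3 dB
assert fader_correction_db(-27.0) == 3.0    # push it up 3 dB
```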

The Scoop on Codecs for IP Audio

Using the Internet for audio distribution makes sense, but the problem is a little like the holiday rush at the Post Office.

There are simply too many packets of data for the pipeline.


Live and Vocal Part One: First, Get It Sounding Right

by Steve Dove
Wheatstone Minister of Algorithms

There's a big difference between what it takes to get live voice straight to air, and what the sound engineer needs to do for audio that will be post-produced. In the latter case, it’s always a good idea to just concentrate on getting it all down cleanly with good consistent levels and minimal processing. The boys in post-production will definitely not thank you if they have to try and unwind heavy EQ you wound in, or deal with irreversible deep compression.

First, go into the studio and hear what they actually sound like, both their normal conversational voices and their "on" persona. This is your target, not some arbitrary notion of what they ought to sound like.

Microphone techniques in TV are, charitably, non-optimal and driven by the visuals. Now, tie-clip mics actually sound a lot better than we have a right to expect, but they are (usually) omnidirectional, poorly located on the chest, tend to hear a lot we'd rather they didn't, and what they do hear is colored. Over the years the mic manufacturers have attempted to mitigate their shortcomings, but there's still work you can do with the tools to hand. A modern digital TV audio console such as a Wheatstone console has all the necessary audio tools on board, equal to or superior to those found in the best recording plug-ins and the like. They're there for good reason, so let’s get started.

Read More

Live and Vocal Part Two: Now, Get It Sounding Great

by Steve Dove
Wheatstone Minister of Algorithms


The most basic, and arguably the most powerful, tool for getting vocals to sound good is equalization. It has two primary uses: correcting errors and artistic effect. Compression and limiting can also be useful for adjusting vocals, as I cover in some detail below.

Read More

It’s a MAD, MAD, MADI World

Among its many uses, MADI can act as a common transport mechanism between two systems that use different native formats. We have a MADI interface that seamlessly integrates the WheatNet-IP audio network into an existing Wheatstone TDM router system so you can have the best of all worlds!

Who can tell us what MADI stands for? Anyone?

We hear crickets...

But don’t lose track of how useful MADI can be to broadcasters. The list is fairly long, and getting longer. After all, there are very few alternatives for sending up to 64 channels of digital audio (at a 48 kHz sample rate) over one 75-ohm coaxial cable. Not only does this AES digital audio standard (AES10, the Multichannel Audio Digital Interface) make it possible to send a lot of channels through hundreds of feet of cable, it delivers lossless audio through all those channels. That lends itself to some practical applications.
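That 64-channel ceiling falls straight out of the AES10 framing arithmetic: at 48 kHz, 64 subframes of 32 bits each just fit under the link's usable capacity (a 125 Mbit/s line rate, 4B/5B-coded down to 100 Mbit/s of data). A quick sanity check:

```python
CHANNELS = 64
SUBFRAME_BITS = 32           # AES10 subframe: 24 audio bits plus status/framing
SAMPLE_RATE = 48_000         # Hz

payload = CHANNELS * SUBFRAME_BITS * SAMPLE_RATE   # 98,304,000 bit/s
LINK_CAPACITY = 100_000_000  # 125 Mbit/s line rate after 4B/5B coding

assert payload == 98_304_000
assert payload <= LINK_CAPACITY   # 64 channels just squeeze onto one coax run
```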

Read More