TV Audio Consoles

IP & TDM Networking


Wheatstone for TV Audio: Your audio console needs to be one smart cookie to stay ahead of live production these days. Complicated controls are out. So is networking for the sake of networking. Your audio console needs to understand your board op, and its audio network needs to know its way around a live sportscast, newscast or show. That’s why our new IP networked consoles don’t require console school to learn, and why the WheatNet-IP audio network includes intelligent routing, integrated control and a complete audio toolkit at every node. Basic audio tools like audio processing and mix-minus creation at every node combine with control functions so you can send the right mix-minus to a news anchor’s headset at the last minute, change news sets unexpectedly, set up IP intercoms as needed, and stay flexible, yet productive, during any production scenario. Wheatstone offers its newer line of consoles with WheatNet-IP networking and its established line of TDM-based consoles with Gibraltar networking. Both are designed specifically for television by a company trusted by broadcasters around the globe.

Click to download our NEW TV PRODUCTS FOR 2015 Brochure

Notes from an ATSC 3.0 Boot Camp Survivor

Dave Guerrero is the VP of Technical Services for PBS 39 in Bethlehem, Pennsylvania, although he’s served under just about every title in the broadcast business, from production engineer to GM for Videotek, the Test and Measurement Group within Harris Broadcast. Here are his astute observations on cloud storage, immersive audio, single frequency networks and other cool developments that could result from ATSC 3.0.

WS: ATSC 3.0 boot camp. That sounds serious.

DG: Yeah. They just released the first physical layer working draft document, called ‘System Discovery and Signaling,’ which defines the system bootstrap – the first piece that says how this new ATSC will connect to the TV set or your iPad – and defines what’s in the broadcast stream. This meeting was all about that, and about the other layers of information that will be carried. This was actually the annual technical boot camp and broadcast meeting, and a number of broadcasters representing all facets of the industry got together to talk about the standards. The premise of these meetings is to make sure the standards provide an avenue from today’s operations, via the evolving technology, to the next generation of broadcast business opportunities, thereby ensuring our future as broadcasters. The last thing any of us wants is to create a ‘next-gen’ technology that puts a never-ending demand on capital expenditures, given that the future broadcast business will require new solutions matched to the way viewers consume content.


WS: ATSC 3.0 is essentially about IP, I take it.

DG: Essentially, yes. A big part of the standard is the ability to produce a stream as data, or IP. We’re switching to IP like everything else. Today, how do I get to DirecTV? IP. How do I get files? IP. How do I get DISH Network? IP. How do I get to the cable company? IP. How do I get over the air? That’s different, that’s RF. But ATSC 3.0 is going to converge our digital broadcast streams into IP. These developments will increase the effective bandwidth from 19.4 Mb/s to at least 24 Mb/s in a 6 MHz channel (significantly more could be possible from optimized installations). Thanks to new compression algorithms and so forth, you’re dealing with a system capable of supplying more video content as well as other streaming services. Many station groups have not determined what new business opportunities can be had with this system. I don’t know exactly what my station will want to do with it, but it is great to know that we will now have new opportunities to support the future needs of digital media in our community! We might want one picture or six pictures and/or other services related to the programs... or maybe not. About the only thing we know for sure: this means more content being produced and transmitted.
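
For a rough sense of the bandwidth gain Dave describes, here is a quick back-of-the-envelope check in Python, using the round figures above (a sketch only; real payloads depend on the modulation and coding chosen):

    # Spectral efficiency of a 6 MHz channel, before vs. after ATSC 3.0
    channel_bw_hz = 6e6          # standard US television channel
    atsc1_bps = 19.4e6           # ATSC 1.0 payload cited above
    atsc3_bps = 24e6             # conservative ATSC 3.0 payload cited above

    print(f"ATSC 1.0: {atsc1_bps / channel_bw_hz:.2f} bits/s/Hz")  # ~3.23
    print(f"ATSC 3.0: {atsc3_bps / channel_bw_hz:.2f} bits/s/Hz")  # ~4.00
    print(f"Payload gain: {atsc3_bps / atsc1_bps - 1:.0%}")        # ~24%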

WS: ATSC 3.0 isn’t strictly about audio and video as we know it, is it?

DG: Not really. It’s about files and information in the form of data. It’s all about real-time streaming and how to get data from the broadcast station to the viewer; that’s the gist of it. It’s not really video with audio that happens to be wrapped around it. I, as a broadcaster, am now thinking about what I’m putting on the stream that somebody can pick up on a phone or a smart TV set. So a station can stream video out like normal, but included in the container of the broadcast emission can be a stream of alternate video or more information related to the broadcast, or related to your webpage, in an iPad app that pops up as needed, either as a picture or a sound or as textual information. The problem with all this cool stuff is that it’s not going to be 100 percent backward compatible. That means all the TVs today will need some sort of black box gadget to work, which could be in the form of a little box (stick) that plugs into an HDMI (or USB) port.

WS: But there’s going to be more audio channels as a result of ATSC 3.0…

DG: It’s a lot. Let’s start with 7.1 plus 4 channels of audio (12 ‘channels’), which will be compatible with 4K and eventually 8K – that super surround sound we all talk about. Think of the availability of 40 different sound sources, rendered by the viewer’s receiving equipment as determined by a profile the viewer sets up per broadcast program. Every time I think about it I wonder where I’ll put 12 speakers in my living room! But it’s probably not going to be like that; there are new speaker arrays that can project the overhead sounds from the side speakers or from a bar (more new technology...). One of the demonstrations I saw recently was about this idea of overhead sound. Say you’re watching a movie and you see a person on the top of a building. We’ll be able to pan his voice so it sounds like it’s coming from above. There’s also this idea of the dialog being a completely separate track, mixed in the receiving device. This allows the viewer to actually adjust the mix of what he is hearing: international sound versus dialog, different languages over the same international sound, audio objects that can be added into the mix, various point-of-view music, that sort of thing. I doubt we even know all the uses yet.

WS: I suppose the thing we know for sure is that it will change how much and how production is done.

DG: True. We really need to produce everything in IP, even audio. PBS 39 is almost 100 percent file based. Everything comes in via FTP or some other file transfer, except for audio production. It would be nice if audio files could go right into the editor along with the video files. Think about a typical magazine show that we do. We set up some mics with a host and a couple of people talking, and what do we do? We mix them together in the editing room and then take that AES output and embed it into the mixed video. That works fine, until I look at my workflow and say, ‘You know, my camera is IP, so I can send that IP stream right into my server, and my on-air server can play that out to my multiplexer and to my transmitter – all I need is the audio.’ See, my server handles IP, and audio is a separate stream. So why can’t the audio that I get from my edit room go right into the server via IP instead of AES?


WS: Well, as you know, we’re all for that. In your case, you could probably keep your Gibraltar (TDM) network and just add a Wheatstone BLADE (IP audio node) to make that conversion to IP.

DG: That’s exactly what we’re talking about doing. This adds the flexibility of moving audio as either analog, AES, embedded or IP. I keep going back to the thought of how we are going to supply a number of audio channels within the physical constraint of a cable per channel. The simple solution is audio over IP, addressable by the transmission hardware and accessible by the viewer in a simple-to-use interface.

WS: We’re curious: what will ATSC 3.0 mean as far as cloud storage and the like?

DG: It’s probably going to make it a lot easier to centralize content. In our case, almost half of what we air as PBS stations (like most broadcast networks) is the same content. With this IP model, you can put all content (web, mobile or broadcast) in one place and all network stations can download and play back programming when they want it. That’d eliminate a lot of local storage – and cost. I’ve also thought that ATSC 3.0 might make it possible for all of us to connect stations in a single frequency network, similar to the European DVB-T2 standard. This would allow a larger area to be served by one frequency, while carrying the broadcast content from a number of stations in the same region. These are the things that could really change our business models in the next couple of years. It’s within reach, and it will happen not in my grandchildren’s time but much, much sooner.

WS: Interesting times for all of us, that’s for sure.

DG: Yep. Everyone wants to own their own little world. That’s the old way. The new way is, I just want to control what I’m producing. Now it’s all in the cloud – that’s the concept: I don’t have to own 15 million dollars’ worth of equipment to get it here.

WS: Thanks, Dave.

Editor’s note: WLVT-TV’s studio facility is networked using Wheatstone’s Gibraltar TDM audio router system.

Andy Calvanese Discusses WheatNet-IP for Television


Wheatstone's VP/Technology, Andy Calvanese, discusses some of the advantages of the seamless, built-in control layer of the WheatNet-IP audio-over-IP network when used in television applications.

Oregon State's WheatNet-IP TV Audio System


A $3.1 million, 14,000-square-foot media center allows students at Oregon State University in Corvallis, Ore., to get hands-on experience producing content for television, FM radio, the Web, social media, newspaper and magazine. This state-of-the-art operation could serve as a model for broadcasters looking to integrate media. Shown in the photo is Control Room A. (Photo: Erik Utter Associates)


By Phil Kurz

TVNewsCheck

Memorial Day weekend is usually an exciting time for students wrapping up their spring semesters and heading out for summer vacation, but this year’s holiday weekend was particularly stirring for media-savvy students at Oregon State University in Corvallis, Ore.

That’s because Friday, May 22, marked the inauguration of TV production at a new 14,000-square-foot media center located on the fourth floor of the also-new Student Experience Center near the university's student union.

The modern, multifaceted media center is a largely open, collaborative environment where students will get hands-on experience producing content for television, FM radio, the Web, social media, newspaper and magazines, and it could serve as a model for broadcasters looking to integrate media in a collegial setting.

The inaugural production, a 25-minute live preview of one of the musical groups scheduled to perform at this weekend’s Oregon State Battle of the Bands, was a test-drive before the $3.1 million facility shifts into high gear for the fall semester.

“It was an opportunity to pull the Ferrari out of the garage and take it for a spin around the block,” says Bill Gross, assistant director of the Orange Media Network, which operates all the university's media.

“We wanted our seniors to put their signature touch on the facility so they could claim they did the first live show out of the new facility.”

The new facility replaces a surplus dormitory purchased from student housing in the 1970s for media operations. “Essentially, it was a rabbit’s warren of small rooms,” says Michael Henthorne, executive director of Memorial Union and Educational Activities at the university.

“We are shifting from what traditionally has been an individualized organizational structure for each product to now being more of a single news organization with multiple means of reaching its clientele,” he says.

The facility consists of two studios and two control rooms as well as a common newsroom where student journalists working on stories for their newscast can collaborate with their colleagues working on content for KBVR-FM, the student newspaper, quarterly magazine, website and social media. The retooled KBVR-TV, which has been on a bit of a hiatus from its regular program schedule while relocating to the new digs, will begin producing a live newscast five nights a week as part of its 24/7 program lineup on a community Public, Educational and Government Access Channel provided by Comcast, says Gross.

Besides news, KBVR-TV airs public affairs, variety and music shows, educational programming and sports. The station also simulcasts online as a live Internet stream.

The facility was designed to make it easy to reconfigure and share production equipment, says Erik Utter, director of engineering and president of Erik Utter Associates, the Seattle-based video engineering and consulting firm responsible for its planning. For instance, the six Grass Valley LDX HD studio cameras — one of which is on a crane — can be moved between studios or broken down and transported for live productions from around the campus, he says.

Similarly, the three M/E busses of the Ross Video Acuity production switcher can be shared between the new facility’s two production control rooms, each of which is equipped with an Acuity control surface. Grass Valley Kaleido multiviewers also can be shared and reconfigured on the fly to display every video source in the facility as needed, Utter says.

Another example of the facility’s flexibility is the Wheatstone WheatNet audio-over-IP network. “They are completely reconfiguring their resources depending upon whether the production is a newscast, a music production or a production in concert with the FM station,” Utter says.

“They are reconfiguring that on a production-by-production basis, so the audio-over-IP was absolutely critical to having that ability.” Rounding out the lineup of news production technology in the control rooms are a Wheatstone Dimension Three audio console and Ross Video Xpression graphics and titling.

HD-SDI video is routed to the control rooms and throughout the facility via an 80-by-80 Grass Valley NVision routing switcher. Eight PTZ remote-controlled cameras provide live shots from the newsroom, rooftop, radio studios and elsewhere around the facility. Master control playout is handled by a Tightrope Media Systems server and a Ross Video MC1 master control system. Live video streams are encoded on Elemental servers.

A large, open area on the fourth floor takes the place of separate newsrooms for each medium. The common work area, dubbed “the bullpen” by students, offers a large media lab for video editing on Apple Final Cut Pro; lounge seating for spontaneous editorial meetings; and 30 Ross Video Inception newsroom computer system seats for assignment editors, reporters and producers.

Newscasts are to be run out of the control rooms under MOS control from Inception, and reporters in the field have access to the newsroom system on their laptops and mobile devices via a virtual private network, Utter says.

Inception was a good fit for the facility because it was conceived as a single software application supporting social media, online, TV, radio and print, not simply as a TV news system with Web and print modules bolted on, he says.

“It has very simplified publishing tools to publish to TV, print, Facebook, the Web or whatever.”

Inception is tied into a new Oregon State EditShare media asset management system, which is used not only by the Orange Media Network but also the athletic department and campus media services, says Gross. For KBVR-TV, the MAM provides eight channels of studio playout and recording. For ENG, students will shoot stories with eight new Sony NX-5 HD camcorders as well as with eight existing Canon EOS Rebel DSLR cameras, Gross says. Ross Video’s Inception Social Media Management will tie social media into newscasts and other programs by enabling live Facebook and Twitter polling to generate Xpression graphics.

Support for social media was a must-have requirement for the paid student managers of Orange Media Network who had a major hand in designing the new facility, Utter says.

Unlike many other university media operations, funding for the Oregon State media facility as well as the $42 million Student Experience Center comes from student activity fees. In 2010, a student referendum to fund the project passed by nearly a 3-to-1 margin, according to Henthorne.

While the annual turnover of student managers during the three-year design phase was a bit of a challenge, Utter says any drawbacks were more than offset by the fresh perspective and effort students brought to the process. That’s not surprising because not only do they have skin in the game, but many also are motivated to produce media with tools that will make them more marketable after graduation.

“Students want a place to hone their skills, to collaborate and leave the institution with state-of-the-art experience,” Henthorne says.

To stay up to date on all things tech, follow Phil Kurz on TVNewsCheck’s Playout tech blog here. And follow him on Twitter: @TVplayout.


Trends Beyond 5.1. We Take it One Track at a Time.

For those of you who are trying to balance all the latest audio trends and having a hard time adjusting the EQ in your head, we take it one track at a time in this discussion with Roger Charlesworth, the Executive Director of DTV Audio Group.

WS: Where to start? There are so many trends that television broadcasters should be paying attention to.

RC: That’s the thing. This is a period of rapid change. We started the DTV Audio Group about six years ago as a vehicle for network operations and engineering management to share insights on the practical application of emerging television audio technology. At the time, DTV loudness was a big issue. Since then, the rate at which the technological landscape is evolving has accelerated. Stuff that we thought would take decades is happening in a matter of years. For example, it’s been fairly recent that we have had these experiences with our smart phones and tablets. You can be watching a movie in your living room and get up to go to your bedroom, and you can bring that movie with you. It’s a seamless transition. Manufacturers are working hard on all this and they’ve got some good technology. The transition to mobile is huge, but …


WS: But?

RC: But… what’s behind that is this universal transition to streaming. Whether it is streamed to a fixed device or mobile device, it all starts to look the same. And what we’ve learned is that even the cable companies are transitioning to the stream model inside the cable plant. So linear channels are going away, and they’re able to use more efficient video codecs like MPEG-4 and they’re able to roll out new services like 4K more efficiently when they’re streamed. Everything is being streamed. That means things can move faster, change can happen faster; it’s not hardware bound.

And all of this leverages core IT technology, which is the biggest trend. That is what’s accommodated things like mobile, whether it’s streamed in your living room or to your phone.

WS: We’ve heard that this NAB could be the year of IP video switchers. What do you know?

RC: Could be, because when we look at 4K and the economics of packet switching as opposed to a traditional video switcher, it just becomes so compelling. When you get up to these higher data rates the IP switches are an order of magnitude more cost efficient than traditional SDI. I think people that play in that space are going in this direction; those companies see the value. Our member networks tell us they have already started this transition and within five years pure IP video infrastructure will be the norm for the broadcast plant.

On the audio side, IP transport is pretty easy. Radio has utilized IP audio for a long time. It’s comparable to IP telephony. That seemed exotic ten years ago and now it’s ubiquitous and leverages the same sort of standards, protocols and methodologies required for IP audio.

WS: So, where are we with regard to audio over IP in the television operation?

RC: Audio IP is happening fairly fast, as you know, but I think it’s not understood how widely important it is. If you look at the Olympics and the World Cup, we’re seeing it on the contribution side. We’re seeing large scale IP networking there and I think we’ll see it in the plant.

But the thing about IP audio that gets missed is that it’s not just a transport. It’s really about control, and that it’s an easy way to automate processes. So if you want to talk to someone in IFB across the cloud, you can do that using well-established telecommunications protocols and other proven protocols. People are just beginning to pick up on this. If you have an IP routing cloud, then communications, IFB and audio signals can all be in the same routing cloud. Instead of feeding them from one system to another, they can exist in the same environment with the audio itself.

In radio that’s pretty common, as you know. The telephone interfaces are delivered and controlled around the plant digitally. But for us in television, it’s really the metadata, the management and control of processes, that we should be thinking about, and then the audio routing that comes along with that.

WS: We’re probably just scratching the surface of what this will mean for live production, right?

RC: Absolutely. In sports especially, we’re seeing this trend of insourcing production. This is where some of the production is done at the venue, but the mixing and other production are done back at the central studio. So instead of having your best director in the field, he or she can just drive into work and do the Detroit game from a control room in Bristol or Stamford. The economics of that are so compelling, and IP is what makes it work, because you can bring a whole bunch of different kinds of signals in and mix them virtually. Because the metadata can travel in the same network, you can organically create labels and markers – this is a particular microphone, this is a camera – and that metadata helps us automatically create a mix. We’re seeing a little of this in college sports, and it’s probably something you’ll see more of in the universities that have a high degree of connectivity or where the production budgets are lower.

WS: Let’s switch tracks for a minute. We’ve been hearing a lot about immersive or object audio. What’s happening with that?

RC: I think object audio is coming. It’s funny. It seemed so far off, and now we already have consumer (Dolby) Atmos available and we can buy these receivers from Best Buy and Amazon. There is a certain amount of theatrical content becoming available to consumers now, and I suspect folks like Amazon, HBO, Netflix, Starz and others will start streaming a wide range of Atmos material in the near future. Still, all in all, I don’t know whether immersive will drive the transition to objects as much as personalization. A big part of that personalization is accessibility; there’s really a crying need for better video description and multiple language support, and I think that’s where objects used in a modular approach are very useful.

WS: Where are we at with object audio?

RC: Simple, static objects are useful now. For instance, we have to have a better solution for video description, and an object approach is very helpful. If we can have a stream of objects in the main program and we want to have video description on top of it, we can still have surround sound in the program and have the description come on top of it, and that’s really great for people who are visually impaired. Same thing for people who are hearing impaired: being able to turn up the dialog or turn down the effects, that’s where objects have enormous value. Obviously in immersive audio, being able to fly things around and to render in different ways, rather than having linear channels, that’s great. But I think the real value now is in more mundane things like alternate language.

WS: Let’s talk about headphones and immersive audio.

RC: Rendering headphones and virtualized surround headphones, that’s definitely something that makes immersive audio more relevant. I don’t remember the exact stats, but for something like 70 percent of the content people are consuming on mobile devices, they’re listening through headphones.

It’s a cultural phenomenon.

The gamers have had headphone surround for a while. In film post-production and sound effects editing they often employ a very sophisticated surround-simulating headphone system from Smyth Research called the Realiser. They cost a fortune, but these systems are incredibly accurate and track head movement, etc. So the technology to simulate surround in headphones is pretty mature. I think we’ll see it creeping into consumer products, and that’s what will make immersive audio more relevant. It’s not just for high-end home theaters. There’s this idea that we can probably do it in headphones, too, and we can do it in virtualized soundbars – these surround soundbars are just starting to come out, actually, and they are mind-blowing.

WS: In practical terms, what does that mean to television?

RC: For regular day-to-day television, we’re still struggling with 5.1. So the answer is we will see maybe at first just height channels added, and we’ll see that on premium sports content, for example. Even having just the height helps a little bit. It lets you take some effects off the front so you can get a little more audience or crowd reaction in, or say the sound of the PA in a hockey game, and maybe in the rendering for stereo you take a little of that out. I think it’s relevant. We’ll see it ultimately because program distributors and big MVPDs like Comcast always want to offer premium content, and they’re starting to push the envelope. From a production standpoint, though, it’s hard to take on much more, which is why, initially, object audio will be based on the linear 5.1 format with some enhanced personalization or immersive elements added in.

WS: Thanks, Roger, you’ve given us a lot to think about.

Roger is the Executive Director for DTV Audio Group, but don’t let his lofty title fool you. He’s a former recording engineer and he’s done more than his share of live music production for entertainment television, including work for the David Letterman, Conan O’Brien, and Jimmy Fallon shows. He shares a passion for digital audio along with other network operators and technology managers who belong to DTV Audio Group, a trade association that spans news, sports and entertainment genres and is primarily focused on practical production issues surrounding real-time programming.

Paul Picard on WheatNet-IP for Television


In this video from our friends at IABM, Wheatstone systems engineer Paul Picard talks about the WheatNet-IP Intelligent Network and its applications in television audio.

IP Audio Networking for TV... It Takes More Than a Network


WheatNet-IP is more than just an IP network that routes audio within a TV facility. It is a full system that combines a complete audio tool kit, an integrated control layer, and a distributed intelligent network that takes full advantage of IP audio.

By combining these three components seamlessly into one system, we can deliver the following:

  • A distributed network of intelligent I/O devices to gather and process audio throughout your facility
  • Control, via both hardware GPI and software logic ports, that can be extended throughout the plant as well
  • Rapid route changes via both salvos and presets to instantly change audio, mic control, and tallies between sets
  • A suite of software apps and third party devices that all communicate via the common gigabit IP interface
  • True plug 'n play scalability - devices are easily added to the IP network
  • Triggered crosspoints to create routable IFB throughout the facility

Just about any complication in post or live production is manageable using the highly routable features of this broadcast-proven IP audio routing and control system.
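
As an illustration of what routable control can look like in practice, here is a minimal sketch of firing a salvo from a script. Everything in it – the SalvoClient class, the port number, the salvo name – is hypothetical; this is not Wheatstone's actual control API, just the shape of the idea:

    # Hypothetical sketch: fire a routing salvo when the newscast changes sets.
    # SalvoClient, its port, and the salvo name are invented for illustration.
    class SalvoClient:
        """Stand-in for a router control connection."""
        def __init__(self, host: str, port: int = 50000):
            self.host, self.port = host, port

        def fire_salvo(self, name: str) -> None:
            # A real client would open a session and send the router's native
            # salvo command; here we only log the intent.
            print(f"[{self.host}:{self.port}] firing salvo '{name}'")

    router = SalvoClient("blade-tech-center")
    # One call re-routes audio, mic control, and tallies for the new set:
    router.fire_salvo("NEWS_SET_B")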

Click here to learn a LOT more about what we are doing with IP Audio Networking for TV...


Live Production Made Easy Using IP Audio Networking with Integrated Control

Switching between video feeds can be fairly straightforward. But switching audio feeds and creating an intercom between field reporters and the studio, not to mention setting mix-minuses – all of this can be cumbersome and time consuming. And that is where a new breed of digital mixing consoles with IP audio networking and integrated control can make all the difference.



Wheatstone wins TWO NAB TV Tech Best of Show Awards!


At this year's NAB we've introduced a new concept in IP networked audio for live and production TV. We've launched our new flagship TV audio console, the IP-64, along with our Gibraltar IP Mix Engine. Both break new ground by combining intelligent IP networking with an integrated toolset and control layer to provide capabilities never before seen in the TV audio world. TV Technology saw fit to give both the IP-64 and the Gibraltar IP Mix Engine its coveted BEST OF SHOW award!

Wheatstone’s new IP-64 large-format digital mixing console with IP networking is an excellent example of what this leading manufacturer of broadcast studio equipment is known for: a solid, intuitive console that doesn’t require a week of console school to learn how to operate.

The new Gibraltar IP Mix Engine provides Wheatstone’s line of audio consoles with direct connectivity into WheatNet-IP, an AES67 compatible IP audio network with all the necessary broadcast audio tools and controls integrated into one robust, distributed network.


Wheatstone At NAB: Booth C755

Arrived on Friday: some VERY special boxes containing something we've been working on right up to the last minute.
Make sure you visit Wheatstone/Audioarts in Booth C755 in Las Vegas. We'll be ready and mighty happy to see you walking into the booth!
A few more images from our first day on the show floor:


Want to see more? The full photo galleries are here, updated as the show goes on: NAB 2015 Photos

AES67 ‘Another Arrow in the Quiver’


By Steve Harvey/TV Technology

Wheatstone’s Phil Owens discusses the advantages of audio-over-IP

LOS ANGELES—As computer scientist Andrew Tanenbaum famously observed in the mid-1990s, “The nice thing about standards is that you have so many to choose from.” Fast forward two decades and the audio industry is poised to implement a new standard, AES67, which is intended to allow the exchange of data between disparate IP audio networks.

The AES established a working group in 2010, designated X-192, after recognizing that various incompatible audio networking schemes were already in existence. The eventual standard, AES67-2013 (published in September 2013), enables interoperable high-performance streaming audio-over-IP, or AoIP.

“Our take on AES67 is that we welcome it; we have made it so that our system is compatible with it,” said Phil Owens, head of Eastern U.S. Sales for Wheatstone in New Bern, N.C. But, he added, “We’re thinking of it as ‘another arrow in the quiver.’”

As Owens noted, there are already numerous ways to get in and out of WheatNet-IP, Wheatstone’s Gigabit Ethernet network, and AES67 can now be added to that list. “A person can put together a Wheatstone network that includes I/O from playout devices equipped with our software drivers as well as analog, AES, MADI, HD-SDI and—now— AES67. It’s equivalent to the blue M&M that everyone is talking about, because it’s new. But if you step back, it’s just another way to get in and out of our system.”
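
To make “just another way to get in and out” concrete, here is a rough sketch of what an AES67 stream looks like on the wire, assuming the standard's baseline parameters (48 kHz sampling, 24-bit L24 payload, 1 ms packet time); actual deployments vary:

    # Back-of-the-envelope anatomy of one stereo AES67 stream
    sample_rate = 48000        # Hz, AES67 baseline
    bit_depth = 24             # L24 payload format
    channels = 2
    packet_time_s = 0.001      # AES67 default packet time

    samples_per_packet = int(sample_rate * packet_time_s)            # 48
    payload_bytes = samples_per_packet * channels * bit_depth // 8   # 288
    header_bytes = 12 + 8 + 20 + 14       # RTP + UDP + IPv4 + Ethernet

    wire_bps = (payload_bytes + header_bytes) * 8 / packet_time_s
    print(f"{1 / packet_time_s:.0f} packets/s, {payload_bytes} B payload each")
    print(f"~{wire_bps / 1e6:.2f} Mb/s on the wire")                 # ~2.74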


WheatNet-IP, as a layer 2 protocol, takes full advantage of the economy of scale and the resources of the massive IT industry, which means that cost and reliability need not be a barrier to adoption. “Our audio will travel through an off-the-shelf switch that you can buy at Staples,” Owens stressed.

Plus, he noted, IT equipment manufacturers have long since developed mechanisms ensuring mission-critical reliability. “Redundancy is built into your network topology in the way that you connect your switches and the network protocols that you implement. So we’re not only taking advantage of the cost effectiveness of standards-based switches, but we’re taking advantage of the IT capability that’s been built up over the years to handle large networks.”

Further, while older audio technologies required special optical network cards to access fiber when distances exceeded the capabilities of Ethernet, now, “Even a smaller Cisco switch has SFPs [small form-factor pluggable] for fiber transceivers. Laying down a switch on each end with fiber between them is child’s play,” said Owens.

 

CAPACITY ADVANTAGE

AoIP has a significant advantage in capacity over another audio transport standard, AES10 or MADI, which has enjoyed something of a resurgence since its original publication in 1991. “We did a test recently to see how many audio channels we could cram down a Gigabit pipe using our system,” Owens reported. “The number turned out to be 428. You’re moving from 64 channels of MADI to over 400 channels using Gigabit IP audio. That’s quite an increase in possible functionality.”
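
A rough sanity check on that figure (a sketch only; the exact count depends on word size, packet time and header framing, so treat the numbers as ballpark):

    # How many mono streams fit in a Gigabit pipe, roughly
    link_bps = 1e9
    sample_rate = 48000
    word_bits = 32            # assume 24-bit audio carried in 32-bit words
    packet_time_s = 0.001

    payload_bytes = sample_rate * packet_time_s * word_bits / 8   # 192
    header_bytes = 12 + 8 + 20 + 14     # RTP + UDP + IPv4 + Ethernet
    per_channel_bps = (payload_bytes + header_bytes) * 8 / packet_time_s

    print(int(link_bps / per_channel_bps), "mono channels before margins")
    # ~508 -- the same neighborhood as the 428 measured above, once you
    # allow for switch overhead and practical safety margin.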

WheatNet-IP I/O devices are known as BLADEs, continued Owens, and enable various combinations of the supported connections to be introduced to the network. But Wheatstone’s network offers much more than just audio transport.

“In addition to getting I/O capability out of a BLADE you get other functions, like background mixing, audio processing, silence sensing and signal processing, and the ability to do automated switching based on the status of a certain cross-point or based on silence detection,” he elaborated. “It gives you a Swiss Army knife of tools that you can use. You not only get a distributed audio network where you can deploy these BLADEs wherever audio is needed, but you get a whole toolkit of functions that are very handy in the audio environment.”

That list of functions is very appealing to engineers, whether in television or radio, according to Owens. “We probably have 50 TV stations that are running BLADEs,” he said. “The reason is that they like that list, and the ability to plop a BLADE down in their newsroom and be able to send audio out to it and get audio from it.

“There are two major TV groups in the country – which between them probably own more stations than anyone else – that have standardized on IP and our BLADE system. They’ve done it for the reasons that I mentioned: they like that distributed control and audio gathering in a system where you can also put a stack of BLADEs in your tech center and have the equivalent of a large mainframe router.”


THE ‘NETWORK EFFECT’

Perhaps the most important benefit of AES67 will be the “network effect,” otherwise known as Metcalfe’s law. The term, initially applied to the telecommunications industry, recognizes that the value of a network is greater than the sum of its parts once it reaches a critical mass of users.

“There will be peripherals that could add functionality to our system,” Owens acknowledged. “For that reason we feel good about it and that’s why we incorporated it into our latest BLADE release, version 3, which we introduced at the end of last year.”

Yet, as AES67 currently stands, those third-party peripherals can be integrated only so far, since the standard lacks a control protocol, such as that incorporated into WheatNet-IP. “[AES67] is not going to come into its own until the control part gets added,” said Owens. To that end, an AES working group, X-210, was formed in late 2012 to develop a standardized protocol based on Open Control Architecture, the open standard communications protocol.

Ultimately, AES67 offers an insurance policy, said Owens, but it’s no substitute for a comprehensive system such as WheatNet-IP. “People will feel better about implementing a system that they know conforms to standards,” he said. “But it will never be the be-all and end-all of audio connections for the main audio gear, because that’s what we’ve all worked so hard on when we created our individual systems. Wheatstone has a top to bottom integrated system that includes consoles, routers, and peripherals that interoperate seamlessly over IP, but we can also envision a day when we will use a truly boundaryless network of things. AES67 is a move in that direction.”

Reproduced with permission from TV Technology

Loudness Control: 3 Things to Watch

Here are three critical things to watch on the mixing desk and what you need to know about them for effective loudness control.

1. VU indicator. The VU meter (now in digital bar graph form) has been around for 80 years for a reason. It’s predictable, with predictable integration and release times, so you can predictably read volume units. Remember, it is an averaging meter, and the actual peaks are far higher than indicated. For this reason you can expect to need about 20dB of audio headroom above 0dBVU to encompass them.

2. Peak level indicator, to read the transient peaks of the signal. This indicator tells you if peak levels are in danger of overloading the dynamic headroom of the console. The clipping point is usually at 0dBFS. “Clipping equals distortion, so don’t go there unless you absolutely have to. Stay within a reasonable gain structure that is not going to cause distortion,” says Steve Dove, Wheatstone Minister of Algorithms. Peak signal levels usually run at or above -20dBFS, with transient peaks kicking up to about -6dBFS occasionally.


3. Loudness indicator for compliance with the ITU BS.1770-3 and similar television loudness standards. This indicator came about initially in response to the need to assess and regulate the loudness of adverts compared to regular programming. The Loudness Unit Full Scale (LUFS) or Loudness K-weighted Full Scale (LKFS) measurement shows the averaged loudness level of audio over time, usually much longer than that of a VU meter. “Ideally, you should measure at a very long integration time (30 seconds), because that would be most accurate. But if you need to know fairly quickly if you’re going to be over the top or too far under, then you might want to go for a shorter integration time of, say, 3 or 10 seconds,” says Dove. The average loudness target level is -24 LKFS or -23 LUFS, depending on your location. By the way, you can’t miss this on a Wheatstone audio console – the LKFS/LUFS numbers are two inches high on the display screen. One LU (loudness unit) is equivalent to 1dB, so there's a direct correlation between how far the meter says you're over/under and how far you move a fader to compensate.
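
Pulling the three readings together, here is a simplified sketch of the math behind each meter. Note the loudness figure here is a plain mean-square average; a true LKFS/LUFS measurement per ITU BS.1770 also applies K-weighting and gating, which this toy version omits:

    # Simplified VU, peak, and loudness readings from one buffer of samples
    import numpy as np

    def db(x):
        """Linear amplitude to decibels (guarded against zero)."""
        return 20 * np.log10(max(float(x), 1e-12))

    def meter_readings(samples, rate=48000):
        peak = db(np.max(np.abs(samples)))                 # 2. peak, dBFS
        window = samples[-int(0.3 * rate):]                # ~300 ms VU window
        vu = db(np.sqrt(np.mean(window ** 2)))             # 1. averaged level
        loudness = db(np.sqrt(np.mean(samples ** 2)))      # 3. long average
        return peak, vu, loudness                          # (no K-weighting)

    # A 1 kHz tone at -20 dBFS peak: the peak meter reads -20, the averaging
    # meters read about -23, illustrating why true peaks sit well above what
    # an averaging meter shows.
    t = np.arange(48000 * 10) / 48000
    tone = 0.1 * np.sin(2 * np.pi * 1000 * t)
    print(meter_readings(tone))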

The Scoop on Codecs for IP Audio

Using the Internet for audio distribution makes sense, but the problem is a little like the holiday rush at the Post Office.

There are simply too many packets of data for the pipeline.
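
A rough sketch of the arithmetic behind that bottleneck, assuming typical rates (the coded rate is a generic example, not any specific product's):

    # Uncompressed stereo PCM vs. a typical coded stream
    pcm_bps = 48000 * 24 * 2        # 48 kHz, 24-bit, stereo: ~2.3 Mb/s
    codec_bps = 128e3               # a common streaming/contribution rate

    print(f"Uncompressed: {pcm_bps / 1e6:.2f} Mb/s")
    print(f"Coded:        {codec_bps / 1e3:.0f} kb/s "
          f"(~{pcm_bps / codec_bps:.0f}x smaller)")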

