Mike Abbott, an established A1 mixer and owner of All Ears Inc., has had an extensive career mixing audio for some of TV's most popular shows and highest-profile awards broadcasts. For more than 25 years, Abbott has worked with various Calrec consoles, but his current console of choice is the Apollo, which he's using on the current season of The Voice.

Can you provide some background on any notable recent projects/shows you have worked on?

As I started listing my audio mixing and engineering projects for 2018, I realized that it has been a very busy year so far, and I am thankful for the clients who have requested my services. 2017 ended and 2018 began with the Fox Times Square New Year's Eve 2017 telecast, followed in the first quarter by the SAG (Screen Actors Guild) Awards, DirecTV Super Saturday Night Live with Jennifer Lopez in Minneapolis during the Super Bowl, the 60th GRAMMY Awards at Madison Square Garden and the Film Independent Spirit Awards.

From April to July, I mixed audio for the streaming media coverage of the "March for Our Lives" rally in Washington, D.C., with 150,000 people in attendance and millions more watching online; an NBC musical reality pilot, Songland, whose premise is like The Voice meets Shark Tank; The Voice live shows and The Voice Season 14; the 10th season of Shark Tank; the ESPY Awards; sound design for the MGM/CBS project TKO; a Shawn Mendes concert streamed on Apple Music; and consulting for DirecTV on several 4K TV concerts.

From August through December, I worked on Stand Up 2 Cancer at Barker Hangar, which aired live on 70+ networks and streaming outlets, and started tapings of the blind auditions for the 15th season of The Voice. In October I began work on a new series, Pod Save America, which broadcasts live on HBO from four cities over four weeks; in November and December I will follow The Voice live season for 15 broadcasts, and then it's on to 2019, which is shaping up to be another busy year!

How did you get into the industry?

I started straight out of high school in the '70s, building speaker cabinets for a sound company that provided sound reinforcement for touring rock bands. There I was taught how to build transformer-isolated mic splitter systems, assembled hand-wound inductor coils for passive two-way crossovers, learned to identify frequencies using third-octave graphic equalizers, and worked as the third man on the sound crew, tasked with stacking speakers, AC power distribution and loading the truck quickly and efficiently. That hands-on apprenticeship led to offers of FOH and foldback mix positions for rock, pop, jazz, Latin and classical acts, which gave me the opportunity to travel and work around the world for more than 15 years.

In 1982 I started mixing FOH and stage foldback for various TV projects, such as the 1984 Olympics' opening and closing ceremonies, the Academy Awards and the GRAMMY Awards. In 1986, the day after the premiere of The Late Show with Joan Rivers, I was hired as a staff mix engineer at the then-fledgling Fox Network. I later moved over to CBS, working as a staff audio mixer at their Television City facility for six years. Starting in 1994, I spent six years at Paramount Studios on the syndicated entertainment news show Entertainment Tonight and the Leeza talk show for NBC.

Working at the TV networks and production facilities gave me real-world, hands-on training and a broad understanding of broadcast audio workflows. During my tenure at CBS, I was assigned to mix talk shows, game shows, sitcoms, soap operas, variety specials, post production, promo production, sporting events, network news and ENG remotes. Working across these disciplines provided me with a diverse range of skills.

What is the audio set-up for the type of TV shows you do?

Talk show and game show projects are usually staffed with a production A-1, two to three floor A-2s, a PA mixer and, if needed, a foldback stage mixer. These shows vary in their production schedules: a typical four-day production provides for an ESU (Equipment Set Up) of five hours, and we are on camera and rehearsing with musical talent or stand-ins after the meal on the first day. There are then eight to ten hours of rehearsals on each of the next two days, with a VTR or live show on the fourth day.

For The Voice live shows, we have a production A-1, a production audio track playback/recordist, a broadcast music mixer, a music mix recordist, four to six floor A-2s, a FOH and foldback mixer, two foldback assists, a sound system tech and an RF coordinator monitoring the operation of 40+ RF devices: 17 audio mixers and techs in all. During The Voice live shows, which run six to seven weeks each spring and fall, our production schedule provides three 10-12 hour days of rehearsal for the 15+ musical performances on our Monday-Tuesday broadcasts.

On show days, we have a technical cue-to-cue in the morning and a dress rehearsal in the afternoon. The amount of rehearsal time the production company builds into the schedule is an extra production value, which in turn produces the "Big Shiny-Floor Broadcast" The Voice is known for.

Tent-pole event productions such as the Super Bowl, the Academy Awards and the GRAMMY Awards can be staffed with 40+ audio mixers and techs. At the GRAMMY Awards, there can be 60+, depending on how many artists bring in their own touring stage foldback systems. These techs can be deployed to 7-13 mix stations, three performance stages inside the venue and three broadcast mix platforms located outside the venue.

The GRAMMY production schedule provides a pre-cable install day, on which four A-2s run the fiber and analog mults, and one day of ESU that can include, if time allows, a tech cue-to-cue in which all the scenic elements are pre-set and spike marks are put on the stage. This gives the production an idea of where potential staging issues may arise, while audio is onstage coordinating with the stage managers on how best to deploy personnel and hardware. Over the next three days there are 10-12 hour days of rehearsals, with 7-10 artists rehearsing each day. On the day of the broadcast, we may start with a single artist's rehearsal followed by a 3-4 hour dress rehearsal.

As the three days of rehearsals are done out of sequence, the dress rehearsal is the first time we see how the set changes will work. Inevitably, logistical issues develop during the dress rehearsal; we have a saying: bad dress, great show. After the dress rehearsal, inter-department post-mortems are followed by a quick reset for the top of show, and the 3+ hour broadcast starts shortly thereafter.

What is the workflow like?

The Voice has developed a very efficient workflow for audio acquisition and for coordinating with the post production department to provide audio resources for the video edit and audio sweetening of the show. During the blinds and battles portions of the season, the shows are taped and edited. The layout of the audio record tracks on the 14 XD record decks is standardized from season to season, allowing the editors' assistants to easily locate audio tracks in the edit bay servers from tapings and broadcasts across the past 15 seasons. We also provide a 64-channel DAW recording as a safety audio record.

The workflow of audio deliverables for the live shows runs on an extremely tight schedule: audio media must be turned around for foreign distribution two hours after the broadcast. These elements are distributed around the world to six foreign outlets for playout the next day with local commercial and dialog integration. During the live part of the season, there are several musical segments that we pre-record because of artist availability; these pre-tapes are sometimes done weeks in advance.

When complex scenic elements and staging logistics for a performance cannot be set within a segment, we will pre-tape the performance 35-40 minutes before the live broadcast. Afterwards, the artist will sometimes request an audio remix of the performance in The Voice music mix booth. The remixed music audio file is delivered to the production audio booth for integration back into the production 5.1 pre-record.

This element is then delivered to our video editor for layback into the performance video. The editor replaces the original audio file with the remixed soundtrack, spotting the waveform of the remixed audio to the waveform of the original audio, and then renders that file into the video master in the edit bay to complete the audio edit. This integrated DNX file is pushed back to the production truck server for QC and played out from the EVS as a discrete 5.1 audio mix during the live broadcast.
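
Spotting one waveform to another, as the editor does here, amounts to finding the time offset at which the two signals best line up. A minimal sketch of that idea, using NumPy cross-correlation on a toy noise signal (the function name and signal are illustrative, not the production tooling):

```python
import numpy as np

def find_offset_samples(original: np.ndarray, remix: np.ndarray) -> int:
    """Return how many samples `remix` lags behind `original`,
    found by locating the peak of their cross-correlation."""
    corr = np.correlate(remix, original, mode="full")
    # Shift the peak index so that 0 means the clips are already aligned.
    return int(np.argmax(corr)) - (len(original) - 1)

# Toy check: delay a noise "track" by 100 samples and recover the offset.
rng = np.random.default_rng(0)
original = rng.standard_normal(4800)          # 100 ms at 48 kHz
remix = np.concatenate([np.zeros(100), original])[:4800]
print(find_offset_samples(original, remix))   # → 100
```

In practice an editor does this by eye against the displayed waveforms; automated tools use the same correlation principle to conform a replacement mix to the original frame-accurately.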

A sweetener in the production audio booth bridges the in/out of the playout against the live in-studio audience during the broadcast. All of the above remix tasks are completed in the half hour before the live broadcast.

Where do you see the biggest changes coming to TV in the next ten years (both live and pre-recorded)?

Entertainment trucks use MADI for the majority of audio transport. The NEP Denali fleet has started using IP-based protocols for mapping audio via the routers and record systems, while Dante audio transport has so far seen only minimal deployment in the entertainment sector.

For a large I/O count to be properly deployed over a Dante network, a dedicated IT tech is required to manage the multiple switches and audio I/O on projects such as a golf tournament or a multi-studio production facility. In the entertainment market, getting the production company to understand the need for an IT manager is challenging due to cost restrictions.

Adoption and implementation of the AES67 standard will provide audio-over-IP interoperability. This will be the biggest change to our workflow as we continue the transition from an SDI-based infrastructure to an IP-based infrastructure and workflow.
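
At the packet level, AES67 streams are uncompressed PCM carried over standard RTP. A minimal sketch of the kind of packet an AES67 sender puts on the wire, assuming L16 (16-bit) audio for simplicity; real deployments typically use L24 payloads, PTP-derived timestamps and SDP-negotiated payload types:

```python
import struct

def rtp_l16_packet(seq: int, timestamp: int, ssrc: int, samples: list[int]) -> bytes:
    """Build one RTP packet carrying L16 (16-bit big-endian PCM) audio."""
    version, payload_type = 2, 96          # dynamic payload type, agreed via SDP
    header = struct.pack("!BBHII",
                         version << 6,      # V=2, no padding/extension/CSRC
                         payload_type,      # marker bit clear
                         seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF,
                         ssrc)
    payload = struct.pack(f"!{len(samples)}h", *samples)
    return header + payload

# One 1 ms mono packet at 48 kHz = 48 samples: 12-byte header + 96-byte payload.
pkt = rtp_l16_packet(seq=1, timestamp=0, ssrc=0x1234, samples=[0] * 48)
print(len(pkt))  # → 108
```

The point of the standard is exactly that interoperability: any AES67-capable receiver can unpack a stream like this regardless of which vendor's console or stagebox produced it.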

How long have you been using Calrec?

I started using the Q Series analog console in the '90s, moved on to the Alpha and am currently on an Apollo. Calrec platforms provide me with the hardware resources needed for the complex audio projects I work on.

What Calrec consoles do you primarily use now and on what projects?

Eighty percent of my projects are done on Apollo platforms; the Apollo has been used on the majority of the entertainment projects I have been involved with over the past 10+ years.

As a sound mixer, what do Calrec’s consoles offer you that makes doing your job easier, helped your broadcast workflow or increased productivity?

The replay function on the Apollo console allows me to go from a production mix status to a remix of a post-show fix with a single push of a button. This is especially important when faced with a West Coast re-feed or a quick music remix a half hour before a live broadcast.

Fader swap also plays a big part in laying out input sources on the console, especially when starting a new show. It allows me to re-map the fader assigns on the surface; as the show develops, I move and consolidate the top layer down to only the essential faders I need. I also assign GPIs to faders for AutoFader functions; the control parameters available in this function allow me to program smooth transitions when EFX mics are triggered by the video switcher.

The audio I/O resources on the NEP/Denali consoles are substantial (Denali Silver has an 8192 x 8192 I/O configuration), and with this capability I am able to create workflows that enhance the audio deliverables I can provide. In addition to mapping primary mic sources to the audio records, the available I/O lets me feed secondary mic sources, with attenuated head amps, to the audio deliverables, giving the audio post mixers dual sets of mic sources in the event the primary mic is over-modulated.

The multi-channel embedding provided by the console resource pool allows up to 16 audio channels to be populated on video record devices, which in turn gives the post production editor a wider set of audio resources to choose from.

You have been a part of the GRAMMY Awards for many years – how has it changed over the last few decades?

Specifically, how has the equipment you use enabled you to push the boundaries of broadcast audio? Where do you see this show going in the future?

I started out mixing the GRAMMYs' artist stage audio on a single analogue console, using grease pens to mark the aux and fader EQ settings for 18-22 performances. Fast forward to today: there are eight engineers and mixers for the stage audio alone.

The use of digital consoles has allowed for higher channel counts per performance. Six or seven artists bring their own stage foldback consoles, for which we have to provide mic signal distribution, and that distribution is still in an analogue format. With 8+ mix consoles in play at any given time, this factor alone creates a complex connectivity matrix. We are constantly looking for audio signal distribution solutions that are cost-effective compared with the current analogue platform.

As far as awards shows in the broadcast market, I think a paradigm shift among viewers is already taking place because of the alternative programming offered by streaming outlets, which in turn is diminishing the audience for these broadcasts. Streaming media is going to force producers to think outside the box in the next few years, and to reset how these productions are made in order to skew better toward streaming media and the coveted 18-49 age demographic that producers and advertisers are looking for.

Do you think that TV viewers’ audio expectations have increased over the last few years? How have you kept up with these expectations and changes?

My goal is to provide an audio soundtrack of the highest quality and resolution for both live and posted projects. Using a 5.1 broadcast to recreate the soundfield of the space the show originates from is one of the aspects that interests me most.

When I set up a mix, I try to provide the sonic perspective of "a seat in the fourth row from the front of the stage"; that is what an audio soundtrack for a live TV broadcast should provide. Understanding how the audio is distributed by the networks and streaming entities we provide soundtracks for is part of an ongoing effort with our network broadcast partners to deliver the quality and resolution end users expect.
