Calrec Craft Profile: Paul Special
Paul Special is a freelance broadcast music mixer who is regularly found mixing bands for ABC’s Good Morning America at Times Square Studios in New York. He also mixes music for other ABC shows, including The View, as well as shows for other networks, such as 106 & Park for BET, You Oughta Know for VH1, and Front and Center for PBS.
Although he mostly mixes music-based shows, he also does production mixing, such as the live 100th episode of truTV’s Impractical Jokers. Other high-profile productions include ABC’s election coverage and Dick Clark’s New Year’s Rockin’ Eve, as well as other special report shows.
As a freelance engineer Paul has garnered many Gold and Platinum sales awards and GRAMMY® nominations, as well as a Billboard Chart Award for a record that was number one for almost a full year – the Pokémon soundtrack! As part of the Good Morning America team, Paul and the show have been nominated for and won multiple Emmy® Awards.
Photo used with kind permission of David Jensen of Studio Consultants Inc.
You mix high profile events like the New Year’s Eve show and regularly mix for popular news/entertainment shows like Good Morning America. What career path did you take to end up where you are today?
I started at Record Plant Studios here in New York City, where our main focus was rock and pop records. I was involved with recording the likes of Aerosmith, Paul Simon, Joan Jett and the Blackhearts, and The Smithereens, to name a few. My journey into TV and live production came about because Record Plant had two remote trucks, and I started doing a lot of work on those trucks for live records and specials like Live Aid, Farm Aid, Woodstock ’94 and ’99, and the Rock and Roll Hall of Fame opening concert. I learned a lot, not only about live recording, but also about TV production, while working on those trucks.
Over time I began to work on fewer studio records and more live concerts and live TV shows where music was an integral part of the show. Shows like American Idol and The X Factor were gaining immense popularity. I worked on a number of similar shows, most notably Nashville Star on NBC and USA. The show had a great run of seven years, and I mixed all the guest bands every week.
I got involved with Good Morning America around 2000. At that time they did not have a mix room for the live bands, so they would hire remote trucks to do the bands. I worked on the trucks for many of those performances. When they decided to install a live mix room (Audio B), they asked me to get involved, as I was familiar with the workflow for music performances on a live-to-air news show.
Currently we average about three music performances per week. The acts can vary wildly from day to day. One day we may have on a rock band like U2, the next day will be the cast of a Broadway musical like Jersey Boys, and then the next day will be a pop act like Taylor Swift. It really keeps me on my toes bouncing from genre to genre!
Once I started working at GMA, I ran into Robert Agnello, who is Director of Technical Services at Times Square Studios. Robert and I have known each other for over 30 years - our paths are constantly intertwining. We worked together at Record Plant Studios, then a few years after that we were partners in a recording studio complex called This Way Productions in New York’s SoHo district, and when I came to work for ABC, he was already working there!
I’m still freelance, and that leaves me open to pursue other stuff as well. I still do a lot of concert DVDs: I recorded the last live CD and concert DVD for Motörhead, and I also recorded and mixed the live broadcast of LCD Soundsystem’s last show from Madison Square Garden, as well as the DVD and CD of Jeff Beck’s Rock ‘N Roll Party Honouring Les Paul, which was seen on PBS. The Jeff Beck album was nominated for a GRAMMY in the “Best Rock Album” category. The Foo Fighters won that year, but I’m okay losing to them! It was an honour to be nominated.
I still occasionally do studio records, mostly for up-and-coming artists like Tom Nieman, The Walking Tree and Robert Hill, to name a few. But somehow I’ve always ended up coming back to the live element in some sort of video/TV aspect.
In that time, what are the biggest shifts that have changed the way you work, and affected the programming output?
The biggest shift I’ve seen is the move to digital. I used to have an analogue API in my room, a wonderful-sounding desk but not bulletproof or flexible enough for the ins and outs of daily broadcast. As with all analogue boards, you had to stay on top of the switches, knobs, buttons and components. Even though I loved the sound of the desk, it was not a good platform for the rigours of everyday broadcasting. Maintenance took more and more time and would sometimes get in the way of our already tight schedule.
That was when Robert (Agnello) said they were putting a dedicated broadcast desk in Audio B and they were leaning towards the Calrec Artemis. Truth be told, I wanted a Studer or a Lawo because I’d used them on the trucks that I work on. After seeing some Artemis demos my feelings were mixed. Although I was warming to it, I thought, “Holy cow, this thing can do everything - there’s nothing it can’t do!”
At the same time I thought, “Holy cow, this thing can do everything - it is a bit overwhelming!”
I realised I needed to listen to it as well as push the buttons and understand the theory of operation. Coming from a recording background, sound is very important to me, so I did some sonic evaluations of the EQs, compressors and the sound of the desk itself. I realised this was a board I could be very happy with.
I was still a little overwhelmed with the breadth of customisability and the vastness of the IO; it’s a lot of desk. Now that I’ve had it for a couple of years I can’t imagine how I would work otherwise.
There is a lot of buzz around various networking protocols at the moment including the proprietary Hydra2 solution Calrec offers. How important is audio networking? What networking elements are incorporated into your workflow?
It’s important, as it’s definitely where we’re going. The more on board you can get with that technology, the bigger the shows you can have, with more input and output capabilities. The biggest problem is that there are many different protocols, and it can sometimes be a daunting task to choose one protocol and commit. There are a lot of players in the field, such as RockNet and Dante, and those platforms have real footholds.
Companies like Calrec are stepping up their game and realising it’s not just an audio desk, it’s a router. That’s very much in line with what you’ll find in any broadcast facility: the idea of being able to route signals from any point to any other point easily and efficiently is key. So far I’ve been really happy with the routability of my desk, and I’m really looking forward to when our A room gets the Apollo and we’re on one big Hydra2 network with full network control and more capabilities available to us.
The partnership between Calrec and DiGiCo is really exciting because over half of the music acts coming in will specify a DiGiCo desk for monitors. Other manufacturers’ desks are tied to their input/output structure (48 x 24 or 96 x 24), whereas the DiGiCo is much more flexible with the amount of IO you can have on the system: you could have a desk that is 16 x 48, 96 x 32 or 48 x 24. Many engineers prefer that flexibility, as well as the sound of the DiGiCos.
Tell us a bit about your audio operation. What challenges do you face when mixing a news/entertainment show?
The biggest challenge is the schedule - the 1am start is rough!
We deal with a lot of inputs. A typical act would be anywhere between 32-48 inputs; the super-groups may have 70-80. We do a one-day wonder: we walk in at 1am, set everything up, sound check it, do the show, tear everything out, and we’re done by 11am. When you’re dealing with an act with 80 inputs, that’s a lot of work to get done in a short amount of time.
Usually we don’t have the studio available to us the whole time; we have a specific window of opportunity to do our sound check. Generally speaking, I’m constantly against the wall with what time I get in, what time the band’s gear shows up, what time their crew shows up, getting everything wired up, faxed out and checked before getting the band on stage, working out their monitors and getting them to actually play.
My workflow with the Artemis enables me to set up a few basic starting shows that cover most of my IO needs and processing for about 90% of what I’m doing. I can recall that show, patch everything in and get working right away. One of the things that I’m very excited about is getting the preset libraries in the next software upgrade. That would really step it up.
With analogue desks, every knob and button is reset by hand. Digital desks have instant recall, so we can blow through a line check really fast and be ahead of the game that way. I’ve seen our operation get much more efficient with that functionality. Also, hum and buzz issues have pretty much gone away, since the Hydra mic preamps live near the stage and get to the desk via fibre.
In terms of console technology, what can you do with today’s consoles that would have been impossible to do when you first started?
Today we can handle a huge amount of IO. Right now my rig has 96 mic pres, 64 channels of analogue IO, 96 channels of mic splitters, six MADI boxes, each with 128 ins/outs, and 96 channels of AES IO. That’s hundreds of IO streams. I have a 128-track Pro Tools rig as well. My previous analogue setup had 48 mic pres with no effects returns, and that would have been the amount of tracks I could record in Pro Tools. The biggest thing is that the limitations on IO have virtually gone away. The kind of IO we have hooked up to the Artemis is staggering.
I like the replay function on the Artemis – it’s a really easy way for me to do a sound check and then work on my mix; I typically only get one or two passes of the song with the bands at soundcheck. So, many times I need to work on my mix a bit between the time soundcheck finishes and when we go live to air. I typically monitor the mic pres on input 1 while recording the mic pre output directly into Pro Tools using the routing available on the Hydra2 network. Then I can use the replay function to switch to input 2, which will be the output of Pro Tools, again using the routing available on the Hydra2 network. I can then continue to manipulate my EQs, compressors and effects sends, and then easily go back to monitoring my mic pres when I’m live on air. The replay function allows me to switch quickly and seamlessly between my mic pres and playback and really dial my mix in so that I am happy and the bands are happy.
Has the way you mix changed to cater for an audience that now watches on mobile and handheld (tablet) devices, often while on the move? How has it changed?
As a music mixer I usually send my mix to a production mixer who’ll then feather it into their mix. I still listen to what comes through the TV or device to hear how my mix translates to it.
There are some changes I wouldn’t normally make in a studio record mix. I’ll tend to push the low frequencies more for a broadcast mix to get a nice sense of air moving, as people tend to be listening on smaller speakers, and this gives a little more edge to the punch factor. Also, I’ll minimise my stereo spread so that if someone is only hearing one channel it still makes musical sense.
Another thing is dealing with my surround situation and what I’m putting in the different speakers. In a production mix you have all your dialogue elements up the centre, music and FX to the left and right, and maybe the audience reaction in the surrounds. In a music mix it’s difficult to put some things in the centre channel: it sounds great in surround, but when that folds down to stereo or mono it can change the balance. What may work for a dialogue element might not for a snare drum!
One thing you never hear coming out of the video control room from your director is, “Oh my god, the host’s mic is so loud, turn that down!” If in the process of the downmix the host’s mic gets a little louder, nobody is going to complain, but if the snare drum becomes four times louder in the stereo music mix, you’ll hear about that!
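The fold-down effect Paul describes can be sketched numerically. The coefficients below follow the common ITU-R BS.775 convention of folding the centre and surround channels into stereo at -3 dB; these are illustrative assumptions, not the settings of any particular broadcast plant.

```python
import math

# Assumed ITU-R BS.775-style downmix coefficients (a common default;
# real facilities may use different values).
C_GAIN = 10 ** (-3 / 20)   # centre folded in at -3 dB (~0.7071)
S_GAIN = 10 ** (-3 / 20)   # surrounds folded in at -3 dB

def fold_down(L, R, C, Ls, Rs):
    """Fold a 5-channel mix (LFE ignored) down to stereo."""
    Lo = L + C_GAIN * C + S_GAIN * Ls
    Ro = R + C_GAIN * C + S_GAIN * Rs
    return Lo, Ro

# A snare placed only in the centre channel at unity gain...
Lo, Ro = fold_down(L=0.0, R=0.0, C=1.0, Ls=0.0, Rs=0.0)

# ...sits about 3 dB down in each stereo channel:
print(round(20 * math.log10(Lo), 1))        # -3.0

# ...but the two channels sum coherently in mono, so it comes back
# about 3 dB *hotter* than it was in the surround mix:
print(round(20 * math.log10(Lo + Ro), 1))   # 3.0
```

A hard-panned element (say L=1, everything else 0) passes through at unity, so the centre-panned snare moves relative to it at every fold-down stage - exactly the balance shift a dialogue element tolerates but a snare drum does not.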
What is the most satisfying thing about using an Artemis to take on these challenging jobs? Are there any features you find especially useful? How does the Artemis differ from previous consoles you have used?
The whole aspect of Hydra2 being able to port from one interface to another without going through the board is great. A big part of my workflow involves the music inputs and performance microphones coming through my mic pres, but I also send an ISO to the production mixers, as they need that performance microphone for an interview, and it will need to be gained and EQ’d differently. In the past I would have to take a port or direct out of a channel, go to a DA and trim the line up or down depending on what they needed. Now I just take the mic pre and send it to my desk, and take the same mic pre and send it to their desk, and we’re both happy. He can gain it how he wants with a trim, I’m still dealing with the mic pre, and it works out great.
Another fabulous workflow for us is that we have 96 of the Hydra2 mic pres in stage boxes, connected by fibre, and we also have the mic-split option. This works out really well because not only do we have all the sources coming in for the music mix, but they can also be split out to a monitor desk so the act can hear their performance. One thing that is exciting is the partnership between DiGiCo and Calrec, and the idea that both platforms will be able to sit on the Hydra2 network. That will save a lot of the analogue copper infrastructure that we have right now.
In my opinion the Hydra2 mic pres are as good as any mic pre I’ve heard. They are super clear, very transparent, and the transient response is fantastic - I really like it! Most people who come into the room say, “Oh, but isn’t that a broadcast desk?” I tell them, “Trust me, it’s going to rock; it’s going to sound the way you think it’s supposed to sound!” At the end of the day, they all say, “It sounded great, it rocked, just like you said it would!” I have to say that Calrec have really outdone themselves with this design, so whatever it is that they did on the Artemis, keep doing it!
I think that when a company focuses mostly on broadcast, functionality sometimes ranks higher than sonic quality, and the reverse can be true for a company focused on the music environment, where functionality goes out the window. With the Artemis I am able to realise both of those worlds: great sound and great functionality.
What do you think the future holds for broadcast technology? Will we see a noticeable shift to centralised working via remote production?
It will happen. We will be able to do this from home - maybe not in our lifetime, but the throughput of networks will mean you won’t physically have to be in the space to do that production. The director will be in one city and the audio mixer in another.
Good Morning America is a good example of that. On any given day we’ll have anywhere from 10-12 different remotes from all over the world. While there’s still some delay, it will only get faster and faster. Think about where we were in 1975 compared to now - flash forward another 40 years and I don’t see that as being an outlandish vision at all.
AoIP, and audio networks built around routing capability rather than mixer-surface capability, are definitely coming in the next 3-5 years, especially at the major networks. The idea of a centralised hub into which all the IO in the building comes, with all those resources shared by the different control rooms in the building - or even the wider network outside of the building - is already happening.
For Good Morning America, ABC’s master control is on 66th Street and the Good Morning America studio is on 42nd Street. The amount of resources shared between them is huge: half of the tape machines, playback devices and graphics are on 66th but are being streamed and put into the live-to-air show in Times Square. The whole show then goes back up to 66th Street master control for the network feeds for satellite/cable distribution.
It’s not a stretch to say that in the next 3-5 years the audio IO will be spread out - not just processing a signal locally and shipping it off, but having the central core in one location and everyone pulling from it. A lot of investment is happening now to make that happen at ABC and other places.

31/08/16