Our latest Craft Interview is with AMP VISUAL TV’s Xavier Fontaine, who talks us through the challenges of broadcasting one of the world’s most gruelling motorsport events – the annual Le Mans 24 Hours race – from the world’s biggest outside broadcast truck.
Even as a child, Xavier was passionate about audio and hi-fi, leading him to study electronics and audio at university. He started his professional career as a Sound Engineer at France’s sports network L’Équipe TV before making the shift into mobile video with AMP, which became AMP VISUAL TV in 2002. AMP VISUAL TV maintains one of Europe’s largest fleets of OB vans and offers complete, end-to-end services for live and on-location television productions. Their latest unit, MS12, boasts the world’s largest surface area – 76 square metres – and houses both a Calrec Apollo and an Artemis.
Although he specialises in theatre recording and the performing arts in general, Xavier works across all genres of live programming, from sports to live entertainment, and has taught microphone technology and digital mixing consoles at the University of Valenciennes since 2000. A very busy and knowledgeable man, and ideally suited to heading up a high-speed, 24-hour, super-car extravaganza!
Ready? Start your engines…
Can you give us an overview of the audio topology at this year’s Le Mans?
There are 70 mics covering the race, most of them placed directly on cameras. There are 27 around the track, 19 on the pitlane (on cameras or hidden in front of garages), and 14 “on board” in cars. The remaining mics are ambient mics, used to capture crowd noise and atmosphere to fill the gaps in the general mix.
Depending on the focal length of each camera we use short or long shotgun microphones. The handheld cameras are equipped with M-S stereo mics and the garages have hidden lavalier mics which are heavily protected against rain and moisture.
We use 130 channels in total; some mics are patched to several different channels and form part of different mixes. The pitlane is an independent signal with its own mix, which is integrated into the main signal when needed.
Can you describe the signal path from trackside to broadcast?
A monomode optical fibre mesh is shared by audio and video across the site. Most microphones are plugged into cameras, so their signals are embedded in the video at the camera. As the OB van’s video router is fully processed, there is no need for external de-embedders.
Cameras can be either wired or wireless. In the first case, they are plugged into monomode fibre links; in the second, their signals are received by our RF MCR via remote antennas distributed around the track, which also use fibre links.
The garage mics are connected to local preamps and embedders in each covered garage. The preamps and embedders are located in an underground gallery which runs under the garages, allowing technicians access in case of problems.
Non-embedded audio and intercoms are routed through a Dante Network with I/O devices placed wherever signal transport is needed, or in one of our three Calrec Hydra2 stage boxes, all connected via monomode optical fibre links.
Motorsport is a fast-paced, high-energy sport. What techniques do you use to relay this to the viewer at home?
The most important thing is that audio always follows the picture, hence the choice of mics placed on cameras. They give depth to the picture by accentuating the distance between the car and the camera.
The on-board RF cameras and mics are very immersive and give the driver’s point of view which can be very spectacular, especially in cases of accidents or a battle between two cars. They also allow the most passionate viewers to hear and recognise the sound of the engines running at high speed.
We also receive radio communications from 18 teams, as well as the race director’s announcements which are pre-selected and replayed by a specific operator. They shed light on the race events and explain the main teams’ strategies.
What challenges did you encounter and how did you overcome them?
The setup is huge in terms of the quantity of signals, but there are three main challenges: the distances, the weather conditions and the duration of the event. This meant dividing staff into teams dedicated either to one specific task or to one local area of the circuit. Due to the rain and the duration, there was a real need for servicing (such as replacing a wet microphone or an intercom device’s battery), and the distances meant having multiple local teams to solve any problems quickly.
It was also a big task for the RF team to operate so many RF links (cameras, microphones and intercom systems), over such a distance and in an overcrowded RF environment.
This was the first outing of the MS12 truck, the first AMP truck built with Calrec desks. Which features on the desks made your job easier?
As I said before, the relationship between the picture and the sound is very important. As there are a large number of sources, we needed to automate the mixing. We used the autofaders feature a lot, combined with Ember+ virtual GPIO. This last feature was very useful because it avoided the need for a large number of GPOs from the video router and GPIs into the console system.
The autofader interface is very user friendly. During the first qualification session we were able to easily fine tune levels and timing for each channel in order to guarantee smooth transitions.
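The principle behind this is simple to sketch: a camera tally, arriving as a virtual GPIO, drives a fader that ramps smoothly between a closed and an on-air level rather than hard-switching. The sketch below is purely illustrative – the class and field names are hypothetical and do not represent Calrec’s actual control API – but it shows the per-channel level and timing parameters that were fine-tuned during qualification:

```python
# Illustrative sketch (hypothetical names, not Calrec's API): a camera
# tally, delivered as a virtual GPIO, drives an autofader that ramps
# its mic channel between a closed level and an on-air level.

from dataclasses import dataclass

@dataclass
class Autofader:
    """One autofader: tally-driven fade between two levels (in dB)."""
    off_level_db: float = -90.0   # effectively closed
    on_level_db: float = 0.0      # on-air level
    fade_time_s: float = 0.5      # transition time, tuned per channel
    level_db: float = -90.0       # current fader position
    tally: bool = False           # virtual GPIO state from the video router

    def set_tally(self, on: bool) -> None:
        self.tally = on

    def tick(self, dt: float) -> float:
        """Advance the fade by dt seconds; return the new level."""
        target = self.on_level_db if self.tally else self.off_level_db
        step = (self.on_level_db - self.off_level_db) * dt / self.fade_time_s
        if self.level_db < target:
            self.level_db = min(self.level_db + step, target)
        elif self.level_db > target:
            self.level_db = max(self.level_db - step, target)
        return self.level_db

# A camera cut opens its mic channel smoothly instead of hard-switching:
fader = Autofader()
fader.set_tally(True)        # camera goes on air
for _ in range(4):           # 4 control ticks of 125 ms = one 0.5 s fade
    fader.tick(0.125)        # fader is now fully open
```

Each channel gets its own fade time and levels, which is exactly the fine-tuning described below.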
Was this your first time mixing on a Calrec console? How did it compare with other audio consoles you have used?
The system is very powerful; the ability to merge two routers via a Hydra2 link brings power and convenience. It is very easy to set up and allows physical resources to be shared between the OB van’s two consoles. In fact, the end user doesn’t really need to know which console core a resource is attached to. This feature is new for us, and it has really simplified the OB van’s engineering.
The Apollo interface is extremely user friendly. I appreciate the ability to arrange the surface layout as I want. As there are lots of physical controls it is possible to have a layout where everything is in direct access which is very convenient. The Artemis is more compact, but the big TFT touchscreens bring added visibility.
We used the second console – the Artemis – to premix and monitor the commentary positions given to the different broadcasters. This was mixed by a colleague who was not part of the engineering project and therefore didn’t know the Calrec consoles at all. After a two-hour explanation of the system philosophy and the desk ergonomics, he was able to do his job; it shows that the console is easy to understand and operate.
Audio networking is prevalent in modern large scale sporting events. Have you utilised high-density signal transports?
Yes, we have used high-density signal transport. Firstly, inside the OB van we have four MADI tie-lines (256 audio channels in and out) to the processed video router. All the audio to and from video equipment (mics on cameras, EVS and VTRs, embedded feeds, etc.) passes through these tie-lines. They are totally transparent to the user since we control both the consoles and the video router with VSM.
All the tie-lines between the console and the intercom matrix (carrying IFB programme feeds, commentary position “On Air” signals, etc.) are made over a Dante network. There is also the Hydra2 link between the two consoles’ cores.
Outside the OB van, for the Le Mans setup, we built a Dante network using part of the fibre mesh to carry the audio and intercom signals from different points of the race circuit. This was managed by an audio MCR built alongside the OB van, which was in charge of routing these signals to the proper destinations. The MS12, being part of this network, had a Dante link set up between the Apollo core and the MCR.
We also used three Calrec stage boxes, positioned in different locations in the pit lane and linked to the Apollo core by monomode optical fibre over Calrec’s Hydra2 transport. Where our video colleagues had to transmit or receive signals and were already using their video stageboxes (the video router is a Riedel MediorNet system), we added RockNet I/O modules to avoid deploying a pure audio stagebox.
We received the audio from the RF and on-board cameras embedded in the video, and we also exchanged a MADI link with the RF MCR to carry backup and ancillary audio.
The race was over 24 hours (as the name suggests), how did you cover all of it?
There were four audio people inside the OB van: two mixers, a guarantee engineer and an intercom operator. We rotated at the consoles every two hours in order to stay focused. During the night, from 11 PM to 6 AM, a dedicated team came in as reinforcement, allowing us to sleep a little more than two hours.
As viewers increasingly consume content in a variety of ways, and with clean feeds being supplied to other broadcasters, many more mixes are required. How do you manage to provide mixes for so many different outputs?
The MS12 has two separate audio rooms allowing us to mix two different programs at the same time, such as an international feed and a host broadcaster signal, for example.
Thanks to the Calrec consoles’ ability to have two real monitoring sections on one desk, it is possible to have two mix engineers on the same console: one using the speakers, and one on headphones (with full monitoring capacity, including PFLs). This is very useful when producing both a “dirty” and a “clean” feed with different EVS playouts or multilateral interviews.
Furthermore, the autofaders and the automix work very well on the commentary positions, which allowed us to lighten some of the mixes.
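For commentary positions, an automix typically works by gain sharing: each open mic is attenuated according to its share of the total input energy, so the overall loop gain stays constant no matter how many commentators speak at once. The sketch below illustrates that general principle (in the spirit of a classic Dugan-style automixer – it is not Calrec’s implementation):

```python
# Illustrative gain-sharing automix sketch (a generic Dugan-style idea,
# not Calrec's implementation): each channel's gain is its share of the
# total input energy, so the summed gain stays constant however many
# commentary mics are active at once.

def automix_gains(levels):
    """levels: linear input levels per mic; returns per-mic gain factors."""
    total = sum(l * l for l in levels)
    if total == 0.0:
        n = len(levels)
        return [1.0 / n] * n        # idle: share the gain equally
    return [(l * l) / total for l in levels]

# One commentator talking, two open but quiet mics: the active mic gets
# nearly all the gain, and the quiet mics are pulled down automatically.
gains = automix_gains([1.0, 0.05, 0.05])
```

Because the gains always sum to one, the mix engineer no longer has to ride every commentary fader by hand – which is the “lightening” of the mix described above.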
In terms of resources, the number of groups and mains available is no longer the limitation it used to be on some older systems. It is now possible to build huge configurations very simply, without workaround routing.
How has the truck been designed to withstand the ever changing demands put upon broadcast infrastructures? i.e. AoIP, immersive audio, interoperability etc.
The design principle for the truck was to be as modular as possible.
Its working areas have been designed for multiple uses. For example, one area can be used for video shading, as an EVS room, or as an audio editing room. The furniture, the cabling and the monitoring (audio and video) have all been designed with these different uses in mind.
Technically, we use VSM to control all the equipment. This means it is easy to modify the low-level infrastructure by adding or changing equipment to bring in new resources or new input and output formats. Provided all your equipment is connected together with a large number of tie-lines, VSM provides the ability to route a signal from one point to another without considering its full path. For example, a signal can arrive embedded in a video feed from the video router, pass through the console and be fed to the intercom matrix for listening purposes, all with only one patching action on a VSM X-Y panel.
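The idea behind that single patching action can be sketched as a control layer that knows the pre-declared tie-lines between devices and expands one logical route into every crosspoint along the chain. The device and tie-line names below are hypothetical, and this is not the real VSM API – just a minimal model of the concept:

```python
# Minimal sketch of a VSM-style control layer (hypothetical names, not
# the real VSM API): one logical patch is expanded into the full chain
# of crosspoints across the video router, the console and the intercom.

# Pre-declared tie-lines between adjacent devices in the truck.
TIE_LINES = {
    ("video_router", "console"): "madi_tie_1",
    ("console", "intercom"): "dante_tie_3",
}

def route(signal, path):
    """Return the crosspoint settings needed to carry `signal` along `path`."""
    crosspoints = []
    for a, b in zip(path, path[1:]):
        tie = TIE_LINES[(a, b)]               # tie-line joining device a to b
        crosspoints.append((a, signal, tie))  # device a: patch signal onto the tie
        signal = tie                          # device b picks it up from there
    crosspoints.append((path[-1], signal, "listen_out"))
    return crosspoints

# One "patch" on the X-Y panel: embedded camera audio, out of the video
# router, through the console, onto an intercom listen key.
plan = route("cam12_audio", ["video_router", "console", "intercom"])
```

The operator only ever sees the source and the destination; the controller sets the three intermediate crosspoints itself, which is what makes the infrastructure so easy to re-plumb.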
We also chose Dante for many audio links inside the OB van, such as the ancillary audio monitoring, which allows us to easily modify the infrastructure by adding or removing devices. This monitoring can now be extended by adding Dante monitors, either to expand the possibilities in one area or to add a new area outside the OB van.