Coherence and Time Alignment Awareness
There are scenarios where time alignment and coherence are of great importance, especially when you want to achieve clearer, better-managed sound and a well-defined stereo or multichannel image. Back in the analogue days this was practically impossible to achieve; it was digital technology that made audio coherence and time-alignment correction possible. Nowadays, every digital mixing console and DAW puts delay compensation at your fingertips. Even so, adjusting delays can be a time-consuming task involving measuring distances between sound sources and careful microphone placement. Things have now changed with our unique OnSoundGo OnTime plugin: it instantly tells you the exact delay compensation needed for perfect signal coherence and therefore maximum sound quality and clarity.
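To see why measuring distances matters, recall that sound travels at roughly 343 m/s in air at about 20 °C, so every extra metre of acoustic path adds close to 3 ms of delay. The sketch below (a generic illustration, not part of the plugin) converts a measured extra path length into the delay it causes, in milliseconds and in samples; the 48 kHz sample rate and the 3.43 m example distance are assumptions chosen to give round numbers.

```python
# Speed of sound in air at roughly 20 °C; it varies with temperature.
SPEED_OF_SOUND_M_S = 343.0

def distance_to_delay(extra_path_m, sample_rate_hz=48000):
    """Convert an extra acoustic path length (metres) into the delay it
    causes, returned as (milliseconds, samples at sample_rate_hz)."""
    delay_s = extra_path_m / SPEED_OF_SOUND_M_S
    return delay_s * 1000.0, delay_s * sample_rate_hz

# Example: a spot microphone 3.43 m closer to the source than the main pair.
ms, samples = distance_to_delay(3.43)
print(round(ms, 2), round(samples))  # ≈ 10.0 ms, ≈ 480 samples at 48 kHz
```

This is the arithmetic you would otherwise do with a tape measure and a calculator; a delay-compensation tool effectively performs the inverse measurement directly from the audio.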
The problem of incoherent sound is present whenever you mix two or more microphones in the same environment. These microphones pick up similar sound signals or, in technical terms, highly correlated signals captured at the same time but in different places.
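Summing a signal with a delayed copy of itself produces the classic comb-filter response: the combined magnitude is |1 + e^(-j2πfτ)| = 2|cos(πfτ)|, with deep notches wherever fτ is an odd multiple of 1/2. A minimal sketch of that relationship (the function name is ours, for illustration only):

```python
import numpy as np

def comb_magnitude(freq_hz, delay_s):
    """Magnitude response at freq_hz of summing a signal with a copy of
    itself delayed by delay_s seconds: |1 + exp(-j*2*pi*f*tau)|."""
    return np.abs(1.0 + np.exp(-2j * np.pi * freq_hz * delay_s))

# A 1 ms delay (about 34 cm of extra acoustic path) notches at 500 Hz,
# 1500 Hz, 2500 Hz, ... and peaks (+6 dB) at 1000 Hz, 2000 Hz, ...
tau = 0.001
print(comb_magnitude(500.0, tau))   # → ~0 (full cancellation at the notch)
print(comb_magnitude(1000.0, tau))  # → ~2 (+6 dB at the peak)
```

Even a fraction of a millisecond of misalignment between two correlated microphone signals carves this pattern of notches across the audible spectrum, which is exactly the colouration described in the scenarios below.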
There are many situations where sound coherence is a crucial factor. Let's look at some examples:
- Two or more people each wearing a lapel (lavalier) microphone. The same sound source is picked up by every microphone. Even though the signal level differs from one mic to another, the small delays produce a slight but clearly audible comb-filter effect.
- A classical music multi-microphone recording where a main stereo pair (coincident or not) is combined with multiple spot microphones distributed within the orchestra. The mix can be severely blurred by these scattered repetitions of the same sound.
- A live recording of a gig (in an open-air or enclosed venue such as the Royal Albert Hall) where the ambience microphones pick up the audience together with a heavily delayed copy of the direct sound.
- The broadcast of a large event, such as a sports fixture, where signals travel along different paths. The international sound usually travels with the video signal, perhaps embedded in it, while the unilateral or personalized sound arrives via an ISDN line or a separate satellite link. Delays in these situations can be very large.
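In all of these scenarios, the first step towards a fix is measuring the delay between two correlated signals. One standard way to do that, sketched below with numpy (this is a generic textbook technique, not necessarily the plugin's internal algorithm), is to locate the peak of the cross-correlation between the two signals; the peak lag is the delay to compensate.

```python
import numpy as np

def estimate_delay_samples(reference, delayed):
    """Estimate the integer sample delay of `delayed` relative to
    `reference` by locating the peak of their cross-correlation."""
    corr = np.correlate(delayed, reference, mode="full")
    # Index (len(reference) - 1) corresponds to zero lag.
    return int(np.argmax(corr)) - (len(reference) - 1)

# Synthetic check: 100 ms of noise at 48 kHz, delayed by 96 samples (2 ms).
rng = np.random.default_rng(0)
x = rng.standard_normal(4800)
y = np.concatenate([np.zeros(96), x])[:len(x)]  # delayed copy of x
print(estimate_delay_samples(x, y))  # → 96
```

In practice a tool would refine this with fractional-sample interpolation and track level differences as well, but the cross-correlation peak is the core of any automatic alignment measurement.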
Several traditional techniques help to minimize these coherence problems:
- Placing the microphones as close to the sources as possible, improving the ratio of the desired source's signal to unwanted signals (S/N ratio).
- Using directional microphones. Unfortunately, we still rely on first-order polar patterns (cardioid, hypercardioid and figure-of-eight), a timid attempt at improving directivity; we will have to wait and see what DSP and multi-microphone arrays offer in the near future.
- Sound source separation or isolation, whenever possible.
- Minimizing the impact of sound reinforcement. I’ve always been a fan of personal in-ear monitors instead of using ‘polluting’ stage wedges.