Coherence and Time Alignment Awareness

There are scenarios where time alignment and coherence are of great importance, especially when you want to achieve clearer, better-managed sound and a well-defined stereo or multichannel image. Back in the analogue days this was impossible to achieve; it was digital technology that made audio coherence and time-alignment correction possible. Nowadays, every digital mixing console and DAW puts delay compensation at your fingertips. Even so, adjusting delays can be a time-consuming task involving measuring distances between sound sources and careful microphone placement. Things have now changed with our unique OnSoundGo OnTime plugin: it instantly tells you the exact delay compensation needed for perfect signal coherence and, therefore, maximum sound quality and clarity.
The problem of incoherent sound is constantly present whenever you mix two or more microphones in the same environment. Those microphones pick up similar sound signals or, technically speaking, highly correlated signals captured at the same time but in different places.
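To get a feel for the time scales involved, here is a minimal sketch (not part of any OnSoundGo tool) that converts the extra path length from a source to a microphone into a delay. The speed of sound of roughly 343 m/s and the 48 kHz sample rate are assumptions chosen for illustration.

```python
# Minimal sketch: convert a path-length difference between two microphone
# positions into a delay, assuming c = 343 m/s and a 48 kHz sample rate.

SPEED_OF_SOUND_M_S = 343.0   # approximate, at around 20 degrees C
SAMPLE_RATE_HZ = 48_000      # assumed sample rate

def path_difference_to_delay(distance_m: float) -> tuple[float, float]:
    """Return the delay as (milliseconds, samples) for a given extra path length."""
    delay_s = distance_m / SPEED_OF_SOUND_M_S
    return delay_s * 1_000.0, delay_s * SAMPLE_RATE_HZ

if __name__ == "__main__":
    for d in (0.1, 1.0, 3.0, 10.0):
        ms, samples = path_difference_to_delay(d)
        print(f"{d:5.1f} m extra path  ->  {ms:6.2f} ms  ({samples:8.1f} samples)")
```

Even a few tens of centimetres of extra path already amounts to a delay of several samples, which is why correlated pickups interfere so audibly.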
There are many situations where sound coherence is a crucial factor. Let's look at some examples:
  • Two or more people each wearing a lapel or lavalier microphone. The same sound source is picked up by every microphone, and even though the signal level differs from one mic to another, the small delay produces a slight but clearly audible comb-filter effect (see the sketch after this list).
  • Classical music multi-microphone recordings where a main stereo pair (coincident or not) is combined with multiple spot microphones distributed within the orchestra. The mix is severely blurred by the scattered repetitions of the sound.
  • Recording of a gig (in an open-air or enclosed venue such as the Royal Albert Hall) where the ambience microphones pick up the audience together with the heavily delayed direct sound.
  • Broadcasting something big, like a sports event, where signals travel through different paths. The international sound usually arrives together with the video signal, perhaps embedded in it, while the unilateral or personalised sound comes via an ISDN line or a different satellite link. Delays in these situations can be enormous.
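The comb-filter effect mentioned in the first example is easy to reproduce. The sketch below sums broadband noise with a delayed, quieter copy of itself and inspects the resulting spectrum; the 1 ms delay, 48 kHz sample rate and -6 dB leakage level are illustrative assumptions, not figures from this article.

```python
# Sketch of the comb-filter effect: summing a signal with a slightly delayed
# copy of itself notches out regularly spaced frequency bands.
import numpy as np

SAMPLE_RATE_HZ = 48_000
delay_samples = 48                 # 1 ms at 48 kHz, roughly 34 cm of extra path
leakage_gain = 0.5                 # the delayed pickup is quieter, not absent

rng = np.random.default_rng(0)
direct = rng.standard_normal(SAMPLE_RATE_HZ)           # 1 s of broadband "source"
delayed = np.concatenate([np.zeros(delay_samples), direct[:-delay_samples]])
mixed = direct + leakage_gain * delayed                 # naive sum of the two mics

# Ratio of the mixed spectrum to the direct spectrum shows the comb response.
spectrum = np.abs(np.fft.rfft(mixed)) / np.abs(np.fft.rfft(direct))
freqs = np.fft.rfftfreq(len(mixed), d=1.0 / SAMPLE_RATE_HZ)

# The notches fall at odd multiples of 1 / (2 * delay): 500 Hz, 1500 Hz, ...
for target in (500, 1500, 2500):
    idx = np.argmin(np.abs(freqs - target))
    print(f"{freqs[idx]:7.1f} Hz  ->  {20 * np.log10(spectrum[idx] + 1e-12):6.1f} dB")
```

With a -6 dB leakage level the notches dip by roughly 6 dB; the closer the two levels, the deeper the notches and the more audible the colouration.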
This issue could be nicknamed acoustic pollution or acoustic leakage: you end up getting sounds when (not where) you do not want them. Sound work often involves placing several microphones in a given environment, which is easy enough to understand. Problems arise when several sound sources sit close together, or when a dominant source is louder than the rest. Each microphone picks up its designated sound source, but also the stray sound waves from the surrounding sources. The situation is so common that it may not seem like a big problem.

If we look at the microtime issue (a topic covered in the forum), we see that, on the one hand, the speed of sound is relatively low for the spatial scale we are dealing with, and on the other hand, the perceptual characteristics of human hearing, shaped by its adaptation and evolution to the environment, mean that summing a signal with a minimally delayed copy of itself produces artificial, confusing patterns that are difficult for our brains to interpret. This property of human perception, related to the Haas effect, is what we use to locate sound sources in space. A natural sciences lecturer would say it originated as an adaptive defence mechanism: it lets us work out where the hungry lion is so we can run in the opposite direction.

A full understanding of microtime and careful delay management are therefore powerful tools that, used smartly, give you the opportunity to produce outstanding sound. Eventually, mastering these techniques will become common practice in sound capture. As with most technological innovations, they will first be used only in large, expensive productions and will gradually spread to the rest.

The problem of acoustic leakage is as old as sound technology itself. Several techniques have been used to cope with it:
  • Placing the microphones as close to the sources as possible, improving the ratio of the desired source's signal to the unwanted signals (much like an S/N ratio).
  • Using directional microphones. Unfortunately, we still rely on first-order polar patterns (cardioid, hypercardioid and figure-of-eight), which are only a timid attempt to improve directivity; we will have to wait and see what DSP and multi-microphone arrays have to offer in the near future.
  • Sound source separation or isolation, whenever possible.
  • Minimizing the impact of sound reinforcement. I’ve always been a fan of personal in-ear monitors instead of ‘polluting’ stage wedges.
After all the above techniques have been applied, the only way to improve sound quality further is to apply the time-alignment concept. The idea is simple: if a dominant source is captured by more than one microphone, we try to get that dominant source's signal perfectly in phase (aligned in time) throughout the entire mix. It is easier to picture if you imagine the dominant source as an impulse. OnSoundGo! has been working on this problem for over four years and has built a consistent Time Alignment Theory. They have studied many examples that confirm its premises and, finally, they have developed tools, such as the OnTime plug-in, to detect, measure and quantify alignment problems.
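As a rough illustration of the general idea (not OnTime's actual algorithm, which is not described here), the sketch below estimates the delay of a dominant source between two microphone signals by cross-correlation and then advances the late one before mixing. The signals, levels and 120-sample delay are invented for the example.

```python
# Sketch of basic time alignment: estimate the lag of a dominant source
# between two mic signals via cross-correlation, then advance the late one.
import numpy as np

def estimate_delay_samples(reference: np.ndarray, delayed: np.ndarray) -> int:
    """Lag (in samples) at which `delayed` best matches `reference`."""
    corr = np.correlate(delayed, reference, mode="full")
    return int(np.argmax(corr) - (len(reference) - 1))

def align(signal: np.ndarray, lag: int) -> np.ndarray:
    """Shift `signal` earlier by `lag` samples, zero-padding the tail."""
    if lag <= 0:
        return signal
    return np.concatenate([signal[lag:], np.zeros(lag)])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    source = rng.standard_normal(8_000)                  # dominant, impulse-like source
    close_mic = source.copy()                            # reference pickup
    spot_mic = np.concatenate([np.zeros(120), 0.6 * source[:-120]])  # later, quieter pickup

    lag = estimate_delay_samples(close_mic, spot_mic)
    print("estimated delay:", lag, "samples")            # expected: 120
    aligned_mix = close_mic + align(spot_mic, lag)       # coherent sum, no comb filtering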
