Intimate Production Techniques

With the runaway success of Billie Eilish’s debut album ‘When We All Fall Asleep, Where Do We Go?’ (winner of four Grammys), it’s clear that ‘intimate’ or ‘mumbling’ vocals are in vogue. In fact, it would be accurate to state that ALL the sounds within a mix of this kind are produced with ‘presence’ or ‘intimacy’ in mind, not just the vocals. It might not be to your taste, but this type of production is fast becoming very popular amongst the ear-bud listening generation. If intimate productions rock your boat then the following techniques and processes should help you achieve an up-close and personal mix.

The approach to achieving a close-up and personal sound rests in the use of existing old-school technologies coupled with innovative new processes and a daring mindset. As technology moves forward so do production techniques – they are inexorably linked. Sadly, in today’s ‘let the software do the work’ ethos, producers seem reluctant to use the tried and tested techniques that engineers have developed for them. There is an unhealthy move towards this type of ‘analysis and application’ software, and we are seeing more and more software developers accommodating the demand. From track-analysis software to ‘one-knob magic’ processes, the role of the producer has changed to the point of being an assistant to the software rather than the other way around. The same can be said for the mastering market, where preset-driven software seems to be all you need to master a song nowadays.

However, out of all this negativity some good has surfaced, most notably in the design of multi-function plugins. We are seeing more and more plugins offering all manner of extended functionality for a given process; a good example is FabFilter’s Pro-MB (multiband compressor), which not only provides traditional downward compression but also offers upward compression and both downward and upward expansion. FabFilter’s approach of designing plugins that offer all the functionality inherent within a process is one of the main reasons professionals use them. iZotope have taken the multi-function ideology a step further and their products now come supplied with all manner of analysis and compensatory processing. Sound Theory’s Gullfoss is another product that offers adaptive processing: you dial in a set of values for the basic parameters on offer, sit back, and let the plugin do the work for you.
Mastering the Mix is another company that has grabbed the analysis-and-application market by the neck; their products are not only useful and helpful guides but most can make intelligent suggestions as to how to alter values to achieve optimum results. In fact, many software developers are going down this route and it would be fair to suggest that almost all mixing and mastering tasks can now be achieved with this type of analysis-and-application software.

1 UP 1 DOWN

All of this whinging has led me to the first of the processes used in achieving intimate productions – combined downward and upward compression, also referred to as two-way compression. But before I unleash all that is two-way compression, let me gently foray, with you, into the world of compression. Every producer knows that compression is about manipulating the dynamic range of audio and NOT the volume. A long, long time ago in a workshop far, far away an engineer came up with an insane invention; he called it ‘the volume knob’. This crazy creation had the amazing ability to raise or lower the volume of audio. Mad huh? At no point did he, or his mates, pose the question ‘why don’t we call this volume knob thingy a compressor?’ All jokes aside, audio compression is actually a very simple process and the subject has been eloquently covered in countless articles. With regards to intimate production techniques, the important distinction to outline is the difference between upward and downward compression and how these two processes affect the dynamic range of audio, both of which are critical in achieving ‘presence’, or ‘closeness’, in a mix context. For this article, and to keep the processes in context, the dynamic range of audio is taken to be the difference between the loudest and quietest parts of the audio signal, expressed in decibels (dB). A downward compressor makes the loud bits above the threshold quieter whilst leaving the quiet bits below the threshold unaffected. An upward compressor makes the quiet bits below the threshold louder (by compressing and applying make-up gain) whilst leaving the loud bits above the threshold unaffected.
By using downward and upward compression simultaneously, the audio signal’s dynamic range can be reduced from both above and below the threshold, resulting in a signal whose peak transients are far less compromised than they would be had the same amount of dynamic-range reduction been attempted with downward compression alone.
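The two gain laws can be sketched in a few lines of Python/NumPy. This is a minimal static dB-domain model: the function names, threshold, and 4:1 ratio are illustrative, and real compressors add attack/release smoothing on top.

```python
import numpy as np

def downward_comp(level_db, threshold_db, ratio):
    # Downward compression: reduce levels above the threshold by the
    # ratio; levels below the threshold pass unchanged.
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

def upward_comp(level_db, threshold_db, ratio):
    # Upward compression: raise levels below the threshold towards it;
    # levels above the threshold pass unchanged.
    under = np.maximum(threshold_db - level_db, 0.0)
    return level_db + under * (1.0 - 1.0 / ratio)

# A signal swinging between -40 dB (quiet) and -4 dB (loud):
levels = np.array([-40.0, -20.0, -4.0])
down = downward_comp(levels, threshold_db=-20.0, ratio=4.0)  # [-40, -20, -16]
up = upward_comp(levels, threshold_db=-20.0, ratio=4.0)      # [-25, -20, -4]
```

Note how the downward curve pulls the -4 dB peak down to -16 dB while leaving the quiet bits alone, and the upward curve lifts the -40 dB trough to -25 dB while leaving the peaks alone.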

We now know that compression simply alters the dynamic range of the audio being processed, but how does that help us achieve intimate productions? The answer is quite simple, but it does require you to use your imagination in envisaging the shape of audio processed with two-way (combined) compression. Imagine the audio as a nice squishy hotdog, no bread of course. Now imagine grabbing the hotdog between your fingers and squeezing its width. This is what happens when you apply downward and upward compression at the same time: you are, in effect, squeezing the audio both above and below the threshold closer to the threshold. If the quiet bits are louder and the loud bits are quieter then the audio is much easier to gain stage to a specified level. Let’s look at this in context: you have a busy mix and you are trying to get the vocals to sit nicely in it. Each time you drop the level of the loudest part of the audio, the quiet bits get masked by other sounds; if you raise the quiet bits to be heard, the loud bits are too loud. Of course, you can use downward compression to narrow the dynamic range and afford a more balanced difference between loud and quiet bits, but this comes at a cost: peak transients are compromised. A better approach might be to use volume automation to control these gain variances, and this is the more traditional way of managing variable gains within a mix, but with intimate productions we are trying to achieve an even narrower dynamic range without resorting to aggressive downward compression. The idea is to use a range that acts as both the upper and lower gain limit for the audio to ‘travel’ in, so that once left at the target gain value it can be heard clearly and in detail.
Once two-way compression is used and the target value set no automation is required as the overall range the audio travels in is so small that the loud and quiet parts are almost at the same level and can, therefore, be left at one specified target gain value.

You can apply two-way compression as two distinct processes or as a single process. A novel approach to achieving it as two separate processes is to use your DAW’s auxiliaries to house each process. By disabling the direct routing for the channel the audio resides on and leaving only the auxiliaries active, you can blend various amounts of each compressor mode to taste. You can also apply further processing to each auxiliary, allowing for even more processing choices. I like to keep things tidy and use combined processing for two-way compression. The Waves MV2 (fig 1) is a fun tool for achieving both downward and upward compression simultaneously, but it does not allow for selecting different threshold values or attack and release times, nor does it offer control over the ratio. HoRNet’s Dynamics Control plugin (fig 2) is far more versatile: it offers an upward and a downward compressor plus a compressor/expander which acts to squeeze the sound around the threshold. You can alter threshold and ratio values for each compression mode and there is a global attack and release function to shape all the compressors. However, the plugin currently has no make-up gain feature, which I find a little surprising; this omission forces the user to use separate processing to control the output gain. I use two-way compression on all sounds that exhibit a wide dynamic range. By narrowing the range, the quieter elements of a sound are exposed and this translates across as ‘presence’.
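The parallel aux approach can be sketched as follows. Assume two hypothetical aux buses whose level envelopes (in dB) have already been processed, one downward- and one upward-compressed; blending them to taste is then just a weighted sum in the linear domain, exactly as a DAW mix bus would sum them.

```python
import numpy as np

def db_to_lin(db):
    # Convert dB values to linear amplitude.
    return 10.0 ** (np.asarray(db) / 20.0)

# Envelopes (dB) of the same source after each hypothetical aux bus:
env_down = np.array([-40.0, -20.0, -16.0])  # downward-compressed copy
env_up = np.array([-25.0, -20.0, -4.0])     # upward-compressed copy

# Blend the two auxes to taste (equal weight here) in the linear
# domain, then read the result back in dB:
mix = 0.5 * db_to_lin(env_down) + 0.5 * db_to_lin(env_up)
mix_db = 20.0 * np.log10(mix)
```

The blended envelope always lands between the two aux envelopes, which is what lets you dial in ‘how much’ of each compression mode you want.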

waves mv2 downward upward compressor

ADAPTIVE PROCESSING

Upward and downward compression are great processes for altering the dynamic range of audio, and thanks to the narrowed range the quiet bits can be heard alongside the loud bits, which means we can work with very narrow gain margins and still hear everything in detail. But this isn’t the only process we use to create intimate textures within a mix context. One very powerful technique that works in a markedly different way to two-way compression is adaptive and compensatory processing, and the best example I can think of is Sound Theory’s Gullfoss. Gullfoss is an intelligent auto-EQ plugin. It has only five parameters on offer and comes with a single-page manual, but don’t let this simple and minimalistic approach fool you: the software is remarkably powerful and yet easy to use. The real power lies in what goes on behind the scenes, as the whole topology and product goal is based on a model of human auditory perception. The user dials in settings for the five parameters on offer and the software creates an EQ response that adapts itself continuously and dynamically as the audio plays. It is stated that the EQ will counter the side-effects of phase problems, most notably the temporal smearing that takes place with minimum-phase equalisers and multi-mic scenarios, and that it is constructed to avoid overshoots and ringing. With regards to intimate productions, however, it is what the proprietary perception modelling achieves in exposing masked frequencies and taming pronounced ones that matters. The two main parameters are Recover and Tame: Recover exposes and processes frequencies that are masked, and Tame suppresses and processes the dominant frequencies that cause the masking in the first place. The final three parameters control Recover and Tame and balance the overall EQ response.
The beauty of Gullfoss is that once you have dialled in the necessary settings the software takes over, applies the desired changes in real-time, and continually adapts to achieve the goals set by the parameters. With intimate production techniques, we are constantly trying to unmask and emphasise frequencies and bring them up in the mix closer to the more dominant frequencies that may, at times, be the cause of the masking. In effect, we are trying to make the quieter hidden frequencies louder and the louder dominant frequencies quieter. Although this is not quite the same as narrowing the dynamic range of the audio, the overall process does result in a similar ‘quiet meets loud, loud meets quiet’ scenario, which, after all, is exactly our goal in trying to achieve an up-close and personal sound.
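Gullfoss’s perception model is proprietary, so the following is only a caricature of the Recover/Tame idea: per analysis block, nudge quiet (masked) bands up and dominant bands down towards the average band level. This toy sketch is purely illustrative and bears no relation to the actual algorithm; the function name, fractions, and band levels are all made up.

```python
import numpy as np

def recover_tame(band_db, recover=0.3, tame=0.3):
    # Toy adaptive-EQ step: push each band a fraction of the way
    # towards the mean band level. Quiet (masked) bands get a boost
    # ('Recover'); dominant bands get a cut ('Tame').
    band_db = np.asarray(band_db, dtype=float)
    mean = band_db.mean()
    gains = np.where(band_db < mean,
                     (mean - band_db) * recover,  # boost masked bands
                     (mean - band_db) * tame)     # cut dominant bands
    return gains

# Three bands: one average, one dominant, one masked (dB):
gains = recover_tame([-30.0, -10.0, -50.0])
# The dominant -10 dB band is cut by 6 dB, the masked -50 dB band is
# boosted by 6 dB, and the average band is left alone.
```

Re-running this per block as the audio plays gives the continuously adapting ‘quiet meets loud’ behaviour described above.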

NO SPACE IS GOOD SPACE

Reverb is without a doubt the most important effect used by producers. It is predominantly used to define the overall space that all the sounds within a mix reside in. Well-recorded sounds do not need any additional processing to define their location within a given space, as the recordings themselves will carry the characteristics of the space they were recorded in. However, even with well-recorded stems, producers will still use reverb to glue all the sounds into a pre-defined space. In addition, reverb is used to express depth. By using the various reverb parameters, most notably diffusion and filtering, we can ‘place’ sounds within a space. By ‘placing’ I am referring to sounds that sit up-front or ‘at the back’ of a mix, not left and right pan locations. Intimate production techniques generally omit reverb almost completely, with the exception of providing a reference (see Perception), and instead use equalisation and filtering to achieve both depth and front-back placement. This approach requires a different mindset to the norm, and the real skill of the producer lies in achieving space and depth without reverb. If you consider how sound travels, you will note that high frequencies dissipate much faster than low frequencies. We mimic this behaviour by low-passing sounds that need to sit further back in a mix and high-passing sounds to bring them up-front. Volume plays a huge role in the depiction of distance, and by coupling this with equalisation/filtering we can create both a sense of space and a sense of how sounds sit within that space.

Although mix reverbs are a no-no, it doesn’t mean that other effects cannot be used. The aim is not to use effects that denote a given space but, rather, to use them to emphasise and colour a sound, and that includes reverb. I often use delay effects instead of reverbs to extend a sound’s sustain or to add the perception of depth. Delays have the advantage of not smearing frequencies the way reverbs do, as they concentrate on single discrete ‘reflections’ as opposed to the multitude of early/late reflections afforded by reverb effects. Distortion is another process I use quite regularly to treat vocals instead of opting for an equaliser, and with dry unaffected mixes it adds an extra layer of sparkle and definition. Harmonic exciters are potent alternatives to simple equalisation processes, as they can add sparkle and presence to a sound simply through the addition of generated harmonics. Preamps are often used instead of minimum-phase equalisers as they do not perform any corrective processing but instead colour a sound in a pleasing way that lends itself to minimalistic and intimate productions. All forms of harmonic distortion are highly useful when it comes to intimate productions: they often provide depth and vibrancy to staid sounds and work extremely well with minimalistic productions, as they can be heard in their entirety thanks to the low track/stem counts.

If we use Billie Eilish’s production techniques as a point of discussion, you will note that reverb has been used quite extensively throughout the album. However, it is context that matters: reverb for colouring sounds is a ‘yay’ whereas master mix reverbs are a ‘nay’.

I often use reverbs to add presence to certain sounds, but the way I use them is to remove almost all the late reflections and work with the early reflections, an imperceptibly short decay, a very short pre-delay, and tons of diffusion. In effect, I am removing all the parameters that would help denote a given space. Instead of the ‘reverb wash’ effect, I am aiming for ‘presence’.

PERCEPTION

A trick that many EDM producers use, and to be honest it is an old-school technique, is to utilise an ambient looped or sustaining sound to act as the backdrop or anchor for the mix. All the other sounds in the mix are then referenced against this ambient backdrop. The psychoacoustic effect of referencing a sound against an ambient bed is that the sound is perceived to be more pronounced and clearer. In EDM you will often hear pink-noise sweeps appear at breaks and bridges, and just before the drum beat and bass-line progress into full flow. The effect is quite noticeable: the drum beat and bass-line sound stronger, deeper, and clearer. More importantly, they appear to sit in their own given spaces even though no effect has been used to denote the space for the mix. It is the reference that tricks the mind into thinking there is depth and width to the mix. With intimate productions, the vocals are usually what all other sounds are referenced against. If you listen to Billie Eilish’s Bury a Friend you will notice the vocals have been ever so slightly affected; once the mix sounds are referenced against the vocals, the perception of ‘space’ takes over the whole mix.

Complementary processing also works in tricking the brain into perceiving one sound as more pronounced than another. Whereas with ambient referencing we reference sounds against a single texture/colour, with complementary processing we highlight one sound against a like-for-like sound. A good example of this type of brain trickery arises when processing low-frequency sounds that share similar frequency ranges, and the best example I can think of, one that has cost many a producer their sanity, is marrying the bass and the kick within a mix. The traditional approach is to use ducking to control the interchange between the bass and the kick drum, but this is not a perfect solution, as one sound needs to be attenuated to create the space for the other to explore, and vice versa. With minimalistic mixes the ducking process is obvious and easily heard, and that is not what producers aspire to achieve. The secret to a good production is that processes go unnoticed and only the results are heard; in the case of ducking, hearing one sound above another is not the aim. The aim is for BOTH sounds to be heard at the same time, and for that to be achieved we need a different set of processes. A combination of compression and expansion yields a far stronger perceptive result than simple ducking. The technique involves downward compressing the fundamental of one sound and expanding its harmonics, while expanding the fundamental of the other sound and downward compressing its harmonics. This see-saw process fools the brain into thinking that both sounds are playing at the same time and is a highly effective solution for managing sounds that share frequencies, predominantly at the fundamental.
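The see-saw can be sketched with static gains standing in for the dynamic compression and expansion. All the band levels and gain values here are made up for the example; the point is only the mirror-image gain pattern.

```python
# Illustrative static gains (dB) standing in for the dynamic
# processing: compress the kick's fundamental and expand its
# harmonics, then mirror that on the bass, so each sound 'owns' a
# different part of the shared low-frequency range.
kick_gains = {"fundamental": -4.0, "harmonics": +4.0}
bass_gains = {"fundamental": +4.0, "harmonics": -4.0}

def apply_gains(bands_db, gains_db):
    # Add the per-band gain to each band's level (both in dB).
    return {band: bands_db[band] + gains_db[band] for band in bands_db}

kick = apply_gains({"fundamental": -10.0, "harmonics": -18.0}, kick_gains)
bass = apply_gains({"fundamental": -12.0, "harmonics": -20.0}, bass_gains)
# The bass now leads at the fundamental while the kick leads in the
# harmonics, so the ear resolves both sounds simultaneously.
```

In practice the gains would come from compressors and expanders keyed to each hit rather than being static, but the complementary pattern is the same.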

THE AIR BAND TRICK

In audio production, we producers are always looking for ways to trick the brain into thinking that something is there when it isn’t. Using reverb to denote space is one such example. But because we cannot use reverb with intimate productions, we head straight for the air band, and with some clever trickery we can fool the brain into thinking that the mix is airy, sparkly, and has acres of presence. To work the air band, which invariably lies between 10 and 20 kHz, we need to process frequencies that actually exist within that range. Some producers apply huge gain boosts to this range, even if no sounds are actually present within it, thinking this will magically create ‘air and space’. In fact, all this achieves is a brittle and harsh high end. Once the air band range has been established, gentle boosts or some form of harmonic excitation can really make this range stand out and sparkle. However, there is a very cool trick that I have been using with intimate productions and it works quite well in providing a sense of space and airiness without the use of reverbs.

The process uses a plugin with a feature that I wish more manufacturers would adopt. The weapon I am referring to is FabFilter’s Pro-Q 3 and in particular its ‘split band’ feature. This allows the user to place an EQ node anywhere on the audio’s frequency spectrum and to split it into two further nodes, left and right. These nodes can then be moved to create a wider stereo image for the given frequency range. The trick is to find the perfect air band location. Simply sticking a node around the 10-12 kHz range does not magically create ‘presence’ and ‘air’; you need to locate the frequency range that actually contains high-frequency information to use this feature truthfully. Let me show you how easy it is to achieve this brain foolery with the following example:

I am using a segment of an instrumental track (below) kindly sent to me by one of my students. The track is called Icarus and I have selected a section that I feel will benefit greatly from this air band trick.

(audio) icarus segment.mp3

I have imported the track into Cubase and inserted the wonderful FabFilter Pro-Q 3 equaliser on the Icarus audio channel. The GUI’s spectrum analyser displays the frequency response of the audio, and it is actually quite easy to locate the air band just by viewing which frequencies are active at the upper end of the spectrum. Select a midway point between the start and end of the air band and place a node there. By using the band solo feature we can audition only the frequencies in this range, which makes it considerably easier to set the air band than auditioning the whole spectrum. I have left the Q factor (bandwidth) at its default value of 1.00, which affords a wide enough range to capture the air band; if you want to fine-tune the range, all you need to do is change the Q factor. Once you are happy with the air band range, split the node into two further nodes, Left and Right. You can now separate and position the Left and Right nodes to taste, as I have done below.
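To make the idea concrete, here is a rough Python/NumPy caricature of the split-band trick: a crude FFT-domain band boost stands in for Pro-Q 3’s filters, with the band shifted slightly downward on the left channel and upward on the right so the top end decorrelates and reads as wider. All frequencies, gains, and the detune amount are illustrative.

```python
import numpy as np

def boost_band(x, sr, f_lo, f_hi, gain_db):
    # Crude FFT-domain stand-in for an EQ bell: scale every bin inside
    # the band by the requested gain.
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / sr)
    g = np.ones_like(f)
    g[(f >= f_lo) & (f <= f_hi)] = 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(X * g, len(x))

def air_band_split(left, right, sr, lo=10000.0, hi=16000.0,
                   gain_db=3.0, detune=0.1):
    # Boost the air band in both channels, but shift the band slightly
    # down on the left and up on the right to widen the image.
    return (boost_band(left, sr, lo * (1 - detune), hi * (1 - detune), gain_db),
            boost_band(right, sr, lo * (1 + detune), hi * (1 + detune), gain_db))

sr = 48000
t = np.arange(4800) / sr
tone = np.sin(2 * np.pi * 10000 * t)  # 10 kHz test tone, dual mono
l_out, r_out = air_band_split(tone, tone, sr)
# 10 kHz falls inside the left band (9-14.4 kHz) but below the right
# band (11-17.6 kHz), so only the left channel is lifted: the channels
# now differ in the air band even though the input was identical.
```

A real implementation would use proper shelving/bell filters rather than hard FFT-bin gains, but the left/right asymmetry is the whole trick.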

air band trick using fabfilter pro q3

(audio) icarus segment air band trick.mp3

You can clearly hear the difference this little trick has made to the overall texture of the mix. The audio sounds as if it has both presence and space even though we have not used reverb to define the space. In some ways, and this is only relevant when it comes to the air band, it sounds as if we have created a faux reverb.

LESS IS MORE

Intimate production techniques only work if the mix is minimalistic. It is hard to define clarity, space, depth, and presence when many sounds are playing simultaneously. Whereas that density is exactly what is required for producing pop music, it is the exact opposite of what this type of production needs. With pop productions it is not uncommon to have 100+ stems, as stems/sounds are used for layering rather than having each sound sit in its own space. The interest of the listener is maintained by the multi-layering process: each sound appears to have deep layers that the brain tries to evaluate and reference, and this keeps the brain active and interested, which is ultimately the aim of every producer – to keep the listener interested.

Hip Hop production is the closest comparable genre for the type of processes covered in this article, as Hip Hop leans towards minimalistic productions both in the number of stems used and in how each stem is presented. The idea is to have a sparse bed for the rap and backing vocals to explore and reside in. Instrument sounds are kept to a minimum and only used to provide a contrast to the driving rap lines and sweet backing vocals. Intimate productions follow the same ethos but take it to a whole new level. Because reverb is regarded as hell-spawn, instrument sounds have to be processed with even more attention to detail; call it ‘precision processing’. If each sound is not crafted to perfection, the errors jump out at you with no consideration for your karma or street cred. It’s amazing how constricted a producer feels when reverb is removed from the equation, but I think it’s a good thing. It forces the producer to think outside the box and experiment with processes they might not have used had reverb been the go-to effect for all things spatial. It is through these restrictions that clever and innovative new processes are discovered.
When listening to a busy mix it is hard to isolate specific sounds and evaluate the frequency spectrum and dynamic behaviour each sound boasts. Sounds will overlap and either sum or mask each other, and the dynamic motion of individual sounds is difficult to ascertain. With a sparse and minimalistic mix, however, each and every sound is heard in its entirety, and this is why it is imperative that each sound is optimised to as near to perfect as possible: errors are easily exaggerated in minimalistic mixes. In busy mixes errors can often go unnoticed, but not so with intimate productions.

UP, DOWN AND AWAY

Whereas compression is used to narrow the dynamic range of audio, expansion is used to extend or widen it. This may seem to contradict the advice from the 1 Up 1 Down section, but it is actually an additional process performed on the whole mix as opposed to individual sounds. In mastering, engineers tend to use compression and limiting to homogenise a mix so that it displays a good balance of volume against dynamics. Loudness is not the aim, even though it still seems to be a prerequisite for certain genres. Expansion, for some reason, tends to go unnoticed by less experienced mastering engineers, and yet it is such a powerful process. What is even more perplexing is that upward expansion seems to be used far more often than downward expansion: making the loud bits louder above a specified threshold is apparently more appealing than making the quiet bits quieter below it. I think this harks back to the loudness issue. We tend to think loudness affords more detail, whereas in reality separation is a bigger influence. Although the result is the same when it comes to extending the overall dynamic range of the mix, the perception of how the dynamics behave is very different. Which expansion mode you use depends on how you want the listener to perceive the ‘up close and personal’ attributes of the mix. If the mix is biased towards a narrow-band response, with all the sounds coming across as ‘the same level’, then I might opt for upward expansion. By structuring the threshold such that only the peak transients are highlighted, upward expansion raises the peak transients whilst leaving everything below the threshold unaffected. The overall effect is that the attack portions of the sounds now sit above the bodies, which comes across as ‘pronounced’. Generally, and I use this word tentatively, most peak transients tend to sit within the attack component of a sound, as they define how velocity is applied to the sound.
With vocals, for example, it is easy to understand this behaviour: plosives and sibilance tend to carry a lot of peak transients, whereas the body of a word that follows the prominent attack will generally be quieter, with less pronounced peaks. However, I might not want to pronounce the peak transients and instead aim to quieten the quieter elements of the mix to achieve an even more intimate and controlled response. Staying with the same threshold setting but using downward expansion, the quiet parts are made even quieter whilst all the louder peak transients are left untouched, and this translates across as more intimate and less distinct. Which mode you select really depends on what you are trying to achieve, and because both modes extend the dynamic range it comes down to how you want the mix to be perceived. My advice is to try both modes, render the mix, listen to the overall responses, and gauge what each mode is doing to the audio using a spectrum analyser.
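The two expansion modes mirror the compression sketch from earlier. Again this is a static dB-domain model with illustrative threshold and ratio values; real expanders add attack/release smoothing, and parameterisations vary between plugins.

```python
import numpy as np

def upward_expand(level_db, threshold_db, ratio):
    # Upward expansion: push levels above the threshold further up;
    # everything below it is untouched.
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db + over * (ratio - 1.0)

def downward_expand(level_db, threshold_db, ratio):
    # Downward expansion: push levels below the threshold further
    # down; everything above it is untouched.
    under = np.maximum(threshold_db - level_db, 0.0)
    return level_db - under * (ratio - 1.0)

levels = np.array([-30.0, -10.0, -2.0])  # body, loud, peak transient
up = upward_expand(levels, threshold_db=-4.0, ratio=1.5)    # [-30, -10, -1]
down = downward_expand(levels, threshold_db=-4.0, ratio=1.5)  # [-43, -13, -2]
```

Both modes widen the overall range, but upward expansion does it by lifting only the -2 dB peak while downward expansion does it by dropping everything below the threshold, which is exactly the perceptual difference described above.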

Let us look at an example that incorporates both compression and expansion, but using mid and side. I often use this process when mastering mixes and it really does add another level of detail that conventional downward compression, static equalisation, and limiting don’t. Using the FabFilter Pro-Q 3 dynamic equaliser, we will add presence and motion to a mix. I will stay with the Icarus audio file as we are familiar with it.

(audio) icarus segment.mp3

The aim is to solidify the low end of the mix (mid) using downward compression and add presence and motion to a wide range of upper frequencies (side) using upward expansion.

The process of locating where to place each of the two nodes is agonisingly simple. Use the spectrum analyser to see where the prominent low and high-frequency ranges lie. Create a node somewhere in the low-frequency range and duplicate the process for the high-frequency range. Solo the low-frequency band and listen to make sure you are capturing only low-end frequencies and not the mid-range. As a starting point, I generally place a node at 100 Hz and move it around, whilst auditioning in solo mode, until I have caught the most prominent low frequency, and from there I create the range using the Q factor (bandwidth). Once you are happy with the node placement and frequency range, click on the node and select Stereo Placement/Mid. Perform the same steps for the high-frequency range, this time selecting Placement/Side (below).

up and down m/s using fabfilter pro q3

(audio) icarus segment up and down.mp3

Now that we have successfully created the mid and side components we need to define each range’s behaviour. I have opted for downward compression for the mid element and upward expansion for the side element. Adding sparkle and presence to the side element helps to lift the mix and add both space and clarity. The fact that the mid element is compressed at the same time as the side element is expanded further pronounces the sides whilst keeping the low-frequency mid under control. This is a simple yet effective mastering process but please don’t feel you have to use the exact same behaviours as I have. Experiment with different behavioural modes, thresholds, split bands and so on until you achieve the texture you are after.
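The routing can be sketched as a standard mid/side encode/decode, with static gains standing in for the dynamic compression and expansion (the gain values and sample pairs are illustrative only):

```python
import numpy as np

def ms_encode(left, right):
    # Mid is the sum (centre) signal, side the difference (width) signal.
    return (left + right) * 0.5, (left - right) * 0.5

def ms_decode(mid, side):
    # Inverse of ms_encode: recover left and right.
    return mid + side, mid - side

left = np.array([1.0, 0.5])   # first sample has stereo content,
right = np.array([0.2, 0.5])  # second sample is pure mono
mid, side = ms_encode(left, right)

# Static gains standing in for the dynamic processing: the mid is
# pulled down (downward-compression stand-in) while the side is
# pushed up (upward-expansion stand-in).
mid = mid * 10.0 ** (-3.0 / 20.0)
side = side * 10.0 ** (+3.0 / 20.0)
new_left, new_right = ms_decode(mid, side)
```

After decoding, the left/right difference is larger (wider sides) while the left/right sum is smaller (controlled centre), and mono material passes through with its channels still identical.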

LOW END

The low end of a mix refers to the low-frequency content present within the mix and how those frequencies are processed. Many believe this is exclusive to the bass and kick, but it actually refers to all frequencies that reside in what the producer determines to be the low end of the mix. Invariably, the low end of a mix sits in the 0-800 Hz range, though this is not gospel, as the low-frequency range of one mix might be very different to another’s, and as vocals, synths, pads etc. share this range it is critical that this area is processed correctly.

It might seem strange picking low end as a subject for intimate production techniques but it is this area that reigns supreme for this type of production. With certain genres like Hip Hop, the two critical areas of processing are the low end and the vocals. It is the same with intimate productions. The low end acts as the bed for the mix and all other sounds sit in or hover around and above this bed. In effect, the low end anchors the mix and all other sounds are given free rein to explore the remaining frequencies.

The process I use for managing the low end in intimate productions is the exact opposite of what I do for individual sounds – upward expansion! The aim is not to reduce the dynamic range of the bass and kick sounds but to extend it, specifically above the threshold. By setting the threshold to just below the peak transients and using upward expansion, we can push the peak transients further up the dynamic-range ladder. The difference from quiet to loud is now extended, which helps certain low-frequency sounds peak above the mix and then drop below the average mix level to allow other sounds to explore the vacated, or attenuated, space. This is actually quite an important distinction to make: low-frequency sounds are highly intrusive in that they can completely dominate a mix if left at a constant level near the mix level, whereas mid to high-frequency sounds are not as intrusive and benefit from a constant level, as shown in the 1 Up 1 Down section. This is only relevant in the context of intimate productions. The idea is to have all sounds occupy a certain level range for the mix while the low-frequency sounds dip below and peak above this set range.
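In envelope terms the move looks like this. The dB values are made up for the example: the 808’s hits start just below the piano’s average level and the expansion pushes only the hits above it, leaving the tails untouched below the mix level.

```python
import numpy as np

def upward_expand(env_db, threshold_db, ratio):
    # Upward expansion: push levels above the threshold further up,
    # leaving everything below it untouched.
    over = np.maximum(env_db - threshold_db, 0.0)
    return env_db + over * (ratio - 1.0)

mix_level_db = -10.0                              # average level set by the piano
env_808 = np.array([-12.0, -20.0, -12.0, -20.0])  # hit, tail, hit, tail
expanded = upward_expand(env_808, threshold_db=-14.0, ratio=3.0)
# Hits jump from -12 dB to -8 dB (now above the piano); tails stay at
# -20 dB, leaving space for the piano between hits.
```

The threshold sits just below the hits, so only they cross it; that is what produces the ‘peak above, drop below’ motion around the average mix level.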

The best way to demonstrate this is to use the problematic Roland TR-808 bass drum and a dynamic piano line, and mix the two using upward expansion on the 808 to raise and drop it above and below the average mix level, which in this case is set by the piano line.

(audio) piano riff 90 bpm.mp3

(audio) 808 kick 90 bpm.mp3

(audio) piano and 808 90 bpm.mp3

You can hear that the 808 is struggling to be heard above the piano line. Let us now insert FabFilter's Pro-Q 3 on the master bus and use expansion to accentuate the 808 line. I have exaggerated the process so you can hear in detail how the 808 drops and peaks above the average mix level.

(image) Using M/S with FabFilter Pro-Q 3

(audio) piano and 808 expanded 90 bpm.mp3

The 808 drops nicely below the average mix level allowing the piano to be heard in full and when the expansion process kicks in the 808 rises above the average mix level and can be heard in its entirety.

This is a very cool approach to achieving separation whilst keeping detail and intimacy in check.

Tracking

Let me end this article by touching on what I feel is the most important aspect of achieving intimate productions: tracking. If we use Billie Eilish as an example, you will note that all her recordings are sung into the microphone both closely and at a constant level. The overall texture is attributed to the room she sings in, and it might surprise you to know it is a bedroom with no acoustic treatment. The environment is critical when it comes to vocal tracking, and we spend so much time and money trying to achieve a natural sound that we sometimes forget to use the room's qualities to our advantage, as Eilish and Finneas did with her debut album. Microphone choice is as important as mic technique and the recording environment. I have achieved excellent results using both small and large-diaphragm condensers, but results can vary depending on a number of factors: the recording environment, the preamp and mic technique. The one thing I have noticed, be it deliberate or by accident, is that Billie Eilish uses the proximity effect to her advantage, whereas most producers try to remove it. Whether this is actually the proximity effect caused by her using a cardioid mic at close quarters, or the room's acoustics, is hard to tell, but it is a wonderful way to add to the presence factor of her vocal deliveries. On occasion, I will deliberately record with the proximity effect, but the trick is to control it, by varying both the distance to the mic and the angle of the mic, so as to achieve the right level of low-end presence as opposed to booming low-end mush. At other times I might opt for a large-diaphragm condenser, as most microphones of this topology will invariably have what we refer to as a 'vocal lift' – call it the microphone's response or colour if you will. The response of the microphone is as important as the preamp and the recording environment, so choose your mics carefully for the given task, whatever it is.

I could dither (you see what I did there huh?) all day about intimate productions, but what I have tried to achieve with this article is to give you an insight into some of the techniques we producers use to achieve intimate mixes. As is the case with all things audio, experiment and find your own bespoke techniques, and when you have, share them with the world.

If you found this article to be of use, then these might also interest you:

Mixing Pop Music

Mixing Hip Hop

MixBus Strategies

For those that are just entering into the ‘write and produce’ industry, selecting the right sound card at the right budget can seem both daunting and complex. However, it’s not all doom n’ gloom. By following a few simple steps the experience can be made both painless and quite rewarding.

The first Prozac-induced decision to make is which protocol you need/want to use: Firewire, USB 1/2/3/3.1/C, Thunderbolt 1/2/3, Gigabit Ethernet, PCIe, Klingon/Deanna Troi Empath Interface (KDTIE), or whatever the hell the white coats are coming up with? I am not sure. Let's see which drug is in fashion at the time and we'll go from there. Actually, almost all jokes aside, you can err on the side of caution and let the sound card make the decision for you. Suffice to say, most sound cards will use Firewire, PCIe, USB or Thunderbolt. Entering the debate as to which protocol serves best would not only confuse matters greatly but would also be a huge subject to cover in a quick-byte article. Our industry is customisable in that you can build bespoke systems to run just about any protocol in any configuration, so complicating this article would not help the beginner.

What is important is which sound card would best suit your current requirements. So, let me make this process a little easier to digest; for that, it's best to run through a simple guide.

Connectivity

This is the biggie and the most important aspect when it comes to buying a sound card. Whereas driver reliability, preamp quality, etc. all play a part in the decision-making, it is connectivity that tends to feature as the priority for most sound card purchases.

If you are working solely within your DAW and require no form of recording or auditioning of external hardware, then the task has been simplified considerably: a simple stereo (2-in, 2-out) device will suit your needs. Most modern sound cards will offer a stereo input that can also double up as two separate mic/line inputs, and that can be very useful. But if you are after something a little more flexible, then it might be worth thinking on a broader scale, and to avoid the minefield of over-specifying your requirements, the most important question you need to ask yourself is 'what do I want to record?' This will determine what types of inputs you will need and how many.

Make a list of all the hardware that you want connected to the sound card. This should include outputs as well as inputs. Quite often, beginners will list all the inputs required but forget to accommodate the outputs. Be as thorough as you can when compiling the list, as expandability might be a critical requirement for your setup in the future. Even if it is not, add one extra of each input/output type to the list, just in case – you never know when you might need the spares. Once you have compiled your list you need to be very clear about the types of connections required for each device. Most manufacturers will list, in their manuals, the exact type of wiring required for various cables and connections, and it is usually up to the buyer to make sure the right connections and cables are used. I don't want this single sentence to drive you to drink. Most cables are standard in configuration and should work out of the box on most sound cards. BUT, if in doubt, check the spec list provided in the sound card's manual. They are usually very clear and concise, and in many instances will provide a flow diagram explaining what connects to where and how, and outlining any changes needed for the connections used.
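The tallying above can be made concrete with a trivial Python sketch. The gear list here is purely hypothetical – the device names and counts are made up for illustration – but the method is the same: sum the ins and outs, then add one spare of each.

```python
# Hypothetical gear list: (device, inputs needed on the sound card, outputs needed)
gear = [
    ("vocal mic",        1, 0),
    ("hardware synth",   2, 0),
    ("FX unit send",     0, 2),
    ("FX unit return",   2, 0),
    ("monitor pair",     0, 2),
]

ins = sum(n_in for _, n_in, _ in gear)     # total inputs required
outs = sum(n_out for _, _, n_out in gear)  # total outputs required

# Add one spare of each, just in case
print(f"minimum spec: {ins + 1} inputs, {outs + 1} outputs")
```

With these made-up numbers you would be shopping for at least a 6-in/5-out interface – and notice how quickly the outputs add up once sends and monitors are counted, which is exactly what beginners tend to forget.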

Once you have an idea of what needs connecting to the sound card you need to factor in flexibility versus price. This might seem a no brainer for those trying to acquire a budget sound card but you’d be surprised at how many ways there are to achieve masses of connections without having to sell a body part. Let us glimpse at ways you can save money and still have tons of routing options:

Mic Preamps

Nowadays, we are seeing commercial-ready productions mushrooming out of home studios, thanks to the ever-evolving improvements in technology coupled with the release of cost-effective products. However, tracking (recording) is still an issue. Everyone wants to record instruments, vocals and so on in their homes, and when it comes to recording acoustic instruments you need a microphone, and with that comes the inevitable 'sigh, I need a preamp, oh, and what's this phantom power thingy my mate's on about?' Thus the preamp debate ensues. If you have a high-quality microphone, do you compromise its performance with a substandard preamp? I mention this because quite often a vocalist will buy a high-grade microphone and run it through the average preamps built into a budget sound card. I have always been a firm believer that your signal path is only as good as its weakest link, and in the budget world this will invariably be the sound card. Nowadays, half-decent microphones are very affordable, and you can buy budget sound cards with decent-quality preamps to run those microphones. Don't make the mistake of overspending on a device that is limited by any of the other links within the signal path. Marry devices to each other sensibly.

Mic preamps (mic pres) are what we refer to as scaling purchases, in that the more quality preamps you require built into the sound card, the more the price increases. Whereas the cost of 6 analogue line inputs against 2 analogue line inputs is not a huge hike in price, preamps don't fare as well. A quality preamp requires good engineering, and a single high-quality preamp invariably costs more than a string of average preamps packaged into a single sound card. Technology is continually improving and becoming more cost-effective; over time this will change and we will see more and more sound cards offering multiple high-quality preamps. In fact, we are already seeing this change in the industry. I use an Audient iD22 sound card and it features the same high-quality preamps as their popular ASP8024 console, and this is a £300 sound card – not exactly a lung-eBaying situation.

Because many people now record acoustic instruments at home, the number of mic preamps required escalates. Something as simple as recording an acoustic guitar requires a minimum of 2 microphones, and because you will invariably use capacitor (condenser) microphones, you will need 2 preamps with phantom power (nominally run at 48 V). That is not a problem, as many budget sound cards will accommodate 2 preamps with phantom power. Now, imagine you want to mic up an acoustic drum kit (as opposed to an electronic drum kit) – you might need in excess of 4 microphones. Let's complicate this scenario even further: you want to track an acoustic band live in a room, to capture the performance. Imagine how many microphones you will need. The best approach to multiple-mic recording scenarios is to get a dedicated mic preamp interface (strip) and connect it to your sound card using ADAT (Alesis Digital Audio Tape) or the line level inputs. There are a few decent mic preamp interfaces that have ADAT and can connect to the sound card using a simple ADAT optical input. Of course, you need something to take the analogue mic, instrument or line input and turn it into the digital ADAT output (A/D, and D/A in the other direction if you want more analogue outputs). The Behringer ADA8200 gives 8 channels of mic preamps with phantom power, is a firm favourite, and connecting it to your sound card via the optical input requires a single cable. Although preamp interfaces can connect to sound cards using line level inputs, using ADAT with digital preamp interfaces gives direct access to 8 channels of analogue inputs into your DAW (Digital Audio Workstation).

However, let us assume you don't want to go down this route and require something more flexible with more expansion possibilities. What do you do? The answer is cunningly simple – use a dedicated mixer. Before sound cards became affordable, I used a simple stereo sound card connected to an analogue mixer. I used the preamps in the mixer as both mic and line inputs, had acres of send/returns and auxiliaries for running dynamics and effects, control over each and every channel in terms of both level and equalisation, and so on. I connected the mixer to the line level inputs of the sound card. Nowadays, mixers come with even better specifications and even include USB, internal effects, automation, and so on. They can be connected to sound cards in a number of ways, and this level of flexibility and routing makes them very appealing for budget setups.

Line Level Inputs

These inputs are ideal for connecting other devices like mixers (see above), CD/DVD players, synthesizers, stand-alone (non-digital) preamps, audio amplifiers, effects, and dynamics. We differentiate line level inputs from mic preamps because microphones output a much lower-level signal and therefore need a preamp to bring the signal up to line level. Instrument inputs can be fed into line level inputs, but in some cases it might be better to use a DI (Direct Injection) box and feed its output into the mic preamp input of the sound card. However, nowadays even budget sound cards offer dedicated instrument inputs, so you don't need to worry too much about spending more money on DI boxes. In fact, technology has advanced so much that we now have multi-use inputs: you can take mic, line and instrument feeds on a single input, and at budget prices – the Focusrite Scarlett 2i2 3rd Gen being a perfect example of such technology, and it is ludicrously cheap. And again, let us not forget that the mixer-to-sound-card route is a powerful option to explore.
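The 'lower-level signal' point is easy to quantify. Professional line level is +4 dBu (dBu is referenced to 0.775 Vrms) and consumer line level is −10 dBV (referenced to 1 Vrms), whereas a microphone might put out only a few millivolts – hence the 40–60 dB of gain a preamp provides. A small sketch of the standard conversions:

```python
def dbu_to_vrms(dbu):
    """dBu is referenced to 0.775 Vrms."""
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_vrms(dbv):
    """dBV is referenced to 1.0 Vrms."""
    return 1.0 * 10 ** (dbv / 20)

pro = dbu_to_vrms(4)     # professional line level, +4 dBu -> ~1.23 Vrms
con = dbv_to_vrms(-10)   # consumer line level, -10 dBV -> ~0.316 Vrms
```

This is why plugging a consumer device into a +4 dBu input sounds quiet, and why a mic plugged straight into a line input barely registers at all.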

Line Level Outputs

Always factor in the outputs when specifying which sound card you want. I run two sets of monitors in my studio and use the generous number of line outs supplied by the Audient iD22 to connect a pair of Neumann KH 120 As and a single Avantone MixCube (for mono referencing). This is the simplest line-outs requirement I can think of, but bear in mind that we often use the line ins and outs to patch in effects and dynamics – using an external analogue summing box being a prime example, or patching in an analogue mixer as discussed previously. However, most modern sound cards will come supplied with dedicated send/returns to connect effects/dynamics. This is extremely helpful, as the additional line ins and outs can then be used to patch in all manner of hardware on top of the dedicated sends/returns.

Headphones

Most sound cards nowadays are perfectly adequate to run most headphones and will offer a clean enough signal. However, whether they are powerful enough for specific headphones depends on the power handling of the output. Check the impedance values in the sound card's specs and buy accordingly – but TBH, I have had no problems using the Audient headphone output to run my Audeze LCD-X headphones. If the headphone output cannot deliver enough power into higher-impedance headphones, then consider a dedicated headphone amplifier. The number of headphone outputs is equally important if you intend on running multiple headphone feeds, but with budget sound cards you will be hard pressed to find more than 2 headphone outputs. If you find you need multiple headphone feeds, it might be more prudent to invest in a multi-channel headphone amp, and there are lots of those available at budget prices. Again, don't forget the mixer route – mixers can provide cue monitoring for a number of performers, and with detailed control.
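The impedance point comes down to Ohm's law: for a given output voltage, the power delivered into the load is P = V²/Z, so high-impedance headphones receive far less power from the same output stage. A quick sketch, using a hypothetical 2 Vrms maximum output level (check your own sound card's spec sheet for the real figure):

```python
def power_mw(vrms, impedance_ohms):
    """Power delivered into a headphone load: P = V^2 / Z, in milliwatts."""
    return (vrms ** 2) / impedance_ohms * 1000.0

# Hypothetical sound card with a 2 Vrms maximum headphone output:
for z in (32, 150, 300):
    print(f"{z:>3} ohm load: {power_mw(2.0, z):.1f} mW")
```

With those made-up numbers, a 300 ohm pair gets roughly a tenth of the power a 32 ohm pair does, which is exactly why high-impedance studio headphones often want a dedicated amp.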

MIDI (Musical Instrument Digital Interface)

Although most modern MIDI keyboards and MIDI controllers connect directly to the computer via USB, there might be some instances where additional MIDI ports are required. Many songwriters still use MIDI sound modules and keyboard synthesizers, and if you feel the need for even more MIDI ports it can pay to acquire a dedicated MIDI interface. However, most sound cards will come supplied with a MIDI In/Out, and that should be sufficient if you are only running the odd MIDI device.

Digital

The beauty of digital input/output is that the audio does not have to pass through an additional stage of A/D (analogue to digital) conversion.

We have already touched on ADAT but what other digital connections are there?

S/PDIF (Sony Philips Digital Interface), is a two channel consumer format used predominantly for Hi-Fi equipment and utilises coaxial phono or, as we have seen already, Toslink optical connectors to connect to the sound card. DVD and CD players are popular examples of hardware that use this format.

RCA (Radio Corporation of America) is also referred to as ‘phono’ and is used to transmit audio and video signals. This type of connection is very common and can be found on many different consumer products: CD and DVD players, DJ decks, and so on. If the sound card is supplied with this type of connector then anything with an RCA output can be connected to it.

Word Clock

To successfully connect digital hardware to your sound card you will need Word Clock (WordClock) available on the sound card, and all sound cards have a built-in (internal) clock. This serves to connect and sync two or more digital devices together. In terms of buying a sound card, you need to be sure whether the sound card can alter the clock source and specify the sample rate used between the clocked devices. With most bus-powered budget sound cards there are limitations to the flexibility and configuration of the clock source. All sound cards have internal clocks and can be slaved to external devices that act as the master (ADAT being a prime example). So, read the sound card's specs and decide if you will need the various clocking options available. If you intend on running a lot of digital devices, all synced to a master clock, then it makes sense to acquire a dedicated clock that all the digital devices can sync to.

Monitor Controller

Whether you need a monitor controller is determined by how many monitors you will connect to the sound card and what level of control you want over each set of monitors. If you are running a modest home studio and do not have two sets of monitors then it really is a luxury and not something you need to spend extra money on. However, some sound cards, like the Audient ID22 I am using, will have a basic yet well specified monitor controller that I have found to be very useful in my setup.

Driver Reliability

I know many musicians/producers/studios that base their buying decisions on sound card driver reliability, and it is for this reason alone that many opt for the rock-solid drivers of RME sound cards. I can't blame them, especially if they earn a living from this vocation. With budget sound cards I don't tend to dwell on this feature too much. After all, some USB sound cards are class-compliant and don't need proprietary drivers – Mac/Windows come with generic USB drivers designed for such 'plug and play' devices.

If driver stability is critical to you then conduct exhaustive searches specific to your setup.

Control Panel and Effects/Dynamics      

Almost all sound cards incorporate hardware DSP (Digital Signal Processing) but do you need DSP Effects as well as the processing and routing functionality?

Sound cards come with software that is used as a matrix to configure the sound card. It is within this software, also known as the sound card's control panel, that additional requirements can be specified – from selecting sample rates to specifying buffer sizes and so on. The control panel is not just a glorified routing matrix; it acts as the hub for all the sound card's features.

Nowadays, some budget sound cards not only come with DSP effects alongside a well-detailed and well-specified control panel, but will also often come with bundled software to help you write and mix music. Whether you need DSP effects, or are happy to run your own bespoke VST purchases or external hardware effects/dynamics, comes down to taste. However, there are strong reasons to consider built-in DSP effects. In live situations, cue/foldback monitoring can include a modicum of reverb, or any effect for that matter, to help the performer feel a sense of space while performing. Or maybe you are conducting voice-over work and need a specific room ambience to enhance your vocals. Maybe you are vlogging and need real-time de-essing and gating for your vocals… The list is endless.

There is nothing wrong with looking for sound cards that have built-in DSP effects. They can only be advantageous to the user. If you don't need them, then don't use them, but having them there 'just in case' is not a deal breaker and won't add much to the sale price of the sound card.

Desktop or racked

I create video tutorials so for me having a desktop sound card is perfect for my needs. I can use the sound card’s volume controller as a monitor controller, adjust input and output gains simultaneously, DIM and CUT to taste, control the headphone output gain, patch in mics/external instruments, and so on, and all at my fingertips.

For musicians/producers that perform live or are on the go, a portable sound card might be a better solution.

Some home/professional studios prefer to use a patch bay hard-wired to the sound card, and in that instance they prefer the sound card to be out of sight and in a rack. All the routing is handled physically via the patch bay. It really comes down to workflow and portability.

The only question you need to ask yourself is:

Do you need continual access to the sound card, and does it need to be portable, or are you happy to wire it up and leave it alone?

A/D – D/A

Many moons ago budget sound cards were truly dire… and in every sense. The A/D and D/A were woeful and you would have to spend a small fortune to acquire a half-decent sound card. Nowadays, thanks to leaps in technological advancement, this is not an issue anymore. The quality and performance of converters in today's budget sound cards exceed most mid-range sound cards from 10 years ago. So, don't worry about whether your converters are good or not. Take it from me – THEY ARE.

Whether you want to eBay your remaining kidney to purchase a top-end converter is up to you. If the rest of the signal path is average then no amount of money spent on the converter will alter the fact that your signal path is er ‘average’. Always remember, your signal path is as strong as its weakest link.

I hope this article has been of help to you. I am sure there are many other areas I could have explored that would be tangentially related to the subject matter. However, I feel the above to be the most important facets to consider when buying a budget sound card.

Now go make some cool music!

Eddie Bazil
Samplecraze
Audio Production Hub

We now need to work on the Super Chunks.

These fix into the 2 main corners of the room, directly in front of and to the side of the bay area. They will be made from the damn Rockwool. The Chunks will be fixed from floor to ceiling, covering the whole corner areas.
As we bought exactly enough materials for this job, it is important to note that because Rockwool is not available in 1 metre x 1 metre dimensions in the UK anymore, we had to use some clever geometry, or rather Max did, to get the full-length Rockwool to be cut from 1 metre x 60 cm dimensions. This meant that we either had to lose the excess Rockwool from the corner shape cuts, or we had to use them. We chose to use them. So here is a diagram on how to cut the Rockwool slabs to make the triangular corner Chunks.

Read more

Acoustic Foam

So, we now know how to build a bass trap Panel.

Now the time has come to sort the foam out.

The hardest part of working with acoustic foam is handling it. Man, that stuff is so delicate that a single fingernail scratch can cost you a bundle. You think I’m kidding huh?

Acoustic foam is so light and fragile that we used Silicone Sealant to ‘glue’ the foam onto the ceiling. Why Silicone Sealant? Because you can scrape it off when you want to move the foams and leave no marks on your ceiling.
We also used the Silicone to ‘glue’ the foam onto the MDF panels for suspending from the ceiling. In fact, we used Silicone for all the foam fixing, be it directly onto the ceiling, walls or panels.
You can get Silicone Sealant from any hardware or DIY shop. Any standard sealant will do. You don’t need any fancy stuff. You can have it white or translucent, whichever suits you.

First off, we are going to look at the foam panels that will be suspended from the ceiling.

These are the 4 Sheets of Melamine Blue Foam. When taking these out of the packaging, you will find that they come in twos, one on top of the other.
DO NOT make the mistake of taking them apart, because if you do, it will take you many moons to get them back together again. As we are using them doubled up, there is no need to separate them.

Take the foam out of the box, making sure not to separate the sheets from each other. Lay them onto the table and make sure they are aligned. Use the Silicone Sealant and seal the whole perimeter of the foam, from edges to dividing lines.
Use one of the vented MDF panels and put it on the foam, making sure to align and match the edges. It's that simple. Now do this with the other one too.

Place the unused 2 ply on top of the first completed foam panel, and then place the second completed foam panel on top and leave to dry. Putting one on top of the other allows for a firmer seal and also prevents the panels from moving. A bit cool that huh?

Ok, let’s move on.

We used 2 Boxes of Melamine Procorner to use on the ceiling, wardrobe and all bay area edges. These were cut and joined to form a double-sided and semi (sort of) circular beefy foam.

Take the Procorners out of the packaging CAREFULLY. This is Box 1. Spray the sides on both Procorners, but make sure you spray the correct sides. If you are unsure, then line them up as in the next picture. The adhesive must be applied to both Procorners, as it is the type that bonds when two coated sides are brought together.
Align the two Procorners so that they form the shape above. Leave to dry.

Finally, use the Silicone Sealant again and seal all around the Procorner where it meets the 2 plyboards. Be sure to leave this to dry and then pick up the Procorner to make sure it is stuck to the 2 plyboards. Remember, we are not using glue, just Sealant. As the foam is so light the Sealant should hold the foam onto the 2 plyboards. It better do, because I have 2 of these just above my damn head.

These ‘boarded’ Procorners will then be fixed to the ceiling using screws through the 2 ply – this is why we cut the 2 ply to a larger size than the foam, to allow for drilling etc. Finally, as I have done, paint the boards white so they match the ceiling, unless you are a hippy and use funky colours in your house.

Now let’s get manly with the bay area edges, wardrobe and wardrobe corners.

To create the necessary shapes to fit the above, we had to cut and shape the Procorners…well, Max did.

The following needs to be done before you can start blading the foam. You need to measure the dimensions of where you want the corner foams to go. In my case, I had to make sure that the corner foams were at the edge of the bay area, joining both the ceiling and wardrobe wall. Your room might be shaped differently, so make sure you have your measurements correct before you start any cutting and Sealing.

Take a Procorner and cut it to shape and size. In this instance, Max the Myth created 2 Procorners, one for each corner above the bay area.
Because we have coving in the UK, the Procorner had to be shaped to accommodate the shape between the ceiling, corner wall, and the wardrobe.

Procorner shaped and cut to fit the corner.

Now we need to fix foam along the perimeter of the wardrobe where it meets the ceiling. We also need to fix foam within the wardrobe and mirror what we are doing on the outside.

We start by using the Procorners inside the wardrobe. These need to be sealed on the top and sides that meet the inner walls and ceiling of the wardrobe. Hold till the Sealant dries. As the wardrobe is in sections, we have to cut the foam to shape and fix and hold it into the wardrobe, all along its length.
Once the foam is in place, the outside of the wardrobe has to be addressed. The corner Procorner that was made earlier is now fixed into the corner.
As we did earlier, Seal the edges of the Procorner that will meet the ceiling and wardrobe wall, and fix into place. Place the foam gently, making sure it is aligned properly.
As you can see from the picture, both the external and internal foams meet and are matched.
Keep going along the outer wardrobe wall and seal with Sealant. Make sure to align the inner and outer. Keep going till you reach the end of the wall.
Once you reach the end of the wall, use another Procorner, cut to shape to form the join that goes around the wall onto the next wall. This corner is now complete.
The wardrobe foam is now complete, both internally and externally. Now you need to fix the foam all along the bay area edge where the wall and ceiling meet. Keep fixing the foam onto the ceiling and bay area edge, using the Sealant as the adhesive. Keep these aligned and keep going until you end up at the corner of the bay area. This is where the next corner Procorner will sit.
Fix the last corner Procorner at the far end of the foam on the bay area ceiling and wall. This will be symmetrical along the bay area.

In part 3 we will look at the Super Chunks!

Part 3

 

It all began 6 months ago.

I decided it was time to move house, office and studio, all in one hit. No one will ever know why I made this decision. In fact, after some serious rehab, I came to understand that people, normal people, do not make these kinds of decisions without thinking things through. But then if I was normal, I would be an accountant or football player instead of being a topless male car washer dude in my spare time. So, after another night at the BDMA (bad decision makers anonymous), I decided to bite the bullet, tense my buttocks and get on with it.

6 months later and I live in a lovely rural pad with birds n’ shit chirping away and where people say hello, instead of blading your ass for your mobile like in the big city. What does all this dithering have to do with this tutorial? Jack. It’s a way of bonding. So, let’s bond a touch……not too much, just a touch.

Ok, so house almost complete and the only thing left is to build the office/home studio. When I say ‘build’, I actually mean ‘convert an existing bedroom into a studio but tell people you are building a studio because it sounds a lot cooler than saying I’m working from a bedroom’. When converting a bedroom into a studio, certain criteria have to be met and the most important of these is ‘make sure people sleeping in the bedroom have found alternative accommodation’. Get this right and you’re on your way.

Why?

I am often quite amazed at how little regard home studio owners give to the environment they work in. Surely, as a producer/engineer/artist, you want to be able to hear both clearly and accurately? Otherwise, your mixes will never sound right and you will struggle for ages to cross that line whereby your mixes will sound true and good on all systems. I cannot stress how important it is to have a properly treated listening environment.

I often come across people in this industry that do not think twice about dropping a couple of gees on a synth/workstation and yet will not spend a penny towards optimising the listening environment that they record and mix in. So, I got to thinking. Ok, these people won’t spend a penny beyond the ‘egg carton solution’, so why not write a DIY tutorial on home/studio acoustics and keep it below £1000.

Yes people! You read correctly. Treat your room with proper acoustical material and keep it all under a grand. To add weight to this tutorial, I chose the hard option. I decided to personally go through this tutorial hands-on, designing, building and applying the damn acoustics myself and I was only able to achieve this with the help of the Acoustic Guru Max Hodges (more later). Needless to say, my wife left me at this point. I also lost a lot of friends due to a lack of socialising. I am very alone now.

As with all DIY projects, one needs an able and helpful friend, preferably one with good acoustic treatment knowledge, a sense of humour, and plenty of strength so I don't have to carry the damn Rockwool myself. Enter the one known as Max Hodges, also known as Max the Magnificent, Max the Marvel and Miraculous Max. Max Hodges is an expert when it comes to studio builds, both in terms of design and project management. Max's specialties include acoustics and soundproofing (sound treatment), full consultancy services, studio wiring and installs, and technology training. He is also a big bastard, so very useful to have around when debts need collecting.

I would like to take this opportunity in thanking Max for all his help and guidance during this project. I am hoping that by thanking him in public I can forego the dinner I owe him for all the help he has provided.

The Room

Before we go headlong into this project, and well before we start on materials required blah blah, we need to look at the room’s dimensions and shape. In the UK, most 1930s houses have bay windows on the ground floor, and to add to this ‘shape issue’, the ceilings are usually ‘coved’. This means you have a non-square, non-rectangular room, with coving where the ceiling meets the walls. The pictures that accompany this tutorial clearly show these characteristics.

I have included ‘before’ and ‘after’ photos to show the state of the room prior to plastering etc. I have done this to raise a modicum of compassion in you, and so you understand why you should NEVER EVER be at home when building work is done.

Before

Before

Bill the Plasterer

If you look at the first picture, you can see the coving on the top corner where the ceiling meets the walls. This can be a real headache when it comes to putting in corner traps etc, as you need to shape the foam or Rockwool to accommodate the coving. The second picture shows the state of the bay area. A bay is pretty much what it says: a semi-circular recess with windows running around it, facing outwards from the room.

The third picture shows Bill the Plasterer. The scratches on the walls behind him are the result of his reaction after I told him what I was going to pay him. This has, of course, no damn bearing on this tutorial.

The shape of the room is the most crucial aspect of any acoustic treatment project. An irregularly shaped room would be a nightmare to tame, but equally important is that the room must be balanced to provide a true stereo image. There is no point sitting against a wall with your speakers at head height and hoping for a natural stereo field if the room is not shaped and balanced correctly.

In the case of my room, I was faced with a bay area with windows running across the length of the bay, and a built-in wardrobe on one sidewall. This basically meant I had to sit in the bay area and try to balance the two sides of the room on either side of the bay. This all sounds good, as bay areas often provide natural bass trapping, and hell, one cupboard? Let’s get those doors off! It sounded so easy… but it’s not just the cupboard doors that need to come off: shelves, rails, wood, basically anything that impedes the sound’s travel or could resonate. The goal of this ‘butchering’ was to attain a central position for me to sit in, with equal space on either side, so that the sound would be stereo balanced. If I left more space on one side, the sound would be imbalanced. We are not just trying to achieve a well-treated acoustical environment but a balanced one as well. This is a mistake that so many home studio owners make: they cram themselves into one corner or side of the room and then wonder why there is a bias towards one channel in their mixes.

Once Max had had a good look and measured all the room’s dimensions, he came up with this diagram:

Ok, so a little explanation as to what is going on up there. This is an aerial plan, ie a bird’s eye view looking down on my ethnic ass. I am sitting right in front of the bay area and the ‘butchered’ cupboard is on my right (left as you look at the pic). Behind me is a wall with a door to its right. This wall is actually a chimney flue, where a fireplace once stood in grand and opulent fashion. The wall extends forward, which has given that particular area a number of corners where the ‘flue’ meets the back wall of the room. On the right of the diagram you can see that Max has labelled the materials and their respective dimensions.

The Materials and What They Do

I am not going to go into what acoustic treatment is, how a room behaves, nodes, low end, blah blah. There are countless resources on the net that cover all manner of theory and application.
Nope, this tutorial is simply a DIY project for those who want to improve their home studio environment. I will, however, list the materials and state what each is used for. All foam materials came from RPG. These guys make top quality foam. Of course, there are many others, some quite cheap, particularly from US manufacturers, but I like RPG’s stuff: top quality and design.

RPG material:

  • 4 Sheets of Melamine Blue Foam
  • 2 Boxes of Melamine Procorner
  • 2 Corner Blocks (300 x 300 x 300)

These will be used for acoustic absorption and diffusion tasks. These are highlighted as dark and light grey in the diagram.

We also bought:

  • 10 Tubes of Silicone Sealant to use for ‘gluing’ the foam onto the ceiling etc.
  • 2 x 2 ply boards (use preferably 6-9 mm in thickness), and 2 vented MDF boards.
  • 1 Can of Flooring Adhesive Spray.
  • 3 packs of chicken wire.
  • 4 pack of Hedex Extra Strength

Now the nasty stuff:

Rockwool :

45kg/m³. These come in packs of two slabs, each 1 m x 600 mm x 100 mm. We bought 12 packs.

The general consensus is to use the standard 60kg/m³, but we needed the double-layer packs and decided on the 45kg/m³. You can use either, depending on your requirements.
The Rockwool will be used for all the bass traps, the Super Chunk corner traps, and for further absorption and limp mass panels for the facing wall of the room.

Panels:

Materials required for building each panel, and there are 8 panels to build.

  • 2 x 2400 x 19 x 100 (typically 98) planed softwood battens
  • 2 x 2400 primed white MDF battens
  • 4 Triangular corner brackets
  • 4 Right angle corner brackets
  • 16 12mm wood screws, of size to match brackets
  • 3600 x 600 of Chicken wire.
  • 100+ Heavy Duty Staples.
  • 600 x 50 x 19 batten
  • 4000 x 1000 wrapping material. Beige raw cotton is what I used.
  • Garden Gloves
  • Wood Glue
  • Plant sprayer loaded with 10:1 water- PVA mix with 1 drop of washing up liquid.

This is used to apply a film of PVA to all the Rockwool, both for health reasons and for fixing it in place. The PVA solution bonds the external fibres. Rockwool is a nasty, airborne fibrous material that breaks and frays easily; the PVA keeps stray strands in place. BTW, if you touch Rockwool with bare skin, expect to be itching for a few years. If you accidentally touch the damn thing, immediately wash your skin with cold water and refrain from any sexual activities until the itching has gone away. Trust me on this. I couldn’t wear boxer shorts for a week. That is, of course, another story for another day.

These panels are made to house the Rockwool for the bass traps, broadband absorption panels, and limp mass panel.

Tools required:

  • Electric Drill with Pilot hole bit, and Tank Cutter Bit (hole cutter)
  • Electric Screwdriver
  • MitreSaw
  • Heavy-Duty Staple Gun
  • Jigsaw
  • Corner clamps if building single-handed
  • Workbench

The Procedure – Let it Begin

To successfully design, build and integrate acoustic treatment, you need to PLAN. The diagram is the blueprint for how things should look, but it doesn’t always work out that way, as the more reference testing you do the more adjustments are needed. For this project, the blueprint is actually quite simple. Building the bastards is another story entirely. But let’s kick off with what is actually being made.

The Bass Trap panels will adorn every corner and will be connected at an angle to cover as much surface area as possible and with air gaps left behind for better trapping qualities. The broadband absorbers will also be built as panels and symmetrically placed away from all the walls (maintaining an air gap) and in between the bass traps. They will also face each other on opposing walls. The Super Chunks will be placed from floor to ceiling in the two main corners directly adjacent to the monitoring positions. The Foam will be placed on the ceiling, both suspended and fixed, in the surround area of the bay area and the corners where the ceiling meets the bay edge. The layout will be symmetrical for the purposes of maintaining balance and not triggering any bias.

Building The Damn Panels

Step 1

Take the two 2400 x 19 x 100 lengths of timber and mark up at 1800/600 (from either end)

Set the mitre saw to cut at 45 degrees through the timber, at right angles across it. Cut the timbers so you have two pieces 1800 in length and two of 600. This is done so the battens meet perfectly at the corners and edges, flush and looking funky.

Mitre saw placed on workbench
1800 length batten being cut at 45 degrees
2 x 1800 and 2 x 600 length battens
Lining up 2 battens for further cutting
After lining up the battens, drive 2 screws right through both battens to keep them aligned and rigid. Then start drilling the holes that will form an empty groove. The spacing between the holes is subjective and down to you. On the longer 1800 battens, we cut 6 holes. The holes are created so that we can then saw between them to create an empty groove.
Max sawing the battens between the holes to create the grooves
The panels with the grooves cut out

The problem with most of these vocal removal plug-ins is the anomalies created through the process. There are a number of reasons why this process cannot be accomplished truthfully and completely.

However, before I launch into a diatribe about why this method is not too great, it might actually help if I described the process.

1. Open the stereo file in an audio editor like Sound Forge, Wavelab etc.

2. Select and highlight one channel of the two channels (usually the right channel).

3. Select ‘Flip or Invert’ from the edit menu.

4. Now ‘Sum’ the file to Mono.

This process is also called the Karaoke Effect.

Basically, what is happening here is that you are inverting the polarity of one channel (also known as phase reversal) and then mixing the two channels down to a single summed channel. Anything that is identical in both channels cancels out, and this is why the vocals disappear.

Generally, in most recordings, the lead vocal, the kick drum and bass will invariably be recorded in the centre and these will disappear through the process above.
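The four steps above can be sketched in a few lines of Python with NumPy. The sine-wave ‘vocal’ and ‘guitar’ are toy stand-ins for a real stereo file, purely for illustration:

```python
import numpy as np

def remove_centre(stereo):
    """Karaoke effect: invert one channel, then sum to mono.
    Anything identical in both channels (centre-panned) cancels out."""
    left = stereo[:, 0]
    inverted_right = -stereo[:, 1]        # step 3: flip/invert the right channel
    return (left + inverted_right) * 0.5  # step 4: sum to mono

# Toy 'mix': a centre-panned vocal plus a hard-left guitar
t = np.linspace(0, 1, 1000)
vocal = np.sin(2 * np.pi * 220 * t)   # identical in both channels
guitar = np.sin(2 * np.pi * 110 * t)  # left channel only
mix = np.stack([vocal + guitar, vocal], axis=1)

out = remove_centre(mix)
# The centre-panned vocal cancels completely; half the guitar remains.
```

Note how the hard-left guitar survives (at half level): this is exactly why panned backing vocals and stereo reverb tails remain audible after the trick.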

The problem with this method is that you cannot remove panned sounds, particularly backing vocals panned across the stereo field; only centre-panned material is removed, so you may well be left with sections of the recorded material. Additionally, remember that effects are often panned across the stereo field, particularly on vocals; reverbs come to mind. Sadly, the process and its algorithms can have a more destructive effect than a useful one. Even more importantly, removing vocals from a final mastered stereo mix would still not work properly, because the effects and dynamics used to create the mix have their own transients. If you were to use a hardware mixer with a global reverb running on the master stereo buss and then mute the source signals, you would still hear the wet signal, i.e. the processed reverb. In a final mix, the algorithms would have to take into account every effect and dynamic process used in order to extrapolate the vocal frequencies, and that cannot be done without a destructive outcome for the other frequencies in the mix. The whole essence of effects is to give the illusion of space and spread, so this colouration would have to be accommodated in the coding, as would any dynamics used. Your code would now have to accommodate almost every type of process available, plus recognise the artifacts created along the way.

However, ignore my whinging and try the process for yourself. But please bear in mind that ‘keeping the vocals’ and removing everything else cannot work, because of the stereo panning of the other sources. For that to work, all the other sound sources would need to be identical in both channels, with the vocals panned to one side.

Software used:

Sound Forge

Relevant content:

Total and Partial Phase cancellation

Using Phase Cancellation in Sound Design

Slicing, or chopping, samples is so common nowadays, and in so many genres, that it has become recognised as a genuine engineering process. It is so accepted and widely used that most software manufacturers incorporate the function in their software designs.

The most common slicing tools are Recycle, Phatmatik Pro and Guru, and most DAWs and even audio editors allow for slicing of samples.

Let us examine what slicing actually is.

Simply put, slicing (chopping) is a process whereby a piece of audio recording is taken and cut into shorter segments. This then allows the user to use these segments (slices) to create their own arrangements in their compositions. The beauty of this process is one of versatility and flexibility. Additionally, you can drop Rex (the Recycle format) files into your audio sequencer and match them to any tempo without having to time-stretch etc.

Slicing was originally conceived to allow users to slice drum loops into smaller sound components (kicks, hi-hats, snare etc) and then edit and rearrange the slices to create a new pattern from the original. In fact, sampler manufacturers like Emu had their own generic function, the ‘Beat Munger’, which effectively sliced any sample and allowed the user to treat the slices like any other sample. Akai (MPC Chop Shop), Roland etc all incorporate these functions into their hardware samplers/workstations nowadays, so it has become an almost mandatory tool to provide.

Software manufacturers were also on the scene at a very early stage and Propellerheads created possibly the most popular and used slicing software called Recycle. This became so popular that many manufacturers now allow for their software to import the Recycle format called REX. Rex files are simply slices with midi data attached to them. In effect, you can load the midi file that was used to trigger the slices into a pattern format, whilst simultaneously loading the slices.

Slicing

Recycle works in much the same way as audio editors when it comes to detecting and creating hitpoints. Hitpoints are merely markers that the user places in any part of the audio sample. In Cubase, when we wanted to extract a groove template, hitpoints were created by a process that searched for and detected peak values. You do not have to rely solely on the software finding peak values and assigning hitpoints to them; you can manually input hitpoints anywhere you want. However, it is always beneficial to let the software search and mark the hitpoints. This saves time, particularly when dealing with a large audio sample with lots of peak values. You can then add or remove hitpoints to your heart’s content. Almost all such software (and some hardware) will allow you to control the detail of the search and marking of these hitpoints using a function called ‘Sensitivity’. Sensitivity allows the software to search for lower peak values. In fact, it can detect and mark just about every single peak value, if that is what you want. The basic rule of thumb is: the higher the sensitivity value, the more slices you end up with.
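To make the sensitivity idea concrete, here is a toy hitpoint detector in Python/NumPy. The threshold mapping and the minimum gap between hits are my own simplifications; real slicers use far more sophisticated transient detection:

```python
import numpy as np

def find_hitpoints(audio, sensitivity):
    """Mark sample positions where the level exceeds a threshold.

    sensitivity: 0..1 - higher values lower the detection threshold,
    so more peaks are marked and you end up with more slices.
    """
    threshold = (1.0 - sensitivity) * np.max(np.abs(audio))
    hits = []
    last = -1000  # minimum gap so one transient yields one hitpoint
    for i, level in enumerate(np.abs(audio)):
        if level >= threshold and i - last > 100:
            hits.append(i)
            last = i
    return hits

# A toy 'loop': silence with two transients of different strength
loop = np.zeros(2000)
loop[200] = 1.0   # loud hit (kick)
loop[1200] = 0.4  # quieter hit (hat)

print(find_hitpoints(loop, 0.5))  # only the loud hit: [200]
print(find_hitpoints(loop, 0.7))  # both hits: [200, 1200]
```

Raising the sensitivity from 0.5 to 0.7 lowers the threshold enough to catch the quieter transient as well, which is exactly the rule of thumb above.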

But always bear in mind that your own audio editor will have slicing tools, and even sequencing packages with audio editing features will provide them. So, always check your software (or hardware) to make sure you have this function, to save you spending money on a dedicated slicing program. Personally, I use Recycle even though I own plenty of other slicing software and hardware.

The following is a video tutorial showing how to use Propellerhead Recycle software:

Chopping/Slicing Beats Using Recycle

Quantise

Quantisation is the process of aligning a set of musical notes to conform to a grid. When you want to quantise a certain group of MIDI notes in a song, the program moves each note to the closest point on the grid. Invariably, the quantise value determines where on the grid the notes are moved to.

Swing: Allows you to offset every second position in the grid, creating a swing or shuffle feel. Swing is actually a great quantise weapon. It is most commonly used by the Hip Hop fraternity to compensate for the lack of a ‘shuffle’ feel to the beat. The amount of swing applied to the quantise is determined in percentages. The more swing, the higher the percentage applied.

It is important to remember that the slower the tempo of your track, the more syncopated the music will sound if low value quantise is used. This has caused problems for many songwriters and they usually compensate by using higher quantise values or working in double time (ie using a tempo of 140bpm for a song that is meant to be in 70bpm). Working in double time is the equivalent of using half the quantise value. For example, a song in 70bpm written in 140bpm can use a quantise value of 16, which would equate to using a quantise value of 32 when using the original 70bpm (beats per minute) tempo.
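The grid snapping and the double-time arithmetic can be sketched in a few lines of Python. The 480 PPQ resolution is a common convention, not tied to any particular sequencer:

```python
def quantise(ticks, grid):
    """Move each note-on time (in ticks) to the nearest grid line."""
    return [round(t / grid) * grid for t in ticks]

PPQ = 480                 # ticks per quarter note (a common MIDI resolution)
sixteenth = PPQ // 4      # 1/16-note grid = 120 ticks
thirty_second = PPQ // 8  # 1/32-note grid = 60 ticks

notes = [0, 115, 250, 365]  # sloppily played hits
print(quantise(notes, sixteenth))  # -> [0, 120, 240, 360]

# Double-time equivalence: one 1/16 grid step at 140 bpm lasts the
# same number of seconds as one 1/32 grid step at 70 bpm.
def seconds_per_step(bpm, grid_ticks):
    return (60.0 / bpm) * grid_ticks / PPQ

assert abs(seconds_per_step(140, sixteenth) - seconds_per_step(70, thirty_second)) < 1e-12
```

The final assertion is the 140 bpm/70 bpm example from above expressed as arithmetic: halving the tempo doubles the duration of each tick, so the same absolute timing resolution needs twice the quantise value.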

The swing function allows for a more ‘offset’ feel when quantising and makes the music sound more human as opposed to robotic. In fact, swing is such a potent tool that the Dance heads are now using it to give a little life to hi-hat fills etc.

Grid and type:

Grid allows you to pick a note length (for example: 1/4, 1/8, and so on) to use for the resolution, while Type sets a modifier for the note length: Straight, Triplet or Dotted.  I will not go into this as you would need to understand about note lengths etc, but what I will say is that the triplet is extremely handy when programming drums and particularly hi-hat patterns that require fast moving fills.

Random Quantise:

Another feature that can be useful to make your performances sound more in time without being completely mechanical is Random Quantise. Here you specify a value in ticks (120ths of sixteenth notes) so that when a note is quantised to the nearest beat specified by the other parameters in the quantise template, it is offset by a random amount from zero to the value specified by the Random Quantise setting. Basically, this takes away the rigidity of syncopated rhythms, particularly when dealing with hi-hats. It allows for a ‘random’ element to be used, much akin to a drummer’s human timing.
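Swing and random quantise can be bolted onto the same grid-snapping idea. A hypothetical sketch in Python; the percentage convention (50 = straight) mirrors how many sequencers express swing, but the exact maths varies between products:

```python
import random

def quantise_with_feel(ticks, grid, swing_pct=50, random_ticks=0, seed=None):
    """Quantise to a grid, then apply swing and a random offset.

    swing_pct: 50 = straight; values above 50 push every second grid
    position later, giving a shuffle feel.
    random_ticks: maximum random offset either way, loosening the
    rigid machine feel much like a drummer's human timing.
    """
    rng = random.Random(seed)
    out = []
    for t in ticks:
        step = round(t / grid)
        q = step * grid
        if step % 2 == 1:  # every second grid position gets the swing
            q += int(grid * (swing_pct - 50) / 50)
        if random_ticks:
            q += rng.randint(-random_ticks, random_ticks)
        out.append(q)
    return out

hats = [0, 120, 240, 360]  # straight 1/16 hats at 480 PPQ
print(quantise_with_feel(hats, 120, swing_pct=62))  # -> [0, 148, 240, 388]
```

With 62% swing, the off-beat hats land 28 ticks late while the on-beats stay put; add a small `random_ticks` value on top and each pass through the pattern lands slightly differently.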

Most software will come with many additional tools to refine the quantise function and its settings. Humanise, iterative, freeze etc all go to giving the user more detailed editing power. For the sake of this e-Book, I am keeping it simple and only using the functions that most will adopt.

Preparation and Process

Last month we touched on the digital process.

This month we are going to talk about the preparation, the signal path, dos and don’ts and what some of the terminologies mean.

The most important part of the sampling process is preparation. If you prepare properly, then the whole sampling experience is more enjoyable and will yield you the optimum results.
Throughout this tutorial, I will try to incorporate as many sampler technologies as possible, and also present this tutorial side by side, using both hardware and software samplers.

So let us start with the signal path. Signal, being the audio you are recording and path, being the route it takes from the source to the destination.

The signal path is the path the audio takes from its source, be it a turntable, a synthesizer etc, to its final destination, the computer or the hardware sampler. Nothing is more important than this path and the signal itself. The following is a list of guidelines. Although general, it is not scripture; we all know that the fun of sampling lies in breaking the so-called rules and coming up with innovative methods and results. However, the guide is important as it gives you an idea of what can cause a sample to be less than satisfactory when recorded. I will list some pointers and then go into more detail about each one.

  • The more devices you have in the signal path, the more the sample is degraded and coloured. The more devices in the path, the more noise is introduced into the path, and the headroom is compromised depending on what devices are in the path.
  • You must strive to obtain the best possible S/N (signal to noise ratio), throughout the signal path, maintaining a hot and clean signal.
  • You must decide whether to sample in mono or stereo.
  • You must decide what bit depth and sample rate you want to sample at.
  • You need to understand the limitations of both the source and destination.
  • You need to understand how to set up your sampler (destination) or sound card (destination) to obtain the best results.
  • You need to understand what it is that you are sampling (source) and how to prepare the source for the best sampling result.
  • If you have to introduce another device into the path, say a compressor, then you must understand what effect this device will have on the signal you are sampling.
  • You must understand what is the best way to connect the source and destination together, what cables are needed and why.
  • You need to calibrate the source and destination, and any devices in the path, to obtain the same gain readout throughout the path.
  • You need to understand the tools you have in the destination.
  • Use headphones for clarity of detail.

Basically, the whole process of sampling is about getting the audio from the source to the destination, keeping the audio signal strong and clean, and being able to listen to the audio in detail so you can pick out any noise or other artifacts in the signal.

In most cases, you can record directly from the source to the destination without having to use another device in the path. Some soundcards have preamps built into their inputs, along with line inputs, so that you can directly connect to these from the source. Hardware samplers usually have line inputs, so you would need a dedicated preamp to use with your microphone, to get your signal into the sampler. The same is true for turntables. Most turntables need an amp to boost the signal. In this instance, you simply use the output from the amp into your sampler or soundcard (assuming the soundcard has no preamp input). Synthesizers can be directly connected, via their outputs, to the inputs of the hardware sampler, or the line inputs of the soundcard.

As pointed out above, try to minimise the use of additional devices in the path. The reason is quite simple: most hardware devices have an element of noise, particularly those with built-in amps or power supplies, and introducing them into the signal path adds noise to the signal. So, the fewer devices in the path, the less noise you have. There are, as always, exceptions to the rule. For some of my products, I have re-sampled my samples through some of my vintage compressors, and I have done it for exactly the reasons I just gave as to why you should try not to. Confused? Don’t be. I am using the character of the compressors to add to the sample’s character. If noise is part of the compressor’s character, then I will record that as well. That way, people who want that particular sound, influenced by the compressor, will get exactly that. I have, however, come across people who sample with a compressor in the path just so they can have as strong and pumping a signal as possible. This is not advised. You should sample the audio with as much dynamic range as possible. You need to keep the signal hot, ie as strong and as loud as possible without clipping the soundcard’s input meters or distorting in the case of hardware samplers. Generally, I always sample at a level 2 dB below the maximum input level of the sampler or soundcard, ie 2 dB below 0. This allows for enough headroom should I choose to then apply dynamics to the sample, as in compression etc. Part 1 of these tutorials explains dynamic range and dBs, so I expect you to know this. I am a vicious tutor, aren’t I? He, he.
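As a rough illustration of that ‘2 dB below maximum’ guideline, here is a small Python sketch (the helper names and sample values are my own, purely for demonstration) that measures a signal’s peak relative to digital full scale and scales it to a target level:

```python
import math

def peak_dbfs(samples):
    """Peak level of a signal relative to digital full scale (0 dBFS)."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak)

def gain_for_target(samples, target_dbfs=-2.0):
    """Linear gain needed to land the peak at the target level,
    e.g. 2 dB below clipping as suggested above."""
    return 10.0 ** ((target_dbfs - peak_dbfs(samples)) / 20.0)

take = [0.0, 0.25, -0.5, 0.1]       # peak at 0.5, i.e. about -6 dBFS
print(round(peak_dbfs(take), 2))    # -> -6.02
g = gain_for_target(take, -2.0)
scaled = [s * g for s in take]
print(round(peak_dbfs(scaled), 2))  # -> -2.0
```

The 2 dB of headroom left above the peak is what lets you apply compression or other dynamics later without immediately running into clipping.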

My set up is quite simple and one that most sampling enthusiasts use.

I have all my sources routed through to a decent quality mixer, then to the sampler or my computer’s soundcard. This gives me great routing control, many ways to sample and, most important of all, better control over the signal. The huge bonus of using a mixer as the heart of the sampling path is that I can apply equalisation (EQ) to the same source and record multiple takes of the same sample with different EQ settings. This way, from the same sample, I get masses of variety. The other advantage of using a mixer is that you can insert an effect or dynamic processor into the path and have more control over the signal than simply plugging the source into an effects unit or a compressor.

Headphones are a must when sampling. If you use your monitors (speakers) for referencing while sampling, a great deal of the frequencies get absorbed by the environment, so it is always hard to hear the lower or higher noise frequencies. Using headphones, either on the soundcard or the sampler, you hear only the signal and not the environment’s representation of it. This makes finding noise or other artifacts much easier.

The decision to sample in mono or stereo is governed by a number of factors, the primary one being memory. All hardware samplers have memory restrictions, the amount governed by the make and model of the sampler. Computer sampling is another story entirely, as you are only restricted by how much RAM you have in your computer. A general rule of thumb is: one minute of 44.1 kHz sample rate audio (an audio bandwidth of 20 kHz using the Nyquist theorem, which I covered in Part 1), in stereo, equates to about 10 megabytes of memory. Sampling the same audio in mono gives you double the time, ie 2 minutes, or takes up 5 megabytes for the same minute.
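That rule of thumb is easy to verify. A quick Python sketch of uncompressed PCM memory use, assuming 16-bit samples (which is what the ‘10 megabytes per stereo minute’ figure implies):

```python
def sample_memory_mb(seconds, rate=44100, bit_depth=16, channels=2):
    """Memory used by uncompressed PCM audio, in megabytes."""
    bytes_total = seconds * rate * (bit_depth // 8) * channels
    return bytes_total / 1_000_000

print(round(sample_memory_mb(60), 1))              # stereo minute -> 10.6
print(round(sample_memory_mb(60, channels=1), 1))  # mono minute   -> 5.3
```

Halving the channel count, like halving the sample rate, halves the memory: exactly why memory-starved hardware samplers push you towards mono.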

So, always bear your sampler’s memory restriction in mind. Another factor that governs the choice of mono over stereo is whether you actually need to sample that particular sound in stereo. The only time to sample in stereo is when there is an added sonic advantage in doing so, particularly if a sound is fuller and has varying sonic qualities on the left and right sides of the stereo field that you need to capture. When using microphones on certain sounds, like strings, it is often best to sample in stereo. You might be using 3 or 4 microphones to record the strings, but then route these through your mixer’s stereo outputs or subgroups to your sampler or soundcard. In this case, stereo sampling will capture the whole tonal and dynamic range of the strings. For those using samplers with tight memory restrictions, sample in mono and, if you can tolerate it, at a lower sampling rate. But make sure the audio is not compromised.

At this point, it is important to look at what it is you are sampling and whether you are using microphones or direct sampling, ie the outputs of a device into the inputs of the sampler or soundcard. For sounds like drum hits, or any sound that is short and not based on a key or pitch in the way instrument or synthesizer sounds are, keep it simple and clean. But what happens when you want to sample a sound from a particular synthesizer? This is where the sampler needs to be set up properly, and where the synthesizer has to be set up to deliver the best possible signal: one that is not only clean and strong but can be easily looped, placed on a key and then spanned. In this case, where we are trying to sample and create a whole instrument, we need to look at multi-sampling and looping.

But before we do that, we need to understand the nature of what we are sampling and its tonal qualities. Invariably, most synthesizer sounds have a huge amount of dynamics programmed into them. Modulation, panning, oscillator detunes etc are all in the sound you are trying to sample. In the case of analog synthesizers, it becomes even harder, as there is so much movement and tonal variance that sampling becomes a nightmare. So, what do we do? We strip away all these dynamics so that we are left with the original sound, uncoloured by programming. With analog synthesizers, we will often sample each and every oscillator and filter. By doing this, we make the sampling process much easier and more accurate. Remember that we can always program the final sampled instrument to sound like the original. By taking away all the dynamics, we are left with simpler, constant waveforms that are easier to sample and, more importantly, easier to loop.

The other consideration is one of pitch/frequency. Sampling one note is fine, but trying to create a 5-octave preset from that one sample would be a nightmare, even after looping it perfectly. There comes a point where a looped sample falls out of pitch and results in a terrible sound, full of artifacts and out-of-key frequencies; for each octave up, the frequency doubles. The way around this problem is multi-sampling. This means we sample more than one note of the sound, usually every third or fifth semitone. By sampling a collection of these notes, we have a much better chance of recreating the original sound accurately. We then place these samples in their respective ‘slots’ in the instrument patch of the sampler or software sampler, so a sampled C3 note goes into the C3 slot on the instrument keyboard layout. Remember, we do not need to sample each and every note, just a few. A C3 sample will still sound accurate from a few semitones down to a few semitones up, so we spread that one sample across that range. These spreads, or zones, are called keygroups: Emu call them zones and Akai call them keygroups. Where one sample’s span ends, we place the next sample, and so on, until the keyboard layout is complete. This saves us a lot of hard work, in that we don’t have to sample every single note, and it also gives us a more accurate representation of the sound being sampled. However, multi-sampling takes up memory. It is a compromise between memory and accurate representation that you need to decide on.
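To put numbers to the ‘frequency doubles per octave’ point and the keygroup spanning above, here is a small Python sketch. The MIDI note numbers, the span of 2 semitones and the helper names are my own illustration, not any particular sampler’s layout:

```python
def pitch_ratio(semitones):
    """Playback-rate ratio to shift a sample by n semitones:
    frequency doubles every octave (12 semitones)."""
    return 2.0 ** (semitones / 12.0)

def build_keygroups(sampled_roots, span=2):
    """Map each sampled root note (MIDI number) to the key zone it
    covers: 'span' semitones either side of the root."""
    return {root: range(root - span, root + span + 1) for root in sampled_roots}

# Multi-sample every 5th semitone upwards from C3 (MIDI 60):
roots = [60, 65, 70]
groups = build_keygroups(roots, span=2)
print(list(groups[60]))  # -> [58, 59, 60, 61, 62]

# An octave up doubles the playback frequency:
print(pitch_ratio(12))   # -> 2.0
```

Stretching a single sample a few semitones keeps `pitch_ratio` close to 1, so the timbre holds up; stretch it an octave and the ratio hits 2, which is where the artifacts start.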

There are further advantages to multi-sampling, but we will come to those later. For sounds that are more detailed or complex in their characteristics, the more samples are required. In the case of a piano, it is not uncommon to sample every second or third semitone and also to sample the same notes with varying velocities, so we can emulate the playing velocities of the piano. We will sample hard, mid and soft velocities of the same note and then layer these and apply all sorts of dynamic tools to try to capture the original character of the piano being played. As I said, we will come to this later.

An area that is crucial is calibration. You want the sound you are trying to sample to show the same level on the mixer’s meters as on the sampler’s or soundcard’s meters. If there is a mixer in the path, you can easily use the gain trims on the channel the source is connected to, to match the level of the sound to the readout on the input meters of the sampler or soundcard. If there is no mixer in the path, then you need your source sound at maximum, assuming there is no distortion or clipping, and your sampler’s or soundcard’s input gain at just below 0 dB. This is a good hot signal. If you had it the other way around, with the source level too low and the sampler’s or soundcard’s input gain raised to compensate, you would be raising the noise floor. This would result in a noisy signal.

The right cabling is also crucial. If your sampler’s line inputs are balanced, then use balanced cables; don’t use phono cables with jack converters. Try to keep a reasonable distance between the source and destination, and if you work in an environment with RF interference, caused by amps, radios, antennae and so on, use shielded cables. I am not saying you must use expensive brands; just use correctly matched cables.

Finally, we are left with the tools that you have in your sampler and software sampler.

In the virtual domain you have far more choice in terms of audio processing and editing tools, and they are far cheaper than their hardware counterparts. Sampling into your computer will therefore afford you many more audio editing tools and options, whereas in a hardware sampler the tools are predefined.

In the next section, we will look at some of the most common tools used in sampling.

Additional content:

Preparing and Optimising Audio for Mixing

Normalisation – What it is and how to use it

Topping and Tailing Ripped Beats – Truncating and Normalising

Multiband compressors (also known as MB or MBC) divide the incoming audio signal into multiple bands, with each band compressed independently of the others.

The beauty of this is best understood by contrast: with a full-band compressor the whole signal is treated, so when a peak is detected the entire signal is compressed and all frequencies are subjected to gain reduction, whether they need it or not.

Multiband compression only compresses the chosen frequency bands, so the result is more fluid and less abrupt. Instead of one peak triggering compression of the entire signal, the multiband allows individual bands to be compressed. On some compressors, you even have the option of selecting bands that will not undergo any treatment. In essence, a multiband compressor consists of a set of filters that splits the audio signal into two or more frequency bands. After passing through the filters, each frequency band is fed into its own compressor, after which the signals are recombined at the output.
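The split-compress-recombine structure described above can be sketched as a toy two-band version. This is purely illustrative and greatly simplified: it uses a basic one-pole filter pair as the crossover and a hard-knee, sample-by-sample gain reduction, whereas real multiband compressors use steeper, phase-matched crossover filters and smoothed attack/release envelopes.

```python
import math

def split_bands(signal, crossover_hz, sample_rate=44100.0):
    """Split into (low, high) bands with a one-pole lowpass at the crossover."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * math.pi * crossover_hz)
    alpha = dt / (rc + dt)
    low, high, prev = [], [], 0.0
    for x in signal:
        prev = prev + alpha * (x - prev)   # one-pole lowpass
        low.append(prev)
        high.append(x - prev)              # complement acts as the highpass
    return low, high

def compress(band, threshold, ratio):
    """Hard-knee compression: scale back samples whose level exceeds threshold."""
    out = []
    for x in band:
        level = abs(x)
        if level > threshold:
            target = threshold + (level - threshold) / ratio
            x *= target / level
        out.append(x)
    return out

def two_band_compress(signal, crossover_hz, low_settings, high_settings):
    low, high = split_bands(signal, crossover_hz)
    low = compress(low, *low_settings)     # tame a boomy low end...
    high = compress(high, *high_settings)  # ...without touching the top end
    return [l + h for l, h in zip(low, high)]
```

Because each band has its own threshold and ratio, a kick-drum peak in the low band never pushes the high band into gain reduction, which is the whole point of the technique.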

The main advantage of multi-band compression is that a loud event in one frequency band won’t trigger gain reduction in the other bands.

Another feature of the multiband compressor is the crossover points. These are crucial, as they give you control over where each frequency band begins and ends. Setting the crossover points is the heart of the compressor and essential to processing the right part of the frequency spectrum with the right settings. For example: if you are treating the vocals in the mid-range but place your low-end crossover too far into the middle range, then the low-end compression settings will also affect the mid-range vocals.

Multiband compression can be either a friend or an enemy. It all comes down to how and when you use it. It can be a great tool for controlling problematic frequencies, or for boosting certain ranges in isolation from others. I tend to use multiband compressors to rescue poor stereo mixes; with independent crossover frequencies, thresholds and ratios for each band, I can process far more accurately.

However, use with care.

Relevant content:

Multiband Compression – what is it and how do you use it

Compression Masterclass