When we talk about distortion, the image invariably conjured up is that of a guitarist thrashing away with acres of overdrive. However, I am more interested in covering harmonic and non-harmonic distortion in subtle ways, using non-linear systems, rather than a specific overdriven effect like guitar distortion or a fuzz box.

In an analog system, overdrive is achieved by adding a great deal of gain to part of the circuit path. This form of distortion is most commonly associated with overdriving a non-linear device. But it doesn't end there: any alteration made to audio being fed into a non-linear device is regarded as distortion, even though the term is quite a loose one and not too helpful. The idea is to create harmonic distortion, and this is the area I want to explore in this chapter.

Harmonic distortion means that additional harmonics are added to the original harmonics of the audio being processed. As all sound carries harmonic content, and this is what defines its timbre, it makes sense that any additional harmonics will alter the sound quite dramatically. Harmonic distortion is musically related to the original signal being treated, and the sum of the added and original harmonics makes up the resultant harmonic content. The levels and relative amounts of the added harmonics give the sound its character, and for this we need to look at the two main types of harmonic distortion: odd-order and even-order harmonics. The exception is digital clipping distortion, which sounds unpleasant because the components it adds are not harmonically related to the original signal.

Harmonics are simply whole-number multiples of the fundamental frequency of a sound, and the balance of harmonics within a sound defines the sound's timbre and character. Even-order harmonics are even multiples of the source frequency (2, 4, 6, 8 etc.) and odd-order harmonics are odd multiples (3, 5, 7, 9 etc.) of the source frequency (fundamental).
Even-order harmonics (2, 4, 6 etc.) tend to sound more musical, and therefore more natural and pleasing to the ear, and can be used at higher levels because the ear still recognises the 'musical' content. Odd-order harmonics tend to sound a little grittier, deeper and richer, and cannot be used as abundantly as even-order harmonics because at higher levels the ear perceives the content as harsh, resulting in an unpleasant effect. But there are uses for both, and depending on how the harmonics are treated, some wonderful results can be achieved.
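The odd/even split can be demonstrated in a few lines of code. The following is a minimal numpy sketch of my own (not from the book): a symmetric non-linearity such as tanh generates only odd-order harmonics from a pure sine, while adding an asymmetric (squared) term introduces even-order harmonics as well. The drive amounts are arbitrary illustrative values.

```python
import numpy as np

fs = 48_000                        # 1 second at 48 kHz, so FFT bin k sits at k Hz
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)   # a pure 1 kHz sine: no harmonics of its own

# A symmetric (odd) non-linearity such as tanh adds only odd-order harmonics.
odd = np.tanh(2.0 * x)

# Adding an asymmetric (squared) term introduces even-order harmonics too.
even = x + 0.3 * x ** 2

def harmonic_level(signal, n):
    """Magnitude of the n-th harmonic of the 1 kHz tone."""
    return np.abs(np.fft.rfft(signal))[1000 * n]
```

Comparing `harmonic_level(odd, 2)` against `harmonic_level(odd, 3)` shows the tanh curve leaves even harmonics essentially at zero, while the asymmetric curve puts clear energy at the 2nd harmonic.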

Extract taken from the Creative Effects eBook

If you prefer the visual approach then try this video tutorial:

Harmonic Distortion – Odd and Even Harmonics

Whenever I have been called into a studio to assist a producer in managing frequencies for pre-mastering, I have always been surprised that people want to attribute a fixed frequency range to the low end of a track. Every track has its own qualities and criteria that need addressing, based on the entire frequency content of the track, before a range can be attributed to the low end.

I have come across producers affording insights into some interesting low-end frequency ranges, and these ranges are relevant only to the context in which the track resides. If we are talking about a heavy Hip Hop track that uses 808 kicks supplemented with sine waves, then the low end of that track will vary dramatically from that of a mainstream EDM (electronic dance music) track that incorporates stronger kicks supplemented with ducked bass tones.

So, working on the premise of a fixed frequency range will not help you at all. What is far more important is to understand both the frequencies required for the low end of a specific track and the interaction of these frequencies within themselves and with the other elements/frequencies that share this particular range. The phrase 'within themselves' might sound strange, but this is the exact area of the physics of mixing and managing low end that we need to explore. When we come to the chapters that pertain to both the harmonic content of a specific frequency range and the manipulation of those frequencies using advanced techniques, all will become clearer.

To fully understand how to manage low-end frequencies we need to look at frequencies, some of the problems encountered with manipulating frequencies, and some of the terminology related to it, in far more detail.


We use the term Timbre to describe the tonal characteristics of a sound. It is simply a term used to distinguish the differences between sounds and is not reliant on pitch or volume. In other words, two different sounds at the same frequency and amplitude are not necessarily the same. It is the timbre that distinguishes the tonal differences between the two sounds. This is how we are able to distinguish a violin from a guitar.


However, to help you understand what this has to do with the low end, it's best to start with the first thing about sound, any sound: it is made up of sine waves at different frequencies and amplitudes. If you understand this basic concept then you will understand why some sounds are tonal and others are atonal, why a sampled kick drum might exhibit 'noise' as opposed to a discernible pitch, and why a pure sine wave has no harmonic content.
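To make that concrete, here is a small numpy sketch (illustrative, not from the book) that builds a harmonically rich, saw-like waveform out of nothing but sine waves, each partial at 1/n amplitude:

```python
import numpy as np

fs = 48_000
f0 = 100                       # fundamental: 100 Hz
t = np.arange(fs) / fs         # one second of audio

# Sum sine partials at 1/n amplitude: the classic recipe for a sawtooth.
# The more partials are stacked, the closer the shape gets to an ideal saw.
saw = np.zeros_like(t)
for n in range(1, 21):         # 20 partials, all comfortably below Nyquist
    saw += np.sin(2 * np.pi * n * f0 * t) / n
```

Each partial alone is a pure, harmonic-free sine; together they form a bright, buzzy tone whose spectrum shows peaks at 100 Hz, 200 Hz, 300 Hz and so on, falling off at 1/n.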

To explain the diagrams below: I have drawn a simple sine wave that starts at 0, rises to +1 which we call the positive, drops to 0 and then drops below 0 to -1 which we call the negative. From 0 to +1 to 0 then to -1 and finally back to 0 is considered one complete cycle.

The phase values are expressed in degrees and lie on the x-axis. A cycle, sometimes referred to as a period, of a sine wave is a total motion across all the phase values.

The number of cycles completed in one second is measured in Hertz (Hz) and represents frequency. A good example of this is the note A4, which you have come across so many times. A4 is 440 Hz: this means that the waveform cycles (repeats itself) 440 times per second, and this frequency represents the pitch. If I jump to A5, which is one octave higher, I double the frequency to 880 Hz. If I halve A4 I get A3 (220 Hz), which is one octave lower.
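These octave relationships can be expressed in a couple of lines. The sketch below is my own illustration, assuming the standard A4 = 440 Hz tuning; `midi_to_hz` uses the common equal-temperament convention of MIDI note 69 = A4.

```python
# Frequency doubles with every octave: A3 -> A4 -> A5 is 220 -> 440 -> 880 Hz.
def octave_shift(freq_hz, octaves):
    """Shift a frequency up (+) or down (-) by whole octaves."""
    return freq_hz * 2.0 ** octaves

# General equal-temperament formula (MIDI note 69 = A4 = 440 Hz).
def midi_to_hz(note):
    return 440.0 * 2.0 ** ((note - 69) / 12)
```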

Partial and total phase cancellations are critical to understand, as I will be showing you how to use some very specific techniques to create new sonic textures using these concepts. It is equally important to understand that a sound has a timbre, and that this timbre can be expressed by partials which form, apart from the fundamental, both overtones and undertones, as we will cover techniques for managing low frequencies without having to use the fundamental frequency of the sound. Additionally, when we come to managing shared frequencies (bass and drums), the concept of harmonics is very useful, as we are continually fighting the battle of clashing frequencies, frequency smearing, gain summing and so on. For example, sine waves have no harmonic content, so some dynamic processes yield no useful results and more specialised techniques are required. Saw waveforms, on the other hand, are rich in harmonics, so we are able to use fairly standard techniques to accent the sweet spots and eradicate artifacts.

I will now copy the same sine wave and phase-offset it (phase shift and phase angle) so you can see the phase values:

The shift value is set to 90, which denotes a phase shift of 90 degrees. In essence, the two waveforms are now 90 degrees out of phase.

The next step is to phase shift by 180 degrees, and this will result in total phase cancellation. The two waveforms, when played and summed together, will produce silence, as each peak cancels out each trough.
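To illustrate, here is a small numpy sketch of my own (not from the book) summing a 440 Hz sine with 90- and 180-degree shifted copies of itself:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
f = 440.0

original = np.sin(2 * np.pi * f * t)
shift_90 = np.sin(2 * np.pi * f * t + np.pi / 2)   # 90 degrees out of phase
shift_180 = np.sin(2 * np.pi * f * t + np.pi)      # 180 degrees out of phase

silence = original + shift_180   # every peak meets a trough: total cancellation
partial = original + shift_90    # 90 degrees shifts and only partially cancels
```

The 180-degree sum is silence down to floating-point noise, while the 90-degree sum is still clearly audible.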


When two shared (i.e. the same) frequencies from different layers, at the same gain value, are layered, you invariably get a gain boost at that particular frequency. This form of summing can be good if intended, or it can imbalance a layer and make certain frequencies stand out that were not intended to be prominent. A good way around this problem is to leave ample headroom in each waveform file so that when two or more files are summed they do not exceed the ceiling and clip.

If you take two sine waves of the same frequency and amplitude and sum them one on top of the other, you will get a resultant gain increase of 6 dB (strictly, 20 × log10(2) ≈ 6.02 dB).
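This 6 dB figure is easy to verify in code. A quick numpy sketch (my own illustration):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 440 * t)
b = np.sin(2 * np.pi * 440 * t)   # identical frequency, phase and amplitude

summed = a + b
gain_db = 20 * np.log10(np.max(np.abs(summed)) / np.max(np.abs(a)))
```

`gain_db` comes out at 20 × log10(2) ≈ 6.02: doubling the amplitude adds just over 6 dB.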

Summing is important when dealing with the low end as any form of layering will have to take into account summed values.


When two shared frequencies are layered and one has a higher gain value than the other, it can 'hide' or 'mask' the lower-gain frequency. How many times have you used a sound that on its own sounds excellent, but gets swallowed up when placed alongside another sound? This happens because the two sounds have very similar frequencies and one is at a higher gain; hence one 'masks', or hides, the other sound. This results in the masked sound sounding dull, or simply going unheard. As we are dealing with the low end, this problem is actually very common, because we are layering, in one form or another, similar frequencies.


The individual sinusoids that collectively form an instrument's Timbre are called Partials, also referred to as Components. Partials have frequency, amplitude and, more critically, behaviour over time (please refer to my book on the subject of EQ – EQ Uncovered). How we perceive the relationships between all three determines the Timbre of a sound.


The Fundamental is determined by the lowest pitched partial. This can be the root note of a sound or what our ears perceive as the ‘primary pitch’ of a sound (the pitch you hear when a note is struck).


Using the fundamental as our root note, partials pitched above the fundamental are called overtones and partials pitched beneath the fundamental are called undertones, also referred to as Sub Harmonics. These partials are referred to, collectively, as Harmonics. This can be easily represented with a simple formula using positive integers:

f, 2f, 3f, 4f etc..

f denotes the fundamental and is the first harmonic. 2f is the second harmonic and so on.
If we take A4 = 440 Hz then f = 440 Hz (first harmonic and fundamental).
The second harmonic (overtone) would be 2 x 440 Hz (2f) = 880 Hz.

Sub Harmonics are represented by the formula: 1/n x f where n is a positive integer. Using the 440 Hz frequency as our example we can deduce the 2nd subharmonic (undertone) to be ½ x 440 Hz = 220 Hz and so on.

An area that can be very confusing is the assumption that harmonics and overtones are the same thing. They are not. Even-numbered harmonics are odd-numbered overtones and vice versa. The easiest way of looking at this, or rather of counting, is to think of it as follows:

Let’s take the A4 440 Hz example:
If A4 is the fundamental tone then it is also regarded as the 1st Harmonic.
The 1st Overtone would then be the 2nd Harmonic.
The 2nd Overtone would be the 3rd Harmonic and so on…
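The whole numbering scheme can be captured in three tiny functions (my own illustrative sketch; `f` is the fundamental in Hz):

```python
def harmonic(f, n):
    """n-th harmonic: n x the fundamental (n = 1 is the fundamental itself)."""
    return n * f

def overtone(f, n):
    """n-th overtone: always one step behind the harmonic numbering."""
    return harmonic(f, n + 1)

def subharmonic(f, n):
    """n-th subharmonic (undertone): the fundamental divided by n."""
    return f / n
```

For A4: `harmonic(440, 2)` and `overtone(440, 1)` both give 880 Hz, showing that the 1st overtone is the 2nd harmonic, while `subharmonic(440, 2)` gives 220 Hz.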


Most musical sounds consist of a series of closely related harmonics that are simple multiples of each other, but some (such as bells and drums) contain partials at more unusual frequencies, as well as partials that may initially seem to bear no relation to the fundamental tone. We can go into more detail about these later on.

It is important to understand this concept, as tuning drum sounds and marrying and complementing their frequencies with tonal basses is an area that troubles most producers.

When managing low-end frequencies the phase relationships and harmonic content are more important than any other concept because of the limited frequency range we have to process, the nature of the sounds we are dealing with and the types of processing we need to apply.

I have often found frequency charts excellent for ‘normal’ acoustic instruments but a little hit and miss when it comes to synthetic sounds as these sounds will invariably contain a combination of waveforms and associated attributes that will vary dramatically from the standard pre-defined acoustical frequencies. However, ranges of this type can help as a starting point and some of the following might be helpful to you:

Sub Bass

This is the one frequency range that causes most of the problems when mixing low-end elements and for a number of reasons:

We tend to attribute a range starting from (about) 12 Hz to 60 Hz for this vital area. Although our hearing range has a ballpark figure of 20 Hz – 20 kHz, we can 'feel' energies well below 20 Hz. In fact, you can 'hear' the same energies by running a sine wave at high amplitude, but I don't recommend that at all; we use these sub frequencies at high amplitudes to test audio systems. It is often said that cutting low-end frequencies will brighten your mix. Yes, this is true. It is said that too much low-end energy will muffle and muddy up a track. Yes, this is also true. In fact, I cut out any redundant frequencies before I even start to mix a track. However, this is not the only reason we cut certain frequencies below the frequency we are trying to isolate and enhance; it also has to do with the impact the lower end of this range has on processors like compressors (more on this in later chapters).


Bass

I have seen some wild figures for this range, as bass can encompass a huge range of frequencies depending on whether it is acoustic or synthetic. But the 'going rate' seems to be anywhere between 60 Hz all the way to 300 Hz. The reason this range is so critical is that most of the sounds relevant to this low end in your mix will carry fundamentals and undertones in this range and will form the 'boom' of a track. This frequency range presents us with some of the most common problems that we will try to resolve in later chapters, as so many frequencies reside in this range that their summed amplitudes alone will create metering nightmares.

We will deal with frequencies above these ranges when we come to working through the exercises otherwise it is simply a case of me writing another frequency chart and attributing descriptions for each range. I am only concerned with the relevance of these frequencies in relation to the low end and not for anything else.

Kick Drum

I find kick drum frequency ranges almost useless because in today's music, or at least the genres this book is concerned with (EDM and Urban), kick drums are multi-layered and in most cases sample-based as opposed to tuned acoustic kicks. So remarks like 'boost between 60 Hz – 100 Hz to add low end', although a guide, are both misleading and unhelpful. We have a general rule in audio engineering/production: you cannot boost frequencies that are not there. Although most sounds have a sensible frequency range we can use as a guide, the kick drum is an entity on its own, simply because of the move away from acoustically tuned drum kits to sample-based content. Tonal synthetic kick drums are a different story entirely: the tone will have a pitch, but layer that with other drum sounds and they can amass into one big mess if not handled sensibly. The TR 808, by design, behaves tonally, but in quite a specific manner, thanks to its clever bridged-T oscillator network, triggering and accent circuitry.
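To make the 'tonal kick' idea concrete, here is a rough numpy sketch of the audible behaviour of an 808-style kick: a sine wave whose pitch and amplitude both decay over time. This is not a model of the actual bridged-T circuit, and the envelope values are my own guesses, purely for illustration.

```python
import numpy as np

fs = 48_000
dur = 0.5
t = np.arange(int(fs * dur)) / fs

# Pitch envelope: start around 150 Hz and sweep down toward ~50 Hz,
# mimicking the 'ping' of a bridged-T oscillator when it is triggered.
pitch = 50 + 100 * np.exp(-t * 30)

# Integrate the instantaneous frequency into phase so the sweep has no clicks.
phase = 2 * np.pi * np.cumsum(pitch) / fs

# Decaying amplitude envelope gives the kick its body and tail.
kick = np.sin(phase) * np.exp(-t * 6)
```

Because the result is a swept sine, it has a clear pitch that must be tuned to the key of the track, exactly as described above.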

To help you, purely as a guide, here is a basic chart outlining fundamental and harmonic ranges.

I have included some of the higher frequency ‘instruments’ like the Soprano voice so you can get an idea of the range of frequencies that we have to consider when mixing one frequency range in a track with another. As I said at the start of this chapter, low-end frequency ranges can only be assigned when the entire frequency content of a track is known otherwise it will be a process in isolation and when it comes to mixing that one frequency range with the rest of the track you will encounter problems in ‘fitting it in’.


I have covered the above in most of my eBooks and on my website www.samplecraze.com as part of my ongoing free tutorials. So, if you find the above a little overwhelming please feel free to explore my other books or head on over to my site and read at leisure.

Extract taken from the eBook Low End.

Relevant content:

Low End – what is Low End and how to Analyse it

Sinusoidal Creation and Simple Harmonic Motion

Frequency and Period of Sound

Total and Partial Phase cancellation

What defines a good beat? Well, there is a term we use quite extensively when describing the overall ‘drive’ element of a track: ‘The Nod’. If you can nod to the rhythm of a song, then the beat works. The Nod actually refers to the flow of the beat, and the drive element constitutes the drum beat and bassline together. Because this book is about constructing beats, we will eliminate the bass from the equation. Bass, in itself, is a vast topic that I will cover at a later date when dealing with the low end of a track.

Most producers believe that a well-constructed beat, which has the Nod factor, comes down to two ingredients: the timing information of the whole beat and its constituents, and the dynamics of the individual components. In fact, there is far more to it than that. There are many factors that influence the flow of a drum beat and I will cover the most important ones.

I am Armenian, born in Iran, and have lived in other equally wondrous and safe havens like Lebanon and Kuwait. As a child, I had an obsession with sound, not exclusively music, but sound in its entirety. The diverse cultures to which I was exposed have afforded me the benefit of experiencing some exotic time signatures, dynamics, and timing elements. I always believed that the East held the title for advanced timing variations in music and obscure pattern structures, and for a while this was true. Today, we are blessed with a fusion of cultures and artistic practices. None are more infused with cross-cultural influences than the drum beats we incorporate in modern music.

Let’s break down the different areas that, collectively, form ‘The Nod’.

The Sounds

In dance-based music the choice of drum sounds is critical, and we have come a long way from processing live, acoustic kits into workable sounds that can live alongside a fast and driving BPM (beats per minute). Instead, we use drum samples and, in many cases, layer these samples with other samples and acoustic sounds. In the case of urban music, and the more defined and extreme sub-genre Hip Hop, we tend to go with samples from famous drum modules and drum samplers like the Emu SP1200, Roland TR808/CR78, and the MPC range—most notably the earlier versions such as the MPC60/3000.

The drum samples that we layer and process within a beat must meet very specific requirements. These include topping and tailing, mono/stereo, acoustic/noise/tonal, and pitch/duration specifications. Let me briefly explain, ahead of the longer discussions later in this book:

  • Topping and Tailing: This process entails truncating a sample (removing dead space before and after the sample) and then normalising it (using Peak Normalisation to bring the sample’s amplitude/level up to 0dB). We do this for a number of reasons. Crucial considerations include sample triggering, aligning samples on a timeline, and referencing gains within a kit or beat.
  • Mono/Stereo: A drum sample that carries identical information on both channels is redundant as a stereo file, unless that dual-channel identical information is required when layering using the 'flip and cancel' method. (Watch my video Art of Drum Layering Advanced, or read the article I wrote for Sound On Sound magazine entitled 'Layers of Complexity' for more information.) The only other instance where a stereo drum sample would be used is if the left and right channel information varies, as would be the case if a stereo effect or dynamic process were applied, if the sample were recorded live using multiple microphones, or if we were encoding/decoding mid/side recordings with figure-8 setups. We try to keep kick samples, in particular, in mono. This is because they remain in the center channel of the beat and, ultimately, the mix. For other samples like snares, claps, and so on, stereo can be very useful because we can then widen and creatively process the sample to taste.
  • Acoustic/noise/tonal: Acoustic drum sounds will invariably have been tuned at the playing and recording stages but will need to be re-tuned to the key of the track in which the beat lies. Tonal drum samples, like the legendary 808 kick drum, will also have to be tuned. More importantly, the frequency content of the sample will determine what type of dynamic processing can be applied. A sine-wave based tonal kick will have no harmonics within the waveform and will, therefore, be reliant on innovative dynamic processing techniques. Noise-based samples contain little or no tonal information, so require a different form of processing because the frequency content will be mainly atonal.
  • Pitch and Duration: Ascertaining and tuning atonal drum sounds is a nightmare for many, and this area is covered extensively in later chapters using specific tools and processes. Extending duration with pitch changes, altering pitch without altering duration, using time-stretching, and modulating pitch and/or duration using controllers and automation: all these are excellent forms of pitch manipulation.
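As a rough illustration of the topping-and-tailing idea described above, here is a minimal numpy sketch of my own (the threshold value is an arbitrary assumption): it truncates leading and trailing dead space, then peak-normalises the result.

```python
import numpy as np

def top_and_tail(sample, threshold=1e-4):
    """Truncate dead space either side of a sample, then peak-normalise to 0 dBFS."""
    audible = np.flatnonzero(np.abs(sample) > threshold)
    if audible.size == 0:
        return sample                         # all silence: nothing to trim
    trimmed = sample[audible[0]:audible[-1] + 1]
    return trimmed / np.max(np.abs(trimmed))  # peak normalisation to 0 dB

# Example: a quiet hit padded with dead space on both sides.
hit = np.concatenate([np.zeros(100), 0.25 * np.sin(np.linspace(0, 20, 400)), np.zeros(200)])
clean = top_and_tail(hit)
```

The cleaned sample triggers immediately, aligns predictably on a timeline, and sits at a known reference gain alongside the rest of the kit.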


  • Producers spend more time using the nudge feature and timeline of their DAW, refining timing information for beats, than on other time-variant processes. We have access to so many time-variant tools today that there really is no excuse to be unable to create either a tight and strict beat, or a loose and wandering beat, exactly as required. In fact, we have some nice workarounds and ‘cheats’ for those that have problems with timing issues, and I will cover these in more detail later.
  • Great timing in beat construction requires understanding several phenomena and techniques that I will explain in this book—BPM and how it relates to realistic timings for ‘played’ rhythms; Quantize, both in terms of divisions and how to alter these divisions; Ghost Notes and how they relate to perception; and Shadowing Beats, including the use of existing loops and beats to underlie, accent, and support the main beat. For example, if your drum beat is too syncopated and has little movement, you can reach for a Groove Quantize template in your DAW, or use other funky tools such as matching slice and hit points to existing commercial breaks.
  • The perception of a timing variance can be achieved in more than one way. Strangely enough, this leeway has been exhausted to death by Akai with the original Linn-designed pads and contacts. After the MPC 60 and 3000, Akai had no more timing variances in their hardware that could be attributed to ‘the MPC swing and sound’. Far from it. The timing of their DSP is rock solid. The timing of the pad’s initial strike, processed as channel pressure, note on/off and velocity curves, is what adds to the timing ‘delay’. This can be emulated on any pad controller that is sample-based because it is not hardware-specific. To further understand the perceptual formula, we need to look at the sample playback engine of all the top players. Bottom of the list lies Akai with their minimum sample count requirement, which demands so many cycles that if you truncate to a zero point sample start, the unit simply cannot cope with it. Add this ‘dead space’ requirement before a sample can be truthfully triggered to a pad that has inherent latency (deliberately designed by the gifted Roger Linn), and you end up with the ‘late’ and ‘loose’ feel of the MPCs. The sample count issue has now been resolved, and in fact, was corrected from the Akai 2500 onwards. I bring this up so that you are aware that there are very few magic boxes out there that pull out a tight yet loose beat. Nope. They all rely on physics to work. Yet, because of that requirement, we can work around the limitations and actually use them to our advantage. The MPCs have explored and exhausted these limitations quite successfully.
  • I love using pads to trigger drum sounds as it makes me feel more in touch with the samples than a mouse click or keyboard hit. The idea that drums must be ‘hit’ is not new, and the interaction that exists in the physical aspect of ‘hitting’ drum pads is one that makes the creative writing process far more enjoyable and ‘true’ to its origins. After all, the Maya didn’t have keyboard controllers. For this book, I will be using the QuNeo to trigger samples, but occasionally I will also trigger via the keyboard (Novation SLMK2), because spanning templates can be a little confusing for those that do not understand the manufacturers’ default GM templates.
  • Early and late processes in aligning beat elements are also a creative and clever workaround for improving static syncopated beats. Simple movements of individual hits using grid subdivisions can add motion to strict 4/4, 3/4 and 6/4 beats, which are the common signatures used in modern music.


  • Although we think of our brains as really smart organs, they are not actually that smart when it comes to deciphering and processing sight and sound. If you were to snap your fingers in front of your face, the sound would reach your brain via the ears before the visual information reaches your brain via the eyes. That may sound strange, because light travels faster than sound, but it isn't so strange when you take into account the time it takes the brain to decipher the different sensory inputs. In addition, the brain does not recognise frequency or volume without a reference. This is what memory is for: referencing. The brain has an instinctual response to already-referenced frequencies and can turn off like a tap in a hurry when confronted with the same frequencies at the same amplitudes. However, when presented with the same frequencies at varying amplitudes, the brain has to work to decipher and reference each new amplitude. This keeps the brain active, and therefore interest is maintained. Next time you decide to compress your mix into a square wave because you think it will better 'carry your mix' across to listeners by rattling their organs, think twice. A mix squashed into a narrow dynamic range simply shuts the brain down, which then goes into 'irritation mode' because it has already referenced the constant amplitude for the frequency content in your track. The same processes take place when dealing with drum beats. The most interesting drum beats have acres of dynamic movement and do not rely on a single static amplitude for all the frequencies in the beat. Simple tasks, like altering individual note velocities or amplitudes, will add huge interest to your beats. I would be surprised if Clyde Stubblefield maintained the same 127 velocity across all his hits whilst playing the drums.


  • Individual drum sounds can be layered to give both depth and width, resulting in a texture that can be both dynamic and interesting. If you need to delve into this area in more detail please refer to my book Art of Drum Layering, or the Advanced Drum Layering video which explores very specific layering techniques using phase cancellation, mid/side, and so on. But don't confine yourself to drum sounds for layering. I have sampled and used kitchen utensil attacks, edited from individual amplitude envelope components, for the attack part of my snares and hi-hats, cardboard boxes close-miked with a large-diaphragm capacitor mic to capture the boom for kick bodies, and tapping on the head of a directional mic for some deep, breathy samples with which to layer my entire beats, and so on. If you can sample it, hell, use it!
  • Whole drum loops, treated as layers, can add vibrancy and motion to a static drum beat. Layering loops under a beat not only helps in acting as a guide for those that are not very good at drumming or creating grooves but also allows for some interesting new rhythms that will make the listener think you have incredible insight into beat making.
  • Layering tones beneath drum beats is an old and trusted method of adding low end. However, simply throwing a sine-wave under a beat doesn’t make it ‘have low-end’. You need to edit the waveform both in terms of frequency (pitch) and dynamics (in this instance: duration and velocity) and take into account the interaction between the low-frequency content of the beat and sine-wave along with the bass line. Many a Prozac has been consumed during the mix-down of this type of scenario.
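As a sketch of the sine-layer idea, this snippet of mine (with illustrative values; F1 ≈ 43.65 Hz is assumed as the track key) generates a tuned sine with short fades so the layer starts and ends without clicks:

```python
import numpy as np

fs = 48_000

def sine_layer(note_hz, dur=0.4, fade=0.01):
    """A sine tuned to the track key, with short fades to avoid clicks."""
    t = np.arange(int(fs * dur)) / fs
    tone = np.sin(2 * np.pi * note_hz * t)
    env = np.ones_like(tone)
    n = int(fs * fade)
    env[:n] = np.linspace(0.0, 1.0, n)   # fade-in
    env[-n:] = np.linspace(1.0, 0.0, n)  # fade-out so the layer ends cleanly
    return tone * env

layer = sine_layer(43.65)   # F1 ~ 43.65 Hz, for a track in F
```

Both the pitch and the duration are parameters here, which is exactly the editing the text calls for before the layer can sit with the beat and the bass line.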


  • Using modulators to create both motion and texture in a drum beat is not as hard as it may seem at first. The trick, as with all these processes, is to understand the tools and their limitations and advantages. For example, a low-frequency oscillator (LFO) triggering the filter cut-off using a fast ramp waveform shape can add a lovely squelchy effect to a clap sample. Another technique that I have often used is assigning a sine-shaped LFO at a low rate with filter resonance as its destination to run through the entire beat. I then layer this ‘effected’ version with the original dry beat. This gives the perception of tonal changes throughout the beat, even though it is not random.
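The LFO-to-cutoff idea can be sketched with a simple one-pole low-pass filter whose cutoff is swept by a sine LFO. This is a toy illustration of my own (the rates, depths, and the filter itself are arbitrary choices, not a specific plugin):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, fs)          # white noise standing in for a clap/beat bus

# A slow sine LFO sweeps the cutoff between roughly 200 Hz and 2 kHz, twice a second.
cutoff = 1100 + 900 * np.sin(2 * np.pi * 2 * t)

# Per-sample coefficient for a one-pole low-pass: a = 1 - exp(-2*pi*fc/fs).
a = 1 - np.exp(-2 * np.pi * cutoff / fs)

y = np.zeros_like(x)
state = 0.0
for n in range(len(x)):
    state += a[n] * (x[n] - state)  # y[n] = y[n-1] + a*(x[n] - y[n-1])
    y[n] = state
```

Layering `y` (the swept version) under the dry signal gives the perceived tonal movement described above.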

Drum Replacement/Ripping Beats

  • Creative beat construction techniques using drum replacement and ripping beats include: substituting your own drum samples for drum sounds within a beat; using the timing information from an existing drum beat as a Quantize or groove template for your own beats; ripping both MIDI and dynamic data from an existing drum beat; and using two beats at different tempos, matching their data to create a new beat that combines drum elements from both.

Let’s now look at some of the techniques used to shape and hone drum beats into working ‘Nods’. I will try to incorporate as much of the above as possible into real-life exercises using examples of common chart hits. In terms of tools, I have found that a decent DAW, a capable pad controller, and a good all-round keyboard controller will cover the areas that we require. A pad controller is not crucial, but it does allow for more interaction and dynamic ‘feel’ (we all love to hit pads).

Extract taken from the eBook Beat Construction

I am often asked why I teach my students how to mix to a pink noise profile, be it at the channel stage, or pre-master prepping. The answer is simple: ‘the most important aspect of production is the understanding and management of relative levels.’

When I first began this wonderful and insane journey into audio production I was blessed to have had producer friends that were also my peers. In those ancient days, the industry was very different. The community aspect was both strong and selfless. We were not competing with each other. Instead, we chose to share our knowledge and techniques. It was then that I was introduced to noise as a mixing tool, and coupled with my sound design experience I took to it like a pigeon on a roof.

I was taught the old school method of tuning your ears and mindset to work from a barely audible mix level. If I did not hear everything in the mix then I had to go back and work the quieter channels. If something stood out in the mix then I knew the culprit had to be re-balanced, and all of this relied heavily on relative levels.

Relative levels in a mix context deal with the relationships between all sounds, and that includes effects and dynamics. You may think that relative levels refer only to volume, but that is not entirely accurate. Relative levels deal with all level management, from sounds to effects and dynamics. EQ is an excellent example of frequency/gain management, but so are reverb levels, balancing parallel channels or wet/dry mix ratios, and so on.

An example of how this technique helps the student to understand all areas of relative gain is the classic reverb conundrum. We've all been there. If there is too much reverb level, the sound will either lose energy through reverb saturation, sound too distant if the wet and dry mix is imbalanced, or sound out of phase. Through continual use of this technique, the student learns how well the sound and its effect sit together, whether the dry/wet ratio is right, and whether the right reverb algorithms were used. This level of familiarity can only help the student, and it is the simplest working way of attuning the ears not only to hear level variances but also to notice when something somewhere sounds 'wrong'.

In some ways, this is very much like ear training but for producers as opposed to musicians/singers.

When I feel my students have reached an acceptable level conducting level and pan mixes (another old school apprentice technique), I move them onto pink noise referencing. By the time they have finished countless exercises using all manner of noise responses, they develop an instinctive understanding of gain structuring every aspect of the signal path, from channel to master bus, and with that comes an understanding and familiarity of what sounds natural and ‘right’.
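For readers who want to experiment, here is one common way to generate a pink-noise reference in code (an illustrative numpy sketch of my own, not the author's method): shape a random-phase spectrum so that power falls as 1/f, which gives equal energy per octave.

```python
import numpy as np

fs = 48_000

def pink_noise(n, seed=0):
    """Pink (1/f power) noise built in the frequency domain: equal energy per octave."""
    rng = np.random.default_rng(seed)
    bins = n // 2 + 1
    mags = np.zeros(bins)
    mags[1:] = 1.0 / np.sqrt(np.arange(1, bins))   # |X(f)| ~ 1/sqrt(f)  =>  power ~ 1/f
    phases = rng.uniform(0.0, 2 * np.pi, bins)
    spectrum = mags * np.exp(1j * phases)
    noise = np.fft.irfft(spectrum, n)
    return noise / np.max(np.abs(noise))           # peak-normalise the reference

ref = pink_noise(fs)   # one second at 48 kHz, so FFT bin k sits at k Hz
```

Because power falls at 1/f, each octave band (100–200 Hz, 1–2 kHz, and so on) carries roughly the same total energy, which is exactly why pink noise works as a level-balancing reference across the whole spectrum.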

Supplemented with listening to well-produced music, this technique has saved my students both time and money, and it is such a simple technique that even Trump could do it… well… with help, of course.

Eddie Bazil

If you prefer the visual approach then try this video tutorial:

Mixing to Pink Noise

Relevant content:

DIY Mastering using Pink Noise

The Different Colours of Noise