Quantise

Quantisation is the process of aligning a set of musical notes to conform to a grid. When you quantise a group of MIDI notes in a song, the program moves each note to the closest point on the grid. The quantise value determines the resolution of that grid, and therefore where the notes are moved to.

Swing: Allows you to offset every second position in the grid, creating a swing or shuffle feel. Swing is a great quantise weapon. It is most commonly used by the Hip Hop fraternity to compensate for the lack of a ‘shuffle’ feel in the beat. The amount of swing applied to the quantise is expressed as a percentage: the higher the percentage, the more pronounced the swing.
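To make the mechanics concrete, here is a minimal Python sketch of grid quantise with swing. The 480 PPQN tick resolution (which gives the 120 ticks per sixteenth note mentioned later) and the exact swing mapping are assumptions for illustration, not taken from any particular sequencer:

```python
# A minimal sketch of grid quantise with swing, assuming 480 PPQN
# (ticks per quarter note), so a sixteenth-note grid is 120 ticks.

PPQN = 480

def quantise(position, grid=PPQN // 4, swing=0.0):
    """Snap a tick position to the nearest grid point, with optional swing.

    grid  -- grid spacing in ticks (PPQN // 4 = sixteenth notes)
    swing -- 0.0 to 1.0; delays every second grid position by up to
             half a grid step, giving the shuffle feel
    """
    index = round(position / grid)            # nearest grid line
    offset = int(swing * grid / 2) if index % 2 else 0
    return index * grid + offset

# Sloppy sixteenth-note hits: straight, then with 60% swing
notes = [5, 115, 250, 371, 475]
print([quantise(n) for n in notes])
print([quantise(n, swing=0.6) for n in notes])
```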

It is important to remember that the slower the tempo of your track, the more syncopated the music will sound if a low quantise value is used. This has caused problems for many songwriters, who usually compensate by using higher quantise values or working in double time (ie using a tempo of 140 bpm (beats per minute) for a song that is meant to be at 70 bpm). Working in double time is the equivalent of using half the quantise value. For example, a song meant for 70 bpm but written at 140 bpm can use a quantise value of 16, which equates to using a quantise value of 32 at the original 70 bpm tempo.

The swing function allows for a more ‘offset’ feel when quantising and makes the music sound more human as opposed to robotic. In fact, swing is such a potent tool that the Dance heads are now using it to give a little life to hi-hat fills etc.

Grid and type:

Grid allows you to pick a note length (for example: 1/4, 1/8, and so on) to use for the resolution, while Type sets a modifier for the note length: Straight, Triplet or Dotted. I will not go into this in depth, as you would need to understand note lengths etc, but what I will say is that the triplet is extremely handy when programming drums, particularly hi-hat patterns that require fast-moving fills.

Random Quantise:

Another feature that can be useful for making your performances sound in time without being completely mechanical is Random Quantise. Here you specify a value in ticks (120ths of sixteenth notes), so that when a note is quantised to the nearest beat specified by the other parameters in the quantise template, it is offset by a random amount from zero to the value specified by the Random Quantise setting. Basically, this takes away the rigidity of syncopated rhythms, particularly when dealing with hi-hats. It allows for a ‘random’ element, much akin to a drummer’s human timing.
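A hedged sketch of the idea in Python; the grid value matches the 120-ticks-per-sixteenth figure above, the offset range is arbitrary, and I follow the description here by offsetting from zero up to the specified value (some implementations randomise the direction as well):

```python
import random

GRID = 120  # ticks per sixteenth note, matching the figure above

def random_quantise(position, grid=GRID, rand_ticks=6):
    """Snap to the nearest grid point, then offset by 0..rand_ticks ticks."""
    snapped = round(position / grid) * grid
    return snapped + random.randint(0, rand_ticks)

print([random_quantise(p) for p in (5, 115, 250, 371)])
```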

Most software will come with many additional tools to refine the quantise function and its settings. Humanise, iterative, freeze etc all give the user more detailed editing power. For the sake of this e-Book, I am keeping it simple and only using the functions that most will adopt.

Preparation and Process

Last month we touched on the digital process.

This month we are going to talk about the preparation, the signal path, dos and don’ts and what some of the terminologies mean.

The most important part of the sampling process is preparation. If you prepare properly, then the whole sampling experience is more enjoyable and will yield the optimum results.
Throughout this tutorial, I will try to incorporate as many sampler technologies as possible, and also present this tutorial side by side, using both hardware and software samplers.

So let us start with the signal path: signal being the audio you are recording, and path being the route it takes from the source to the destination.

The signal path is the path that the audio takes from its source, be it a turntable, a synthesizer etc, to its final destination, the computer or the hardware sampler. Nothing is more important than this path and the signal itself. The following is a list of guidelines. Although it is a general guide, it is not scripture. We all know that the fun of sampling is actually in breaking the so-called rules and coming up with innovative ways and results. However, the guide is important as it gives you an idea of what can cause a sample to be less than satisfactory when recorded. I will list some pointers and then go into more detail about each one.

  • The more devices you have in the signal path, the more the sample is degraded and coloured: each device introduces noise into the path, and headroom can be compromised depending on what devices are in the path.
  • You must strive to obtain the best possible S/N (signal-to-noise) ratio throughout the signal path, maintaining a hot and clean signal.
  • You must decide whether to sample in mono or stereo.
  • You must decide what bit depth and sample rate you want to sample at.
  • You need to understand the limitations of both the source and destination.
  • You need to understand how to set up your sampler (destination) or sound card (destination) to obtain the best results.
  • You need to understand what it is that you are sampling (source) and how to prepare the source for the best sampling result.
  • If you have to introduce another device into the path, say a compressor, then you must understand what effect this device will have on the signal you are sampling.
  • You must understand the best way to connect the source and destination together, what cables are needed and why.
  • You need to calibrate the source and destination, and any devices in the path, to obtain the same gain readout throughout the path.
  • You need to understand the tools you have in the destination.
  • Use headphones for clarity of detail.

Basically, the whole process of sampling is about getting the audio from the source to the destination, keeping the audio signal strong and clean, and being able to listen to the audio in detail so you can pick out any noise or other artifacts in the signal.

In most cases, you can record directly from the source to the destination without having to use another device in the path. Some soundcards have preamps built into their inputs, along with line inputs, so you can connect directly to these from the source. Hardware samplers usually have line inputs, so you would need a dedicated preamp to get a microphone signal into the sampler. The same is true for turntables: most turntables need an amp to boost the signal. In this instance, you simply feed the output from the amp into your sampler or soundcard (assuming the soundcard has no preamp input). Synthesizers can be connected directly, via their outputs, to the inputs of the hardware sampler or the line inputs of the soundcard.

As pointed out above, try to minimise the use of additional devices in the path. The reason is quite simple. Most hardware devices have an element of noise, particularly those that have built-in amps or power supplies. Introducing these into the signal path adds noise to the signal. So, the fewer devices in the path, the less noise you have. There are, as always, exceptions to the rule. For some of my products, I have re-sampled my samples through some of my vintage compressors. And I have done it for exactly the reasons I just gave as to why you must try not to do this. Confused? Don’t be. I am using the character of the compressors to add to the sample character. If noise is part of the compressor’s character, then I will record that as well. That way, people who want that particular sound, influenced by the compressor, will get exactly that. I have, however, come across people who sample with a compressor in the path just so they can have as strong and pumping a signal as possible. This is not advised. You should sample the audio with as much dynamic range as possible. You need to keep the signal hot, ie as strong and as loud as possible without clipping the soundcard’s input meters or distorting in the case of hardware samplers. Generally, I always sample at a level 2 dBu below the maximum input level of the sampler or soundcard, ie 2 dBu below 0. This allows for enough headroom should I choose to then apply dynamics to the sample, such as compression. Part 1 of these tutorials explains dynamic range and dBs, so I expect you to know this. I am a vicious tutor, aren’t I? He, he.

My set up is quite simple and one that most sampling enthusiasts use.

I have all my sources routed through to a decent quality mixer, then to the sampler or my computer’s soundcard. This gives me great routing control, many ways to sample and, most important of all, better control of the signal. The huge bonus of using a mixer in the path, and as the heart of the sampling path, is that I can apply equalisation (eq) to the same source sample and record multiple takes of the same sample with different eq settings. This way, using the same sample, I get masses of variety. The other advantage of using a mixer is that you can insert an effect or dynamic into the path and have more control over the signal than just plugging the source into an effect unit or a compressor.

Headphones are a must when sampling. If you use your monitors (speakers) for referencing when you are sampling, then a great deal of the frequencies get absorbed into the environment, so it is always hard to hear the lower or higher noise frequencies. Using headphones, either on the soundcard or the sampler, you only hear the signal and not the environment’s representation of the signal. This makes finding noise or other artifacts much easier.

The decision to sample in mono or stereo is governed by a number of factors, the primary one being memory. All hardware samplers have memory restrictions, the amount of memory being governed by the make and model of the sampler. Computer sampling is another story entirely, as you are only restricted by how much RAM you have in your computer. A general rule of thumb is: one minute of 44.1 kHz (an audio bandwidth of about 20 kHz using the Nyquist theorem, which I covered in Part 1), 16-bit stereo audio equates to about 10 megabytes of memory. Sampling at the same rate in mono gives you double the time for the same memory, ie 2 minutes, or takes up about 5 megabytes per minute.
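You can sanity-check that rule of thumb with a few lines of Python; this assumes uncompressed 16-bit PCM:

```python
def sample_memory_mb(seconds, sample_rate=44_100, channels=2, bit_depth=16):
    """Memory used by uncompressed PCM audio, in megabytes."""
    return seconds * sample_rate * channels * (bit_depth // 8) / 1_000_000

print(sample_memory_mb(60))               # one stereo minute: ~10.6 MB
print(sample_memory_mb(60, channels=1))   # one mono minute: ~5.3 MB
```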

So, always bear your sampler’s memory restriction in mind. Another factor that governs the use of mono over stereo is whether you actually need to sample that particular sound in stereo. The only time to sample in stereo is when there is an added sonic advantage in doing so, particularly if a sound is fuller and has varying sonic qualities on the left and right sides of the stereo field, and you need to capture both sides. When using microphones on certain sounds, like strings, it is often best to sample in stereo. You might be using 3 or 4 microphones to record the strings, but then route these through your mixer’s stereo outputs or subgroups to your sampler or soundcard. In this case, stereo sampling will capture the whole tonal and dynamic range of the strings. For those using samplers with tight memory restrictions, sample in mono and, if you can tolerate it, at a lower sampling rate. But make sure that the audio is not compromised.

At this point, it is important to consider what it is that you are sampling and whether you are using microphones or direct sampling (using the outputs of a device into the inputs of the sampler or soundcard). For sounds like drum hits, or any sound that is short and not based on any key or pitch (unlike instrument or synthesizer sounds), keep it simple and clean. But what happens when you want to sample a sound from a particular synthesizer? This is where the sampler needs to be set up properly, and where the synthesizer has to be set up to deliver the best possible signal: one that is not only clean and strong but that can also be easily looped, placed on a key and then spanned. In this case, where we are trying to sample and create a whole instrument, we need to look at multi-sampling and looping.

But before we do that, we need to understand the nature of what we are sampling and its tonal qualities. Invariably, most synthesizer sounds will have a huge amount of dynamics programmed into the sound. Modulation, panning, oscillator detunes etc are all in the sound that you are trying to sample. In the case of analog synthesizers, it becomes even harder, as there is so much movement and tonal variance that sampling becomes a nightmare. So, what do we do? Well, we strip away all these dynamics so that we are left with the original sound, uncoloured by programming. In the case of analog synthesizers, we will often sample each and every oscillator and filter. By doing this, we make the sampling process a lot easier and more accurate. Remember that we can always program the final sampled instrument to sound like the original instrument. By taking away all the dynamics, we are left with simpler, constant waveforms that are easier to sample and, more importantly, easier to loop.

The other consideration is pitch/frequency. To sample one note is fine, but to then try to create a 5-octave preset from this one sample would be a nightmare, even after looping the sample perfectly. There comes a point at which a looped sample, spanned too far, will begin to fall out of pitch and result in a terrible sound, full of artifacts and out-of-key frequencies. For each octave, the frequency is doubled. The way around this problem is multi-sampling. This means we sample more than one note of the sound, usually every third or fifth semitone. By sampling a collection of these notes, we have a much better chance of recreating the original sound accurately. We then place these samples in their respective ‘slots’ in the instrument patch of the sampler or software sampler, so a sampled C3 note would be put into the C3 slot on the instrument keyboard layout. Remember, we do not need to sample each and every note, just a few. That way we can span the samples: we can use a C3 sample and know that it will still be accurate from a few semitones down to a few semitones up, so we spread that one sample down a few semitones and up a few semitones. These spreads are called zones or keygroups (Emu call them zones, Akai call them keygroups). Where one sample’s span ends, we place the next sample, and so on, until the keyboard layout is complete. This saves us a lot of hard work, in that we don’t have to sample every single note, and it also gives us a more accurate representation of the sound being sampled. However, multi-sampling takes up memory. It is a compromise between memory and accurate representation that you need to decide on.
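Here is a small illustrative sketch of the keygroup idea in Python. The MIDI note numbers (C3 = 60 here; octave naming conventions vary between manufacturers), the every-third-semitone roots and the +/-1 semitone span are assumptions:

```python
def build_keygroups(root_notes, span=1):
    """Map each sampled root note to a keygroup of +/- `span` semitones."""
    return [{"root": r, "low": r - span, "high": r + span} for r in root_notes]

def playback_rate(root, played_note):
    """Pitch ratio when a sample rooted at `root` is played on another key:
    each semitone is a factor of 2**(1/12); an octave doubles the rate."""
    return 2 ** ((played_note - root) / 12)

roots = range(60, 73, 3)            # C3, D#3, F#3, A3, C4 (every third semitone)
for keygroup in build_keygroups(roots):
    print(keygroup)
print(playback_rate(60, 62))        # D3 played from a C3 sample: ~1.122
```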

There are further advantages to multi-sampling, but we will come to those later. The more detailed or complex a sound’s characteristics, the more samples are required. In the case of a piano, it is not uncommon to sample every second or third semitone, and also to sample the same notes at varying velocities, so we can emulate the playing velocities of the piano. We sample hard, mid and soft velocities of the same note, then layer these and apply all sorts of dynamic tools to try to capture the original character of the piano being played. As I said, we will come to this later.

An area that is crucial is calibration. You want to make sure that the sound you are trying to sample shows the same level on the mixer’s meters as on the sampler’s or soundcard’s meters. If there is a mixer in the path, then you can easily use the gain trims on the channel the source is connected to, to match the level of the sound you want to sample to the readout of the input meters of the sampler or the soundcard. If there is no mixer in the path, then you need to have your source sound at maximum, assuming there is no distortion or clipping, and your sampler’s or soundcard’s input gain at just below 0 dBu. This is a good hot signal. If you had it the other way around, whereby the sound source level was too low and you had to raise the input gain of the sampler or soundcard, you would be raising the noise floor, resulting in a noisy signal.

The right cabling is also crucial. If your sampler’s line inputs are balanced, then use balanced cables; don’t use phono cables with jack converters. Try to keep a reasonable distance between the source and destination, and if your environment suffers from RF interference, caused by amps, radios, antennae etc, then use shielded cables. I am not saying use expensive brands, just use correctly matched cables.

Finally, we are left with the tools that you have in your hardware or software sampler.

In the virtual domain, you have far more choice in terms of audio processing and editing tools, and they are far cheaper than their hardware counterparts. So, sampling into your computer will afford you many more audio editing tools and options. In a hardware sampler, the tools are predefined.

In the next section, we will look at some of the most common tools used in sampling.

Additional content:

Preparing and Optimising Audio for Mixing

Normalisation – What it is and how to use it

Topping and Tailing Ripped Beats – Truncating and Normalising

Multiband Compression

Also known as MB or MBC.

These divide the incoming audio signal into multiple bands, with each band compressed independently of the others.

By contrast, with full-band compressors the whole signal is treated: when a peak is detected, the whole signal is compressed, and so other frequencies are also subjected to compression.

Multiband compression only compresses the frequency bands chosen, so a more fluid and less abrupt result is gained. Instead of having one peak trigger the compressor into compressing the entire signal, the multiband allows individual bands to be compressed. On some compressors, you even have the option of selecting bands that will not undergo any treatment. In essence, a multiband compressor comprises a set of filters that splits the audio signal into two or more frequency bands. After passing through the filters, each frequency band is fed into its own compressor, after which the signals are recombined at the output.

The main advantage of multi-band compression is that a loud event in one frequency band won’t trigger gain reduction in the other bands.

Another feature of the multiband compressor is that you are offered crossover points. This is crucial, as it gives you control over where each frequency band sits. Setting these crossover points is the heart of the compressor and key to processing the right part of the spectrum with the right settings. For example: if you are treating the vocals in the mid-range but put your low-end crossover too far into the middle range, then the low-end compression settings will also affect the mid-range vocals.
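As a rough illustration of the split/compress/recombine idea, here is a bare-bones three-band sketch in Python using SciPy Butterworth crossovers. The crossover points, threshold and ratio are arbitrary example values, and a real design would use phase-matched crossovers and smoothed envelope detection rather than this static per-sample gain:

```python
import numpy as np
from scipy import signal

def split_bands(x, sr, lo_xover=200.0, hi_xover=2000.0, order=4):
    """Split audio into low/mid/high bands at the two crossover points."""
    sos_lo = signal.butter(order, lo_xover, "lowpass", fs=sr, output="sos")
    sos_mid = signal.butter(order, [lo_xover, hi_xover], "bandpass", fs=sr,
                            output="sos")
    sos_hi = signal.butter(order, hi_xover, "highpass", fs=sr, output="sos")
    return [signal.sosfilt(s, x) for s in (sos_lo, sos_mid, sos_hi)]

def compress(band, threshold_db=-20.0, ratio=4.0):
    """Static downward compression of everything above the threshold."""
    level_db = 20 * np.log10(np.abs(band) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    return band * 10 ** (-over * (1 - 1 / ratio) / 20)

def multiband_compress(x, sr):
    low, mid, high = split_bands(x, sr)
    # Only the low band is compressed: a loud kick won't duck the mids/highs.
    return compress(low) + mid + high

sr = 44_100
t = np.arange(sr) / sr
x = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 1000 * t)
print(round(multiband_compress(x, sr).max(), 3))
```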

Multiband compression can be either friend or enemy. It all comes down to how you use it and when. It can be a great compressor for controlling problematic frequencies, or for boosting certain ranges in isolation from others. I tend to use them to rescue poor stereo mixes, and with features like per-band crossover frequencies, thresholds and ratios, I can achieve more accurate processing.

However, use with care.

Relevant content:

Multiband Compression – what is it and how do you use it

Compression Masterclass

This subject has done the rounds for years.

Often it is the cash-strapped home studio owner who has to resort to using headphones, the cheaper and space-saving solution, instead of speakers to conduct mixing projects. There are obvious advantages to using headphones for mixing, but glaring disadvantages too. There are no winners here on either side of the fence. Quite simply, if you want to be fully armed to conduct the best mixes, then a combination of both is essential.

Good quality headphones can reveal detail that some good speakers/monitors omit. In terms of sound design, a good pair of headphones is imperative, as it will be unforgiving in revealing anomalies, and in terms of maintaining a clean and noise-free signal path, it is crucial. On the flip side, stereo imaging and panning information is much harder to judge on headphones. Determining the spatial feel of a mix is almost impossible on headphones, but simple with speakers. Pans are pronounced and extreme on headphones and do not translate well to speakers. Even EQ can come across as subdued or extreme.

I find that if I mix on headphones alone, then the mix never travels well when auditioned with monitors. The reverse is also true.

When using monitors, because they are placed in front of us, our natural hearing perceives the soundstage as directly in front of us. With headphones, because the ‘speakers’ are on either side of us, there is no real front-to-back information. Headphones also provide a very high degree of separation between the left and right channels, which produces an artificially detailed stereo image. Our brains and ears receive and analyse/process sound completely differently when using headphones as opposed to monitors. When using headphones, each ear only hears the audio signal carried on the relevant channel, but with speakers, both ears hear the signals produced by both loudspeakers.

You also need to factor in that different people perceive different amounts of bass: factors such as the distance between the headphone diaphragm and the listener’s ear will change the level of bass. The way in which the headphone cushion seals around the ear also plays a part, which is why pushing the phones closer to your ears produces a noticeable increase in bass. This change in bass energy alone negates the idea of having a correct tonal balance in the mix being auditioned.

With monitors, both ears hear both the left and right channels.

If your room is acoustically problematic and you have poor monitors, then headphones may well be a better and more reliable approach. But it is a lot harder to achieve the same kind of quality and transferability that comes more naturally on good monitors in a good acoustically treated room.

I find that if I record and check all my signals with headphones, then I am in a strong position to hear any anomalies and better placed to judge the clarity and integrity of the recorded signals. This, coupled with speaker monitoring, assures me of the best of both worlds: clarity and integrity married with spatial imaging.

If you want further reading on this subject, then I recommend Martin Walker’s seminal article entitled: Mixing On Headphones.

Briefly explained:

A filter allows you to remove unwanted frequencies and also to boost certain frequencies. Which frequencies are removed and which are left depends on the type of filter you use.

Before we can list the different types of filters and what they do, there are a few terms and definitions we need to cover. These are crucial and are used all the time so it is important that you know what these terms are and what they mean.

Cut-off frequency: This is the point (frequency) at which the filter begins to filter (block or cut out). The filter will lower the volume of the frequencies above or below the cut-off frequency, depending on the type of filter used. This ‘lowering of the volume of the frequencies’ is called attenuation. In the case of a low pass filter, the frequencies above the cut-off are attenuated. In the case of a high pass filter, the frequencies below the cut-off are attenuated. Put simply: with a low pass filter, we are trying to block the (higher) frequencies above a certain point and allow the lower frequencies through. With a high pass filter, the opposite is true: we block frequencies below a certain point and allow the higher frequencies through. On analogue synthesizers, the shape of this cut-off was called the slope or gradient, and the circuit itself was more accurately described as an RC (resistor/capacitor) filter.

Analogues use circuitry, and for that reason alone it takes time for the filter to attenuate frequencies, in proportion to the distance from the cut-off point. Today’s technology allows for instant cut-off, as the filter attenuation is determined by algorithms as opposed to circuits. That is why the filters of an Arp or Oscar etc are so much more expressive and warm: they rely completely on resistors and capacitors to first warm up, then to work in a gradual mode (gradual meaning sloped or curved, as opposed to instant). How well a filter attenuates, and the way it attenuates, gives us an idea of the type of sound we will achieve with an analogue filter. You often hear someone say ‘That Roland is warm, man’ or ‘Man, is that Arp punchy’. These are statements that describe how Roland’s filters sound or how potent the Arp’s filters are. The rate at which the filter attenuates is called the slope or gradient.

Another point to raise now is that you will often see values on the filter knobs of analogue synthesizers marked 12dB or 24dB per octave. That basically means that each time the frequency doubles (one octave), the filter attenuates the signal by a further 12dB or 24dB. These are also known as 2-pole or 4-pole filters: each pole represents 6dB of attenuation per octave. This reflects how analogue circuits were built, the number of poles being the number of circuit stages used by the filter to perform the task at hand.
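You can verify the 6dB-per-pole figure numerically. This Python sketch measures a first-order (one-pole) low-pass filter at frequencies an octave apart above the cut-off; the sample rate and cut-off are arbitrary example values:

```python
import numpy as np
from scipy import signal

fs = 48_000
b, a = signal.butter(1, 500, "lowpass", fs=fs)     # 1 pole = 6dB per octave

for freq in (2_000, 4_000, 8_000):                 # each an octave apart
    w, h = signal.freqz(b, a, worN=[freq], fs=fs)
    print(f"{freq} Hz: {20 * np.log10(abs(h[0])):.1f} dB")
# Each doubling of frequency shows roughly 6 dB more attenuation.
```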

If I had to delve into all the filters that Emu provide on their synthesis engines, it could run to pages. For now, I am keeping it simple and listing the standard filter types and what they do.

Low Pass-LPF

As mentioned earlier, this filter attenuates the frequencies above the cut-off point and lets the frequencies below the cut-off point through. In other words, it allows the lower frequencies through and blocks the higher frequencies above the cut-off (the frequency at which the filter begins to kick in). The low pass filter is one mutha of a filter. If you use it on a bass sound, it can give it more bottom and deeper tones. If used on a pad sound, you can have the filter open and close, or just sweep it, for that nice closing and opening effect. You can also use this filter cleverly to remove higher-frequency sounds or noise that you don’t want in your sound or mix. Because it blocks out frequencies above the cut-off you set, it’s a great tool if you want to remove hiss from a noisy sample or, used gently, tape or cassette hiss.

High Pass-HPF

This is the opposite of the low pass filter. This filter removes the frequencies below the cut-off and allows the frequencies above the cut-off through. Great for pad sounds: it gives them some top end and generally brightens the sound. It’s also really good on vocals, as it can give them more brightness, and you can use it on any recordings that have a low-frequency hum or rumble dirtying the sound. In this instance it is a more limited tool, as you could also cut out the lower frequencies of the sound itself, but it is still a tool with many uses.

Band Pass-BPF

This is a great filter. It attenuates frequencies below and above the cut-off and leaves the frequencies around the cut-off. It is, in effect, a low pass and a high pass together. The cool thing about this filter is that you can eliminate the lower and higher frequencies and be left with a band of frequencies that you can then use as an effect, as in having that real mid-range type of old radio sound, or use for isolating a narrow band of frequencies in recordings that have too much low and high end. Sure, it’s not really made for that, but the whole point of synthesis is to use tools, because that’s what they are: tools. Breaking rules is what real synthesis is all about. Try this filter on synthesizer sounds and you will come up with some wacky sounds. It really is a useful filter and, if you can run more than one at a time and select different cut-offs for each one, you will get even more interesting results.

Interestingly enough, bandpass filtering is used in the formant filters that you find on so many softsynths, plugins, synthesizers and samplers. Emu are known for some of their formant filters, and the technology is based around bandpass filters. It is also good for thinning out sounds and can be used on percussive sounds as well as for creating effects-type sounds. I often get emails from programmers wanting to know how they can get that old radio effect or telephone line chat effect, or even NASA space dialogue from space to Houston. Well, this is one of the tools. Use it and experiment. You will enjoy this one.

Band Reject Filter-BRF-also known as Notch

This is the exact opposite of the bandpass filter. It allows frequencies below and above the cut-off through and attenuates the frequencies around the cut-off point. Why is this good? Well, it eliminates a narrow band of frequencies, the frequencies around the cut-off, and that in itself is a great tool. You can use this on all sounds, and it can have a distinct effect, not only in terms of eliminating the frequencies that you want eliminated, but also in terms of creating a new flavour for a sound. But its real potency is in eliminating frequencies you don’t want. Because you select the cut-off point, in essence you are selecting the frequencies around that cut-off point and eliminating them. An invaluable tool when you want to home in on a band of frequencies located, for example, right in the middle of a sound or recording. I sometimes use a notch filter on drum sounds that have a muddy or heavy midsection, or on sounds that have a little noise or frequency clash in the middle of the sound.

Comb

The comb filter is quite a special filter. It derives its name from the fact that its response has a number of notches at regular intervals (set by the delay), so it looks like a comb. The comb filter differs from the other filter types because, rather than simply attenuating part of the signal, it adds a delayed version of the input signal to the output: basically a very short delay that can be controlled in length and feedback. These delays are so short that you only hear the effect rather than the delays themselves. The delay length is determined by the cut-off. The feedback depth is controlled by the resonance.
This filter is used to create a number of different types of effects, chorus and flange being two of the regulars. But the comb filter is more than that. It can be used to add some incredible dynamic textures to an existing sound. When we talk of combs, we have to mention the Waldorf synthesizers. They have some of the best comb filters, and the sounds they affect are so distinct: great for that funky metallic effect or sizzling bright textures.
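A minimal feedback comb filter sketch in Python, mapping delay length to the ‘cut-off’ and feedback depth to the ‘resonance’ described above; the parameter values are illustrative:

```python
import numpy as np

def comb_filter(x, delay_samples=40, feedback=0.7):
    """Feedback comb: output = input + feedback * delayed output."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + feedback * delayed
    return y

# White noise through the comb picks up a pitched, metallic ring around
# sample_rate / delay_samples Hz (44100 / 40 ~ 1100 Hz here).
noise = np.random.uniform(-1, 1, 44_100)
ringing = comb_filter(noise)
```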

Parametric

This is also called the swept EQ. This filter controls three parameters: frequency, bandwidth and gain. You select the range of frequencies you want to boost or cut, you select the width of that range, and you use the gain to boost or cut the frequencies within the selected bandwidth by a selected amount. The frequencies outside the bandwidth are not altered. If you widen the bandwidth to the limit of the upper and lower frequency ranges, then this is called shelving, and most parametric filters have shelving parameters. Parametric filters are great for more complex filtering jobs and can be used to create really dynamic effects, because they can attenuate or boost any range of frequencies.
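For the curious, here is a sketch of a single parametric (peaking) band as a biquad, following the widely circulated Audio EQ Cookbook formulas; the centre frequency, Q and gain are example values:

```python
import numpy as np
from scipy import signal

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Biquad coefficients (b, a) for one peaking EQ band."""
    a_lin = 10 ** (gain_db / 40)              # square root of linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

# Boost 6 dB around 1 kHz with a fairly narrow band, then check the response
b, a = peaking_eq(44_100, 1_000, 6.0, q=2.0)
w, h = signal.freqz(b, a, worN=[500, 1_000, 2_000], fs=44_100)
print(np.round(20 * np.log10(np.abs(h)), 1))   # ~ [small, 6.0, small]
```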

Well, I hope this has helped to demystify the confusing world of filters for you. Ignore the filters on your synthesizers, be they hardware or software, at your own peril, because they are truly powerful sound design functions. If you want a whole book dedicated to equalisation and filtering, then I suggest you have a look at EQ Uncovered (second edition). This book has received excellent reviews and is well worth exploring.

If you prefer the visual approach try this video tutorial:

Filters and Filtering – what are filters and how do they work

Most sampling enthusiasts usually sample a beat, audio piece or riff when they sample. Your sampler is so much more than that, and offers a wealth of tools that you probably never knew existed, as they are kept so quiet, away from the ‘in your face’ tools.

This tutorial aims to open your eyes to what you can actually achieve with a sampler, and how to utilise what you sample.

This final tutorial is the real fun finale. I will be nudging you to sample everything you can and try to show you what you can then do to the sample to make it usable in your music.

First off, let us look at the method.

Most people have a nightmare when it comes to multi-sampling. The obstacles everyone seems to be faced with are how to attain a consistent volume and note length (duration), and how many notes to sample.

The easy method that solves these questions in one hit is to create a sequence template in your sequencer. This entails having a series of notes drawn into the piano roll or grid edit of your sequencer. You can assign each and every note to be played at a velocity of 127 (maximum volume), give each note the exact same length (duration), and have the sequencer play every note, or any number of notes you want. The beauty of this method is that you will always be triggering samples at the same level and duration. This makes the task of looping and sample placing much easier. You can save this sequence and call it up every time you want to sample.

Of course, this only works if you have a sequencer and you are multi-sampling, but for sampling a source that can be triggered directly, as in the case of a synth keyboard, it is extremely useful.
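One way to build such a template programmatically is sketched below using the Python mido library. The note range (every third semitone), the whole-bar duration and the note-0 = C-1 numbering convention are my assumptions:

```python
import mido

TICKS_PER_BEAT = 480
NOTE_LEN = TICKS_PER_BEAT * 4        # hold each note for a whole bar
GAP = TICKS_PER_BEAT                 # one beat of silence between notes

mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

for note in range(24, 97, 3):        # notes 24..96: C1 to C7 (note 0 = C-1)
    track.append(mido.Message("note_on", note=note, velocity=127, time=GAP))
    track.append(mido.Message("note_off", note=note, velocity=0, time=NOTE_LEN))

mid.save("multisample_template.mid")
```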

Creative Sampling

The first weapon in creative sampling is the ‘change pitch’ tool. Changing the pitch of a sample is not just about slowing down a Drum and Bass loop until it becomes a Hip Hop loop (a little tip there that some people are unaware of). It is about taking a normal sound, sampling it, then pitching it right down, or up, to achieve a specific effect.

Let us take a little trip down the ‘pitch lane’.

You can achieve the pitch-down effect by using the change pitch tool in your sampler, assigning the sample to C4 and then using the C1 note as the pitched-down note, or use time stretch/compress to maintain the pitch but slow down or speed up the sample. There is a crucial distinction here. Slowing down a sample has a dramatic effect on the pitch and works great for slowing fast-tempo beats down to achieve a slower beat, but there comes a point where the audio quality starts to suffer, and you have to be aware of this when slowing a sample down. The same is true for speeding a sample up: speed up a vocal sample and you end up with squeaky vocals.

Time stretching/compressing is a function that allows the length of a sample to be changed without affecting the original pitch. This is great for vocals. Vocals sung for a track at 90 BPM can then be used in a track at 120 BPM without having to change the pitch. Of course, this function is only as good as the software or hardware driving it: the better the stretching/compressing software/hardware, the better the result. Too much stretching/compressing can lead to side effects and, in some cases, that is exactly what is required. A flanging type of robotic effect can be achieved with extreme stretching/compressing. Very funky.
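Both operations can be sketched with the librosa library in Python (assuming it is installed and that a file called vocal.wav exists; the file name and BPM values are illustrative):

```python
import librosa
import soundfile as sf

y, sr = librosa.load("vocal.wav", sr=None)         # keep the original rate

# Vocal recorded at 90 BPM, needed at 120 BPM: stretch rate = 120 / 90
faster = librosa.effects.time_stretch(y, rate=120 / 90)

# The classic 'pitch down' instead: drop an octave, length stays the same
octave_down = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)

sf.write("vocal_120bpm.wav", faster, sr)
sf.write("vocal_octave_down.wav", octave_down, sr)
```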

A crucial function to bear in mind, and always perform, is that when you pitch a sample down, you then need to adjust the sample start time. Actually, this is a secret weapon that programmers and sound designers use to find the exact start point of a sample: they pitch the sample right down, which makes it much easier to locate the start point. You will often find that a sample pitched down a lot will need to have its start time cropped, as there will be dead air present. This is normal, so don’t let it worry you. Simply check your sample start times every time you perform a pitch down.

Here are a few funky things to sample.

Crunching, flicking, hitting paper

Slowly crunch a piece of paper, preferably a thicker crispier type of paper, and then sample it. Once you have sampled it, slow it right down and listen to the sample. It will sound like thunderclaps. If you are really clever you can listen to the sample as you slow it down, in stages, until you hear what sounds like a scratch effect, before it starts to sound like thunderclaps. SCSI dump the samples into your computer, use Recycle or similar, and dump the end result back into your sampler as chopped segments of the original sample (please read ‘chopping samples’ and ‘Recycle tutorial’).

Big sheets of paper being shaken or flicked from behind can be turned into thunderous noises by pitching down, turning up and routing through big reverbs.

Spoon on glass

There are two funky ways to do this. The first is with the glass empty. Use an empty glass, preferably a wine glass, and gently hit it with a spoon. Hit different areas of the glass, as this generates different tones. You can then slow these samples down till you have bell sounds, or keep them as they are and add reverb and eq to give tine-type sounds.

The second way of doing this is to add water to the glass. This will deaden the sound and the sample will sound a lot more percussive. These samples make for great effects.

Lighting a match

Very cool. Light a match, sample it and slow it down. You will get a burst effect or, being clever, use the attack of the match being lit sample and you will get a great snare sound, dirty and snappy.

Tennis ball against wood

Man, this is a very cool one. Pitch these samples down for kick and tom effects. You can get some really heavy kicks out of this sample. Actually, a ball hitting woody types of surfaces makes for great percussive sounds.

Finger clicking

Trim the tail off the sample and use the attack and body of the sample. You now have a stick or snare sound. Pitch it down and you will have a deep tom burst type of effect. Or, use the sample of the finger click, cut it into two segments, the first being the attack and the body, the second being the tail end. Layer them together and you have a snare with a reverse type of effect.

Hitting a radiator with a knife

Great for percussive sounds. Pitched down, you get percussive bells, as opposed to bells with long sustain and releases. Also, if you only take the attack of this sample, you will have a great snare sound.

Kitchen utensils

These are the foundation for your industrial sounds. Use everything. First, drop them all on a hard surface, together. Sample that, slow it down a bit, and you will have factory types of sounds. Second, drop each utensil on a hard surface and sample them individually. They make for great bell and percussive sounds. Scrape them together and sample that. Slowed down, they will give you great eerie industrial sounds and film sound effects. Metallic sounds, once pitched down, give more interesting undertones, so experiment.

Hitting a mattress with a piece of wood

This will give a deep muffled sound that has a strong attack. This makes for a great kick or snare. Slowed right down, you will achieve the Trancey type of deep kick.

Blowing into bottles

This gives a nice flute type of sound. Pitched down, you will get a type of foghorn sound. Blow into it short and hard and use the attack and body, you will achieve a crazy deep effect when pitched down.

Slamming doors

Slam away and sample. Thunderous sounds when pitched down. The attacks of the samples make for some great kicks and snares.

Aerosol cans

Great for wind and hi-hats. Slowed down, you will achieve wind-type sounds. Pitched up, you get cabasa-type sounds. Run through an effect and pitched higher, you will achieve a hi-hat type of sound.

Golf ball being thrown at a wall

A snare sample that is great in every respect. Kept as is, you get a cool snare. Pitched up and you get a snappier snare. Pitched down, you get a deep tom, kick or ethnic drum sound.

Toys

Sample toys, preferably the mechanical and robotic ones. The number of sample variations you will get will be staggering. These mechanical samples, once pitched down, make for great industrial sounds. Pitched up, they can make some great Star Wars types of sounds. Simply chopped up as they are, they make for great hits, slams and so on.

Factories and railway stations

Take your recorder and sample these types of locations. It is quite amazing what you will find and once manipulated, the samples can be so inspiring.

Toilets, sinks, and bathtubs.

Such fun. Water coming out of a tap pitched down can be white water. Water dripping can be used in so many ways. Splashing sounds can be amazing when pitched up or down. Dropping the soap in a full bath and hitting the sidewalls of the bathtub when empty or even full, can create some of the best percussive sounds imaginable.

Radio

Sample your radio, assuming it has a dial. The sounds of searching for stations can give you an arsenal of crazy sounds. Pitched down you will get factory drones, swirling electric effects and weird electro tom sounds. The sound palette is endless.

I think you get the picture by now. Sample everything and store it away. Create a library of your samples. Categorise them, so that they are easy to locate in the future.

Now let us look at what you can do to samples to make them interesting.

Reverse is the most obvious and potent tool. Take a piece of conversation between a man and a woman, sample it and reverse it and, hey presto, you have the Exorcist.

Layer a drum loop with the reversed version of the loop and check it out. Cool.

Pitch the reversed segment down a semitone or two to create a pseudo-Doppler effect.

With stereo samples of ambient or melodic sounds, try reversing one channel for a more unusual stereo image. You can also play around with panning here, alternating and cross-fading one for the other.

Try sampling at the lowest bandwidth your sampler offers for that crunchy, filthy loop. This is lo-fi land. Saves you buying an SP1200..he..he.

Try deliberately sampling at too low a level, then using the normalising function repeatedly to pump the volume back up again. This will add so much noise and rubbish to your sample that it will become dirty in a funky way.

You can take a drum loop and normalise it continually till it clips heavily. Now Recycle the segments, dump them back into your sampler, and you have dirty, filthy, crispy Hip Hop cuts.

A sample doubles its speed when it’s transposed up an octave. So try triggering two versions of a sampled loop an octave apart, at the same time. With a percussive loop, you’ll get a percussion loop running over the top of the original.

Use effects on a loop, record it to cassette for that hissy flavour, then, resample it. Recycle the whole lot and drop the segments back into your sampler and you have instant effects that you can play in any order.

Layer and cross-fade pad samples so that one evolves/morphs into another.

Take a loop and reverse it. Add the reversed loop at the end of the original loop for some weirdness.

Multi-triggering a loop at close intervals will give you a chorus or flange type of effect. Try it. Have the same loop on 3 notes of your keyboard and hit each note a split second after the other. There you go.

I could go on for pages but will leave you to explore and enjoy the endless possibilities of sampling and sound design.

Additional content:

Preparing and Optimising Audio for Mixing

Normalisation – What it is and how to use it

Topping and Tailing Ripped Beats – Truncating and Normalising

Noise Gate does exactly what it sounds like.

It acts as a gate that opens when a threshold is reached and then closes at a speed depending on the release you set, basically acting as an on-off switch.
It reduces gain when the input level falls below the set threshold: when an instrument or audio stops playing, or reaches a gap where the level drops, the noise gate kicks in and reduces the volume of the file.

Generally speaking, noise gates will have the following controls:

Threshold: the gate will ‘open’ once the threshold has been reached. The threshold will have varying ranges (eg: -60dB to infinity) and is represented in dB (decibels). Once the threshold has been set, the gate will open the instant the threshold is reached.

Attack: this determines the speed at which the gate opens, much like a compressor’s attack, and is usually measured in ms (milliseconds) and subdivisions thereof. This is a useful feature, as the speed of the gate’s attack can completely change the tonal colour of a sound once gated.

Hold: this function allows the gate to stay open (or hold) for the specified duration, and is measured in ms and seconds. Very useful, particularly when passages of audio need to be ‘let through’.

Decay or release: this function determines how quickly the gate closes, and whether it closes instantly or gradually over time. A crucial feature, as not all sounds have an abrupt end (think pads etc).

Side Chaining (Key Input): Some gates (in fact most) will also have a side-chain function that allows an external audio signal to control the gate’s settings.

When the side-chained signal exceeds the threshold, a control signal is generated to open the gate at a rate set by the attack control. When the signal falls below the threshold, the gate closes according to the settings of the hold and release controls. Clever uses for key input (side-chaining) are ducking and the repeated gate effects used in Dance genres. The repeated gate effect (or stuttering) is attained by key-inputting a hi-hat pattern to trigger the gate to open and close. By using a pad sound and the hi-hat key input pattern, you are able to achieve the famous stuttering effect used so much in Dance music.

Ducking: Some gates will include a ‘Ducking’ mode, whereby one signal drops in level when another one starts or is playing. The controlling signal is sent to the key input (side-chain), and the gate’s attack and release times set the rate at which the level of the main signal changes in response to the key input signal. A popular use for ducking is in the broadcasting industry, where the DJ needs the music to go quiet so he/she can be heard when speaking (with the voice at the key input, the music drops in volume whenever the voice triggers the gate).

However, side-chaining (key input) and ducking are not all the gate is good for.

The most common use for a gate, certainly in the old days of analog consoles and tape machines, was to remove ‘noise’. By selecting a threshold just above the noise level, the gate would open to allow audio above the threshold through, and then close when required. This meant that certain frequencies and levels of noise were ‘gated’ out of the audio passage, leaving it cleaner.
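A toy version of that classic noise-removal gate in Python: an envelope opens above the threshold and closes below it, with attack and release smoothing. The threshold and time constants are arbitrary example values:

```python
import numpy as np

def noise_gate(x, sr, threshold_db=-40.0, attack_ms=1.0, release_ms=100.0):
    """Open above the threshold, close below it, with smoothed gain moves."""
    threshold = 10 ** (threshold_db / 20)
    attack = np.exp(-1.0 / (sr * attack_ms / 1000))
    release = np.exp(-1.0 / (sr * release_ms / 1000))
    gain, out = 0.0, np.zeros(len(x))
    for n, sample in enumerate(x):
        target = 1.0 if abs(sample) > threshold else 0.0   # open or closed?
        coeff = attack if target > gain else release       # smooth the move
        gain = coeff * gain + (1 - coeff) * target
        out[n] = sample * gain
    return out

sr = 44_100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t) * (t < 0.5)      # a tone, then silence
noisy = tone + 0.002 * np.random.randn(sr)          # low-level hiss throughout
gated = noise_gate(noisy, sr)
print(np.abs(gated[int(0.8 * sr):]).max())          # the trailing hiss is gone
```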

BUT it doesn’t end there. There are so many uses for a noise gate: using an EQ unit as the key input for shaping audio lines and curing false triggers, ducking in commentary situations (still used today), creative sonic mangling tasks (much like the repeat gate), and so on.

With today’s software-based gates we are afforded a ton of new and interesting features that make the gate more than a simple ‘noise’ gate.

Experiment and enjoy chaining effects and dynamics in series and make sure to throw a gate in there somewhere for some manic textures.

If you prefer the visual approach then try this video tutorial:

Noise Gate – What is it and how does it work

Normalisation is a digital signal processing function that’s available in a lot of digital audio editing software. It scans through the program material for the highest level (peak value), and if that level doesn’t reach the maximum available dynamic range, the software boosts the overall signal so that the peak hits the highest level possible. For example, suppose you record a track of music and the highest peak registers 6dB below the maximum available headroom (in this case 0). Normalisation (to a 0 ceiling) brings the entire track up by 6dB. (Incidentally, most normalisation functions allow normalising to some percentage of the maximum available level; it needn’t always be 100%.) There are a couple of problems though:

• Because normalisation boosts the entire signal, the noise floor comes up as well.

• Excessive use of amplitude-changing audio processes such as normalisation on linear, non-floating-point digital systems can cause so-called ’round-off errors’ that, if allowed to accumulate, impart a ‘fuzzy’ quality to your sound. If you’re going to normalise, it should be the very last process — don’t normalise, then add EQ, then change the overall level, and then re-normalise, for example.

If you need to normalise, then think carefully about whether you will use Peak or RMS (average level).

RMS (Root Mean Square) is an averaging process. The selected audio waveform is analysed and its average level is calculated: the square root of the mean of the squared sample values. This average then acts as the anchor for the normalising value. In other words, the signal is raised relative to its overall average level rather than any single peak.

Peak: The selected audio is analysed and the highest peak value acts as the new anchor reference. All processing works from this peak value and, when normalising, it is used to raise the gain to the desired limit.

I tend to find that RMS works best on long audio files that have varying peaks and troughs, while Peak tends to work well on single-shot samples, much like drum hits. Using RMS normalisation, you will often find that the processed audio sounds thicker and heavier, whereas Peak will retain the original envelope, just louder.
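A short NumPy sketch of the two approaches; the target levels are example values, and note that RMS normalisation offers no protection against clipping:

```python
import numpy as np

def normalise_peak(x, target_db=0.0):
    """Scale so the highest absolute sample hits the target peak level."""
    return x * (10 ** (target_db / 20) / np.max(np.abs(x)))

def normalise_rms(x, target_db=-18.0):
    """Scale so the average (root mean square) level hits the target."""
    return x * (10 ** (target_db / 20) / np.sqrt(np.mean(x ** 2)))

x = 0.25 * np.sin(2 * np.pi * np.linspace(0, 20, 44_100))
print(np.max(np.abs(normalise_peak(x))))            # 1.0
print(np.sqrt(np.mean(normalise_rms(x) ** 2)))      # ~0.126, ie -18 dB
```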

If you prefer the visual approach then give this video tutorial a try:

Normalisation – what it is and how to use it

In essence, noise is a randomly changing, chaotic signal, containing an endless number of sine waves of all possible frequencies with different amplitudes. However, the randomness always has specific statistical properties, and these give the noise its specific character or timbre.

If the sine waves’ amplitude is uniform, which means every frequency has the same volume, the noise sounds very bright. This type of noise is called white noise.

White noise is a signal with constant energy per Hz of bandwidth (a flat amplitude-frequency distribution), and so has a flat frequency response; because of these properties, white noise is well suited to testing audio equipment. The human hearing system’s frequency response is not linear but logarithmic. In other words, we judge pitch increases by octaves, not by equal increments of frequency; each successive octave spans twice as many Hertz as the one below it. This means that when we listen to white noise, it appears to us to increase in level by 3dB per octave.

If the level of the sine waves decreases with a slope of about -3dB per octave as their frequencies rise, the noise sounds much warmer. This is called pink noise.

Pink noise contains equal energy per octave (or per 1/3 octave). The power follows the function 1/f, which corresponds to the level falling by 3dB per octave. These attributes lend themselves perfectly to acoustic measurements.

If it decreases with a slope of about -6dB per octave, we call it brown noise.

Brown noise, whose name is actually derived from Brownian motion, is similar to pink noise except that the power function is 1/(f squared). This produces a 6dB-per-octave attenuation.

Blue noise is essentially the inverse of pink noise, with its level increasing by 3dB per octave (the power is proportional to f).

Violet noise is the inverse of brown noise, with a rising response of 6dB per octave (the power is proportional to f squared).
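All of these colours can be sketched by tilting the spectrum of white noise. In this NumPy example the amplitude spectrum is scaled by f to a chosen exponent (so the power goes as f to twice that exponent), which is one common way to do it:

```python
import numpy as np

def coloured_noise(n, exponent):
    """Shape white noise so its amplitude spectrum goes as f**exponent."""
    spectrum = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                     # avoid dividing by zero at DC
    spectrum *= freqs ** exponent           # tilt the spectrum
    out = np.fft.irfft(spectrum, n)
    return out / np.max(np.abs(out))        # normalise to +/- 1

pink = coloured_noise(44_100, -0.5)    # power ~ 1/f: -3dB per octave
brown = coloured_noise(44_100, -1.0)   # power ~ 1/f^2: -6dB per octave
blue = coloured_noise(44_100, 0.5)     # power ~ f: +3dB per octave
```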

So we have all these funky names for noise, and now you understand their characteristics, but what are they used for?

White noise is used in the synthesizing of hi-hats, crashes, cymbals etc, and is even used to test certain generators.

Pink noise is great for synthesizing ocean waves and the warmer type of ethereal pads.

Brown noise is cool for synthesizing thunderous sounds and deep, bursting claps. Of course, they can all be used in varying ways to attain different textures and results, but the idea is simply for you to get an idea of what they ‘sound’ like.

At the end of the day, it all boils down to maths and physics.

 

Here is an article I wrote for Sound On Sound magazine on how to use Pink noise referencing for mixing.

And here is the link to the video I created on master bus mixing with Pink noise.

And here is another video tutorial on how to use ripped profiles and Pink noise to mix.

Jitter is the timing variation in the sample rate clock of the digital process. It would be wonderful to believe that a sample rate of 44.1 kHz is an exact science, whereby the process samples at exactly 44,100 cycles per second. Unfortunately, this isn’t always the case. The speed at which this process takes place falters and varies, and we get a ‘wobbling’ of the clock as it tries to keep up at these frequencies. This is called jitter. Jitter can cause all sorts of problems, and the simplest way to think of it is: the lower the jitter, the better the audio representation. This is why we often use better clocks and slave our sound cards to them, to eradicate or diminish jitter and the effects caused by it.

Jitter is a variation in the timing of the sampling instants (it is time-based) when the audio is converted to or from the digital domain. If the conversion process suffers from any timing anomaly, then the resulting signal amplitude will differ from its true value. The usual side effects are an increase in high-frequency noise, clicks and, in the worst case, muted or non-working audio. In simple terms, the clicks are caused when one of the digital devices searches for an incoming audio ‘sample’ but fails to find it, as it is looking at the wrong time ‘frame’ (instant). Apart from these anomalies, the real-world audio effect is that the stereo imaging is compromised, leading to a flat stereo image as opposed to one with depth and width.

Jitter affects the stability of the sample clock: the lower the jitter figure, the more stable the clock and the better the performance.
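A toy numerical illustration of why lower jitter means better audio: sample a high-frequency sine at perfect instants and at jittered instants, and compare. The 2 ns jitter figure is an arbitrary example value:

```python
import numpy as np

fs = 44_100
n = np.arange(fs)
jitter_std = 2e-9                        # 2 ns of RMS timing error (example)

ideal_t = n / fs
jittered_t = ideal_t + np.random.randn(fs) * jitter_std

f = 10_000                               # high frequencies suffer the most
error = np.sin(2 * np.pi * f * jittered_t) - np.sin(2 * np.pi * f * ideal_t)
print(f"error floor: {20 * np.log10(np.std(error)):.1f} dBFS")
# More jitter, or a higher signal frequency, raises this noise floor.
```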

When using more than one digital device, it is best to synchronize the source and destination devices using clock synchronization.

Most of today’s digital systems will have an embedded clock at source that can then be used to synchronize the two devices. In more sophisticated systems, like DAWs, digital consoles, higher-end sound cards and so on, there will be some form of control panel in which the desired clock source can be selected. The most common selections available are digital input, external word clock and the internal clock. The selection comes down to system configuration and project choice. However, what is a given is that all digital devices must be synchronized.

Using the internal clock ensures stability, as the clock rate is known, but in this case all devices must be synchronized to the internal clock’s rate. Alternatively, and a common choice amongst most studios, is to use a dedicated external clock. This affords a universal, global rate that all devices can be synchronized to; more importantly, a dedicated master clock has one function, and that can often alleviate system configuration problems. The only problem that arises from this scenario is that most consumer systems do not accommodate slaving to external clocks, and the internal clock will have to be the master clock source.

At the end of the day, it comes down to knowledge and experience and ignoring the benefits of a good clock source in a digitally configured system is the equivalent of running top-end processors through a Radio Shack budget 2 channel DJ mixer.