We now need to work on the Super Chunks.

These fit into the 2 main corners of the room, directly in front of and to the side of the bay area, and will be made from the damn Rockwool. The Chunks will be fixed from floor to ceiling and will cover the whole corner areas.
As we bought exactly enough materials for this job, it is important to note that Rockwool is no longer available in 1 metre x 1 metre dimensions in the UK, so we had to use some clever geometry, or rather Max did, to cut the full-length pieces from 1 metre x 60 cm slabs. This meant we either had to lose the offcuts from the corner shape cuts, or use them. We chose to use them. So here is a diagram showing how to cut the Rockwool slabs to make the triangular corner Chunks.
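While Max's cutting diagram isn't reproduced here, the back-of-an-envelope arithmetic behind the classic super-chunk design (right-isosceles triangles stacked floor to ceiling) can be sketched in a few lines. All the figures below are illustrative assumptions, not values taken from the diagram:

```python
# All figures are illustrative assumptions: 60 cm triangle legs cut from
# 1 m x 60 cm x 10 cm slabs, and a 2.4 m floor-to-ceiling height.
slab_thickness_cm = 10
ceiling_cm = 240
corners = 2

# Each 60 x 60 cm square cut from a slab splits along its diagonal into
# two right-isosceles triangles; one triangle makes one layer of a chunk.
layers_per_corner = ceiling_cm // slab_thickness_cm   # 24 layers
triangles_needed = layers_per_corner * corners        # 48 triangles
slabs_needed = triangles_needed // 2                  # one slab -> 2 triangles

print(layers_per_corner, triangles_needed, slabs_needed)  # 24 48 24
```

Swap in your own ceiling height and slab sizes; the point is simply that the diagonal cut turns each offcut-prone square into two usable chunk layers.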


Acoustic Foam

So, we now know how to build a bass trap Panel.

Now the time has come to sort the foam out.

The hardest part of working with acoustic foam is handling it. Man, that stuff is so delicate that a single fingernail scratch can cost you a bundle. You think I’m kidding, huh?

Acoustic foam is so light and fragile that we used Silicone Sealant to ‘glue’ the foam onto the ceiling. Why Silicone Sealant? Because you can scrape it off when you want to move the foams and leave no marks on your ceiling.
We also used the Silicone to ‘glue’ the foam onto the MDF panels for suspending from the ceiling. In fact, we used Silicone for all the foam fixing, be it directly onto the ceiling, walls or panels.
You can get Silicone Sealant from any hardware or DIY shop. Any standard sealant will do. You don’t need any fancy stuff. You can have it white or translucent, whichever suits you.

First off, we are going to look at the foam panels that will be suspended from the ceiling.

These are the 4 Sheets of Melamine Blue Foam. When taking these out of the packaging, you will find that they come in twos, one on top of the other.
DO NOT make the mistake of taking them apart, because if you do, it will take you many moons to get them back together again. As we are using them doubled up, there is no need to separate them.

Take out the foam from the box, making sure not to separate them from each other. Lay them onto the table and make sure they are aligned. Use the Silicone Sealant and seal the whole perimeter of the foam, from edges to dividing lines.
Use one of the vented MDF panels and put it on the foam, making sure to align and match the edges. It’s that simple. Now do this with the other one too.

Place the unused 2 ply on top of the first completed foam panel, and then place the second completed foam panel on top and leave to dry. Putting one on top of the other allows for a firmer seal and also prevents the panels from moving. A bit cool that huh?

Ok, let’s move on.

We used 2 Boxes of Melamine Procorner on the ceiling, wardrobe and all bay area edges. These were cut and joined to form a double-sided and semi (sort of) circular beefy foam.

Take the Procorners out of the packaging CAREFULLY. This is Box 1. Spray the sides on both Procorners, but make sure you spray the correct sides. If you are unsure, then line them up as in the next picture. The adhesive must be applied to both Procorners, as it is the type that bonds when two adhesive-coated sides meet.
Align the two Procorners so that they form the shape above. Leave to dry.

Finally, use the Silicone Sealant again and seal all around the Procorner where it meets the 2 plyboards. Be sure to leave this to dry, then pick up the Procorner to make sure it is stuck to the 2 plyboards. Remember, we are not using glue, just Sealant. As the foam is so light, the Sealant should hold the foam onto the 2 plyboards. It had better, because I have 2 of these just above my damn head.

These ‘boarded’ Procorners will then be fixed to the ceiling using screws through the 2 ply; this is why we cut the 2 ply to a larger size than the foam, so as to allow for drilling etc. Finally, as I have done, paint the boards white so they match the ceiling, unless you are a hippy and use funky colours in your house.

Now let’s get manly with the bay area edges, wardrobe and wardrobe corners.

To create the necessary shapes to fit the above, we had to cut and shape the Procorners…well, Max did.

The following needs to be done before you can start blading the foam. You need to measure the dimensions of where you want the corner foams to go. In my case, I had to make sure that the corner foams were at the edge of the bay area, joining both the ceiling and wardrobe wall. Your room might be shaped differently, so make sure you have your measurements correct before you start any cutting and Sealing.

Take a Procorner and cut to shape and size. In this instance, Max the Myth, created 2 Procorners, one for each corner above the bay area.
Because we have coving in the UK, the Procorner had to be shaped to accommodate the shape between the ceiling, corner wall, and the wardrobe.

Procorner shaped and cut to fit the corner.

Now we need to fix foam along the perimeter of the wardrobe where it meets the ceiling. We also need to fix foam within the wardrobe and mirror what we are doing on the outside.

We start by using the Procorners inside the wardrobe. These need to be sealed on the top and sides that meet the inner walls of the wardrobe and ceiling in the wardrobe. Hold till the Sealant dries. As the wardrobe is in sections, we have to cut the foam to shape and fix and hold into the wardrobe, all along its length.
Once the foam is in place, the outside of the wardrobe has to be addressed. The corner Procorner that was made earlier is now fixed into the corner.
As we did earlier, Seal the edges of the Procorner that will meet the ceiling and wardrobe wall. Fix into place. Place the foam gently into place making sure it is aligned properly.
As you can see from the picture, both the external and internal foams meet and are matched.
Keep going along the outer wardrobe wall and seal with Sealant. Make sure to align the inner and outer. Keep going till you reach the end of the wall.
Once you reach the end of the wall, use another Procorner, cut to shape to form the join that goes around the wall onto the next wall. This corner is now complete.
The wardrobe foam is now complete, both internally and externally. Now you need to fix the foam all along the bay area edge where the wall and ceiling meet. Keep fixing the foam onto the ceiling and bay area edge, using the Sealant as the adhesive. Keep these aligned and keep going until you end up at the corner of the bay area. This is where the next corner Procorner will sit.
Fix the last corner Procorner at the far end of the foam on the bay area ceiling and wall. This will be symmetrical along the bay area.

In part 3 we will look at the Super Chunks!

Part 3


It all began 6 months ago.

I decided it was time to move house, office and studio, all in one hit. No one will ever know why I made this decision. In fact, after some serious rehab, I came to understand that people, normal people, do not make these kinds of decisions without thinking things through. But then if I was normal, I would be an accountant or football player instead of being a topless male car washer dude in my spare time. So, after another night at the BDMA (bad decision makers anonymous), I decided to bite the bullet, tense my buttocks and get on with it.

6 months later and I live in a lovely rural pad with birds n’ shit chirping away and where people say hello, instead of blading your ass for your mobile like in the big city. What does all this dithering have to do with this tutorial? Jack. It’s a way of bonding. So, let’s bond a touch……not too much, just a touch.

Ok, so house almost complete and the only thing left is to build the office/home studio. When I say ‘build’, I actually mean ‘convert an existing bedroom into a studio but tell people you are building a studio because it sounds a lot cooler than saying I’m working from a bedroom’. When converting a bedroom into a studio, certain criteria have to be met and the most important of these is ‘make sure people sleeping in the bedroom have found alternative accommodation’. Get this right and you’re on your way.


I am often quite amazed at how little regard home studio owners give to the environment they work in. Surely, as a producer/engineer/artist, you want to be able to hear both clearly and accurately? Otherwise, your mixes will never sound right and you will struggle for ages to cross that line whereby your mixes will sound true and good on all systems. I cannot stress how important it is to have a properly treated listening environment.

I often come across people in this industry that do not think twice about dropping a couple of gees on a synth/workstation and yet will not spend a penny towards optimising the listening environment that they record and mix in. So, I got to thinking. Ok, these people won’t spend a penny beyond the ‘egg carton solution’, so why not write a DIY tutorial on home/studio acoustics and keep it below £1000.

Yes people! You read correctly. Treat your room with proper acoustical material and keep it all under a grand. To add weight to this tutorial, I chose the hard option. I decided to personally go through this tutorial hands-on, designing, building and applying the damn acoustics myself and I was only able to achieve this with the help of the Acoustic Guru Max Hodges (more later). Needless to say, my wife left me at this point. I also lost a lot of friends due to a lack of socialising. I am very alone now.

As with all DIY projects, one needs an able and helpful friend, preferably one with good acoustic treatment knowledge, a sense of humour and enough strength that I don’t have to carry the damn Rockwool myself. Enter the one known as Max Hodges, also known as Max the Magnificent, Max the Marvel and Miraculous Max. Max Hodges is an expert when it comes to studio builds, both in terms of design and project management. Max’s specialties include Acoustics and Sound Proofing (or Sound Treatment), full consultancy services, studio wiring and installs, and technology training. He is also a big bastard, so very useful to have when debts need collecting.

I would like to take this opportunity to thank Max for all his help and guidance during this project. I am hoping that by thanking him in public I can forego the dinner I owe him for all the help he has provided.

The Room

Before we go headlong into this project, and well before we start on materials required blah blah, we need to look at the room’s dimensions and shape. In the UK most 1930s houses have bay windows on the ground floor, and to add to this ‘shape issue’, the ceilings are usually ‘coved’. This means you have a non-square, non-rectangular room, with coving where the ceiling meets the walls. The pictures that accompany this tutorial clearly show these characteristics.

I have included ‘before’ and ‘after’ photos to show the state of the room prior to plastering etc. I have done this so as to raise a modicum of compassion in you, and so you understand to NEVER EVER be at home when building work is being done.



Bill the Plasterer

If you look at the first picture, you can see the coving on the top corner where the ceiling meets the walls. This can be a real headache when it comes to putting in corner traps etc, as you need to shape the foam or Rockwool to accommodate the shape of the coving. The second picture shows the state of the bay area. A bay is pretty much what it says: a semi-circular recess with windows encircling it and facing outwards from the room.

The third picture shows Bill the Plasterer. The scratches on the walls behind him are the result of his reaction after I told him what I was going to pay him. This has, of course, no damn bearing on this tutorial.

The shape of the room is the most crucial aspect of any acoustic treatment project. An irregularly shaped room would be a nightmare to tame, but equally important is the fact that the room must be evenly balanced to provide a true stereo image. There is no point in sitting against a wall with your speakers at head height and hoping to get a natural stereo field if the room is not shaped and balanced correctly.

In the case of my room, I was faced with a bay area with windows running across the length of the bay, and a built-in wardrobe on one sidewall. This basically meant that I had to sit in the bay area and try to balance both sides of the room on either side of the bay. This all sounds good as bay areas often provide natural bass trapping, and hell, one cupboard? Let’s get those doors off! This sounded so easy…but it’s not just cupboard doors that need to come off. Shelves, rails, and wood, basically anything that impedes the sound travel or could resonate. The goal of this ‘butchering’ was to attain a central position for me to sit in with equal space on either side of me so that the sound would be stereo balanced. If I left more space on one side, the sound would be imbalanced. We are not just trying to achieve a well-treated acoustical environment but a balanced one as well. This is a mistake that so many home studio owners make. They cram themselves in one corner or side of the room and then wonder why there is bias to either channel in their mixes.

Once Max had had a good look and measured all the room’s dimensions, he came up with this diagram:

Ok, so a little explanation as to what is going on up there. This is an aerial plan, i.e. a bird’s eye view looking down on my ethnic ass. I am sitting right in front of the bay area and the ‘butchered’ cupboard is on my right (left as you look at the pic). Behind me is a wall with a door to its right. This wall is actually a chimney flue, where a fireplace once stood in grand and opulent fashion. The wall extends forward, which has given that particular area a number of corners where the ‘flue’ meets the back wall of the room. On the right of the above picture you can see that Max has labelled the materials and their respective dimensions.

The Materials and What They Do

I am not going to go into what acoustic treatment is, how a room behaves, nodes, low end, blah blah. There are countless resources on the net that cover all manner of theory and application.
Nope, this tutorial is simply a DIY project for those that want to improve their home studio environment. I will, however, list the materials and state what they are used for. All foam materials came from RPG. These guys make top quality foam. Of course, there are many others, and some are quite cheap, particularly from US manufacturers. But I like RPG stuff: top quality and design.

RPG material:

  • 4 Sheets of Melamine Blue Foam
  • 2 Boxes of Melamine Procorner
  • 2 Corner Blocks (300 x 300 x 300)

These will be used for acoustic absorption and diffusion tasks. These are highlighted as dark and light grey in the diagram.

We also bought:

  • 10 Tubes of Silicone Sealant to use for ‘gluing’ the foam onto the ceiling etc.
  • 2 x 2 ply boards (use preferably 6-9 mm in thickness), and 2 vented MDF boards.
  • 1 Can of Flooring Adhesive Spray.
  • 3 packs of chicken wire.
  • 4 pack of Hedex Extra Strength

Now the nasty stuff:

Rockwool :

45 kg/m³. These come in packs of 1 metre x 60 cm x 10 cm, 2 slabs to a pack. We bought 12 packs.

The general consensus is to use the standard 60 kg/m³, but we needed the double layer packs and decided to use the 45 kg/m³. You can use either, depending on your requirements.
The Rockwool will be used for all the Bass Traps, the Super Chunk corner traps and for further absorption and Limp Mass panels for the facing wall of the room.


Materials required for building each panel (there are 8 panels to build):

  • 2 x 2400 x 19 x 100 (typically 98) planed softwood battens
  • 2 x 2400 primed white MDF battens
  • 4 Triangular corner brackets
  • 4 Right angle corner brackets
  • 16 x 12mm wood screws, sized to match the brackets
  • 3600 x 600 of Chicken wire.
  • 100+ Heavy Duty Staples.
  • 600 x 50 x 19 batten
  • 4000 x 1000 wrapping material. Beige raw cotton is what I used.
  • Garden Gloves
  • Wood Glue
  • Plant sprayer loaded with a 10:1 water-to-PVA mix and 1 drop of washing up liquid.

This is used to apply a film of PVA to all the Rockwool, both for health reasons and for fixing it in place. We use the PVA solution to bond the external fibres. Rockwool is a nasty, fibrous, airborne material that breaks and frays easily. The PVA keeps stray strands in place. BTW, if you touch Rockwool with bare skin, then expect to be itching for a few years. If you accidentally touch the damn thing, then immediately wash your skin with cold water and refrain from any sexual activities until the itching has gone away. Trust me on this. I couldn’t wear boxer shorts for a week. That is, of course, another story for another day.

These panels are made to house the Rockwool for the bass traps, broadband absorption panels, and limp mass panel.

Tools required:

  • Electric Drill with Pilot hole bit, and Tank Cutter Bit (hole cutter)
  • Electric Screwdriver
  • MitreSaw
  • Heavy-Duty Staple Gun
  • Jigsaw
  • Corner clamps if building single-handed
  • Workbench

The Procedure – Let it Begin

To successfully design, build and integrate acoustic treatment, you need to PLAN. The diagram is the blueprint for how things should look, but it doesn’t always work out that way, as the more reference testing you do the more adjustments are needed. For this project, the blueprint is actually quite simple. Building the bastards is another story entirely. But let’s kick off with what is actually being made.

The Bass Trap panels will adorn every corner and will be connected at an angle to cover as much surface area as possible, with air gaps left behind for better trapping qualities. The broadband absorbers will also be built as panels and symmetrically placed away from all the walls (maintaining an air gap) and in between the bass traps. They will also face each other on opposing walls. The Super Chunks will be placed from floor to ceiling in the two main corners directly adjacent to the monitoring position. The Foam will be placed on the ceiling, both suspended and fixed, around the bay area and in the corners where the ceiling meets the bay edge. The layout will be symmetrical for the purposes of maintaining balance and not introducing any bias.

Building The Damn Panels

Step 1

Take the two 2400 x 19 x 100 lengths of timber and mark them up at 1800/600 (from either end).

Set the mitre saw to cut at 45 degrees through the timber, at right angles across it. Cut the timbers so you have 2 pieces of 1800 in length and 2 of 600. This is done so that the battens meet perfectly at the corners and edges, so that they are flush and look funky.

Mitre saw placed on workbench. 1800 length batten being cut at 45 degrees.
2 x 1800 and 2 x 600 length battens. Lining up 2 battens for further cutting.
After lining up the battens, drive 2 screws right through both battens to keep them aligned and rigid. Then start drilling the holes that will become an empty groove; the spacing between the holes is subjective and down to you. On the longer 1800 battens, we cut 6 holes, then sawed between them to create the grooves.
Max sawing the battens between the holes to create the grooves. The panels with the grooves cut out.

The problem with most of these vocal removal plug-ins is the anomalies created through the process. There are a number of reasons why this process cannot be accomplished truthfully and completely.

However, before I go into a diatribe of why this method is not too great, it might actually help if I described the process.

1. Open the stereo file in an audio editor like Sound Forge, Wavelab etc.

2. Select and highlight one channel of the two channels (usually the right channel).

3. Select ‘Flip or Invert’ from the edit menu.

4. Now ‘Sum’ the file to Mono.

This process is also called the Karaoke Effect.

Basically, what is happening here is that you are inverting the polarity of one channel (also known as phase reversal), mixing it with the other channel, and summing the two down to a single mono channel. Anything identical in both channels cancels out, which is why the vocals disappear.

Generally, in most recordings, the lead vocal, the kick drum and bass will invariably be recorded in the centre and these will disappear through the process above.
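The four editor steps above can be sketched in a few lines of NumPy. The function name and the toy signals here are mine for illustration, not part of any plug-in:

```python
import numpy as np

def karaoke(stereo):
    """Invert the right channel, then sum to mono: anything identical
    in both channels cancels; anything panned survives."""
    left, right = stereo[:, 0], stereo[:, 1]
    return left - right  # left + (inverted right)

# Toy signals: a 'vocal' recorded dead centre, a 'guitar' panned hard left.
t = np.linspace(0, 1, 1000, endpoint=False)
vocal = np.sin(2 * np.pi * 5 * t)
guitar = np.sin(2 * np.pi * 9 * t)

stereo = np.stack([vocal + guitar, vocal], axis=1)  # L = vocal+guitar, R = vocal
mono = karaoke(stereo)
# The centred vocal cancels completely; only the panned guitar remains.
```

With real mixes the cancellation is never this clean, for exactly the reasons discussed below: panned sources, stereo effects and mastering processing all leave residue behind.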

The problem with this method is that you cannot remove panned sounds, particularly backing vocals that are panned across the stereo field; only the centre of the field is removed, so you may be left with stray sections of the recorded material. Additionally, remember that effects are often panned across the stereo field, particularly when dealing with vocals; reverbs come to mind. Sadly, the process and its algorithms can have a more destructive effect than a useful one.

Even more importantly, removing vocals from a final mastered stereo mix would still not work properly, because the effects and dynamics used to create the mix have their own transients. If you were to use a hardware mixer with a global reverb running on the master stereo buss and then mute the source signals, you would still hear the wet signal, i.e. the reverb, the processed signal. In a final mix, the algorithms would have to take into account all the effects and dynamics used in order to extrapolate the vocal frequencies. It cannot be done without a destructive outcome for the other frequencies in the mix. The whole essence of effects is to give the illusion of space and spread, so this colouration would have to be accommodated in the coding, as would any dynamics used. Your code would now have to accommodate almost every type of process available, plus recognise the artifacts created along the way.

However, ignore my whinging and try the process for yourself, but please bear in mind that the process of ‘keeping vocals’ and removing all else cannot happen because of the stereo panning of the other sources. For this to work, all the sound sources would need to be identical in both channels, with the vocals panned to one side.

Plugin used:

Sound Forge

Relevant content:

Total and Partial Phase cancellation

Using Phase Cancellation in Sound Design

Slicing, or chopping, samples is a process that is so common nowadays, and in most genres, that it has become recognised as a genuine engineering process. It is so accepted and widely used that most software manufacturers incorporate this function in their software designs.

The most common slicing tools are Recycle, Phatmatik Pro and Guru, though most DAWs and even audio editors also allow for slicing of samples.

Let us examine what slicing actually is.

Simply put, slicing (chopping) is a process whereby a piece of audio recording is taken and cut into shorter segments. This then allows the user to use these segments (slices) to create their own arrangements in their compositions. The beauty of this process is one of versatility and flexibility. Additionally, you can drop Rex (the Recycle format) files into your audio sequencer and match them to any tempo without having to time-stretch etc.

Slicing was originally conceived to allow users to slice drum loops into smaller sound components (kicks, hi-hats, snare etc) and to then edit and rearrange the slices to create a new pattern from the original. In fact, sampler manufacturers like Emu used their own generic function called the ‘Beat Munger’, which effectively sliced any sample and allowed the user to treat the slices like any other sample. Akai (MPC Chop Shop), Roland etc all have these functions incorporated into their hardware samplers/workstations nowadays, so it has become an almost mandatory tool to provide.

Software manufacturers were also on the scene at a very early stage and Propellerheads created possibly the most popular and used slicing software called Recycle. This became so popular that many manufacturers now allow for their software to import the Recycle format called REX. Rex files are simply slices with midi data attached to them. In effect, you can load the midi file that was used to trigger the slices into a pattern format, whilst simultaneously loading the slices.


Recycle works in much the same way as audio editors when it comes to detecting and creating hitpoints. Hitpoints are merely markers that the user places in any part of the audio sample. In Cubase, when we wanted to extract a groove template, hitpoints were created by a process that searched for and detected peak values. You do not have to rely solely on the software finding peak values and assigning hitpoints to them; you can manually input hitpoints anywhere you want. However, it is always beneficial to let the software search for and mark the hitpoints. This saves time, particularly when dealing with a large audio sample with lots of peak values. You can then add or remove hitpoints to your heart’s content. Almost all of this software (and some hardware) will allow you to control the detail of the search and marking of these hitpoints using a function called ‘Sensitivity’. Sensitivity allows the software to search for lower peak values. In fact, it can detect and mark just about every single peak value, if that is what you want. The basic rule of thumb is: the higher the sensitivity value, the more slices you end up with.
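The hitpoint-and-sensitivity idea can be illustrated with a toy sketch. This is a deliberately crude stand-in for Recycle's detection, not its actual algorithm; the function names, threshold mapping and `min_gap` guard are all my own assumptions:

```python
def find_hitpoints(audio, sensitivity, min_gap=100):
    """Mark sample indices whose level crosses a threshold. A higher
    sensitivity lowers the threshold, so more hitpoints are found --
    a crude stand-in for the Sensitivity control described above."""
    threshold = 1.0 - sensitivity          # sensitivity in the range 0..1
    hits = []
    for i, s in enumerate(audio):
        # min_gap stops one transient from producing a burst of markers
        if abs(s) >= threshold and (not hits or i - hits[-1] >= min_gap):
            hits.append(i)
    return hits

def slice_audio(audio, hitpoints):
    """Cut the audio at each hitpoint; each segment is one slice."""
    edges = list(hitpoints) + [len(audio)]
    return [audio[a:b] for a, b in zip(edges, edges[1:])]

# A toy 'loop': three transients of decreasing level.
audio = [0.0] * 1000
audio[0], audio[300], audio[700] = 1.0, 0.9, 0.5

hits_low = find_hitpoints(audio, sensitivity=0.2)   # threshold 0.8 -> [0, 300]
hits_high = find_hitpoints(audio, sensitivity=0.6)  # threshold 0.4 -> [0, 300, 700]
slices = slice_audio(audio, hits_high)              # 3 slices
```

Note how raising the sensitivity catches the quiet third transient and yields an extra slice, exactly the rule of thumb above.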

But always bear in mind that your own audio editor will have slicing tools, and even sequencing packages with audio editing features will provide you with slicing tools. So, always check your software (or hardware) to make sure you have this function so as to save you having to spend money buying a dedicated slicing program. Personally, I use Recycle even though I have many other slicing software and hardware.

The following is a video tutorial showing how to use Propellerhead Recycle software:

Chopping/Slicing Beats Using Recycle


Quantisation is the process of aligning a set of musical notes to conform to a grid. When you want to quantise a certain group of MIDI notes in a song, the program moves each note to the closest point on the grid. Invariably, the quantise value determines where on the grid the notes are moved to.

Swing: Allows you to offset every second position in the grid, creating a swing or shuffle feel. Swing is actually a great quantise weapon. It is most commonly used by the Hip Hop fraternity to compensate for the lack of a ‘shuffle’ feel to the beat. The amount of swing applied to the quantise is determined in percentages. The more swing, the higher the percentage applied.

It is important to remember that the slower the tempo of your track, the more syncopated the music will sound if a low quantise value is used. This has caused problems for many songwriters, who usually compensate by using higher quantise values or working in double time (i.e. using a tempo of 140 bpm for a song that is meant to be at 70 bpm). Working in double time is the equivalent of using half the quantise value. For example, a song at 70 bpm written at 140 bpm can use a quantise value of 16, which equates to using a quantise value of 32 at the original 70 bpm (beats per minute) tempo.

The swing function allows for a more ‘offset’ feel when quantising and makes the music sound more human as opposed to robotic. In fact, swing is such a potent tool that the Dance heads are now using it to give a little life to hi-hat fills etc.
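A minimal sketch of grid quantise with swing, as described above. This is a toy model of the behaviour, not any sequencer's actual implementation; note times are in beats and the swing amount is a fraction of a grid step:

```python
def quantise(times, grid=0.25, swing=0.0):
    """Snap note start times (in beats) to a grid, then push every
    second grid position late by `swing` grid-fractions (0.0..1.0).
    A toy model of the behaviour described above, not any DAW's code."""
    out = []
    for t in times:
        step = round(t / grid)           # nearest grid line
        pos = step * grid
        if step % 2 == 1:                # every second position: apply swing
            pos += swing * grid
        out.append(round(pos, 6))
    return out

# Sloppy 16th-note hits, 0.25-beat grid, 50% swing:
print(quantise([0.02, 0.27, 0.49, 0.74], grid=0.25, swing=0.5))
# [0.0, 0.375, 0.5, 0.875] -- the off-beats land late, giving the shuffle feel
```

With `swing=0.0` this is plain hard quantise; raising the percentage only ever moves the off-beat grid positions, which is why the downbeats stay locked while the feel loosens.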
Grid and type:

Grid allows you to pick a note length (for example: 1/4, 1/8, and so on) to use for the resolution, while Type sets a modifier for the note length: Straight, Triplet or Dotted.  I will not go into this as you would need to understand about note lengths etc, but what I will say is that the triplet is extremely handy when programming drums and particularly hi-hat patterns that require fast moving fills.

Random Quantise:

Another feature that can be useful to make your performances sound more in time without being completely mechanical is Random Quantise. Here you specify a value in ticks (120ths of sixteenth notes) so that when a note is quantised to the nearest beat specified by the other parameters in the quantise template, it is offset by a random amount from zero to the value specified by the Random Quantise setting. Basically, this takes away the rigidity of syncopated rhythms, particularly when dealing with hi-hats. It allows for a ‘random’ element to be used, much akin to a drummer’s human timing.
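Random Quantise can be sketched the same way, using the tick resolution mentioned above (120 ticks per sixteenth note). Again this is a toy model, not Cubase's code; the 4/4 assumption (one sixteenth = 1/4 beat) and the function name are mine:

```python
import random

TICKS_PER_SIXTEENTH = 120      # per the text: a tick is 1/120 of a sixteenth
SIXTEENTH_IN_BEATS = 0.25      # assuming 4/4: one sixteenth = 1/4 beat

def random_quantise(times, grid=0.25, max_ticks=10, seed=None):
    """Snap each note time (in beats) to the grid, then offset it by a
    random 0..max_ticks ticks. A toy model of the feature described above."""
    rng = random.Random(seed)
    tick = SIXTEENTH_IN_BEATS / TICKS_PER_SIXTEENTH
    return [round(t / grid) * grid + rng.randint(0, max_ticks) * tick
            for t in times]

# Each note lands on its grid line plus a small, random, human-feel offset.
loose = random_quantise([0.26, 0.51], grid=0.25, max_ticks=10, seed=1)
```

Because the offset is at most a handful of ticks, the notes still read as on-grid; they just stop being machine-identical, which is the drummer-like effect the text describes.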

Most software will come with many additional tools to refine the quantise function and its settings. Humanise, iterative, freeze etc all go to giving the user more detailed editing power. For the sake of this e-Book, I am keeping it simple and only using the functions that most will adopt.

Preparation and Process

Last month we touched on the digital process.

This month we are going to talk about the preparation, the signal path, dos and don’ts and what some of the terminologies mean.

The most important part of the sampling process is preparation. If you prepare properly, then the whole sampling experience is more enjoyable and will yield you the optimum results.
Throughout this tutorial, I will try to incorporate as many sampler technologies as possible, and also present this tutorial side by side, using both hardware and software samplers.

So let us start with the signal path: the signal being the audio you are recording, and the path being the route it takes from the source to the destination.

The signal path is the path that the audio takes from its source, be it a turntable, a synthesizer etc, to its final destination, the computer or the hardware sampler. Nothing is more important than this path and the signal itself. The following is a list of guidelines. Although it is a general guide, it is not scripture. We all know that the fun of sampling is actually in breaking the so-called rules and coming up with innovative ways and results. However, the guide is important as it gives you an idea of what can cause a sample to be less than satisfactory when recorded. I will list some pointers and then go into more detail about each one.

  • The more devices you have in the signal path, the more the sample is degraded and coloured. The more devices in the path, the more noise is introduced into the path, and the headroom is compromised depending on what devices are in the path.
  • You must strive to obtain the best possible S/N (signal to noise ratio), throughout the signal path, maintaining a hot and clean signal.
  • You must decide whether to sample in mono or stereo.
  • You must decide what bit depth and sample rate you want to sample at.
  • You need to understand the limitations of both the source and destination.
  • You need to understand how to set up your sampler (destination) or sound card (destination) to obtain the best results.
  • You need to understand what it is that you are sampling (source) and how to prepare the source for the best sampling result.
  • If you have to introduce another device into the path, say a compressor, then you must understand what effect this device will have on the signal you are sampling.
  • You must understand what is the best way to connect the source and destination together, what cables are needed and why.
  • You need to calibrate the source and destination, and any devices in the path, to obtain the same gain readout throughout the path.
  • You need to understand the tools you have in the destination.
  • Use headphones for clarity of detail.

Basically, the whole process of sampling is about getting the audio from the source to the destination, keeping the audio signal strong and clean, and being able to listen to the audio in detail so you can pick out any noise or other artifacts in the signal.

In most cases, you can record directly from the source to the destination without having to use another device in the path. Some soundcards have preamps built into their inputs, along with line inputs, so that you can directly connect to these from the source. Hardware samplers usually have line inputs, so you would need a dedicated preamp to use with your microphone, to get your signal into the sampler. The same is true for turntables. Most turntables need an amp to boost the signal. In this instance, you simply use the output from the amp into your sampler or soundcard (assuming the soundcard has no preamp input). Synthesizers can be directly connected, via their outputs, to the inputs of the hardware sampler, or the line inputs of the soundcard.

As pointed out above, try to minimise the use of extra devices in the path. The reason is quite simple. Most hardware devices have an element of noise, particularly those with built-in amps or power supplies. Introducing these into the signal path adds noise to the signal. So, the fewer devices in the path, the less noise you have. There are, as always, exceptions to the rule. For some of my products, I have re-sampled my samples through some of my vintage compressors. And I have done it for exactly the reasons I just gave as to why you should try not to do this. Confused? Don’t be. I am using the character of the compressors to add to the sample’s character. If noise is part of the compressor’s character, then I will record that as well. That way, people who want that particular sound, influenced by the compressor, will get exactly that. I have, however, come across people who sample with a compressor in the path just so they can have a signal as strong and pumping as possible. This is not advised. You should sample the audio with as much dynamic range as possible. You need to keep the signal hot, ie as strong and as loud as possible without clipping the soundcard’s input meters or, in the case of hardware samplers, distorting. Generally, I always sample at a level 2 dBu below the maximum input level of the sampler or soundcard, ie 2 dBu below 0. This allows for enough headroom should I then choose to apply dynamics to the sample, such as compression. Part 1 of these tutorials explains dynamic range and dBs, so I expect you to know this. I am a vicious tutor, aren’t I? He, he.
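Since the advice above leans on dB arithmetic (sampling 2 dB below maximum, keeping headroom for compression), here is a minimal sketch of the ratio-to-decibel conversions involved. The helper names are my own, not from any particular tool.

```python
import math

def db_to_linear(db):
    """Convert a decibel value into a linear amplitude ratio."""
    return 10 ** (db / 20)

def linear_to_db(ratio):
    """Convert a linear amplitude ratio into decibels."""
    return 20 * math.log10(ratio)

# Sampling 2 dB below full scale leaves headroom for later
# processing such as compression.
print(round(db_to_linear(-2), 3))   # peak at roughly 0.794 of full scale
print(round(linear_to_db(0.5), 1))  # a half-amplitude signal sits at -6.0 dB
```

The same arithmetic explains why a "hot" signal matters: every 6 dB you leave unused halves the amplitude resolution you are recording with.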

My set up is quite simple and one that most sampling enthusiasts use.

I have all my sources routed through to a decent quality mixer, then to the sampler or my computer’s soundcard. This gives me great routing control, many ways to sample and, most important of all, better control over the signal. The huge bonus of using a mixer as the heart of the sampling path is that I can apply equalisation (eq) to the same source sample and record multiple takes of it, each with different eq settings. This way, from the same sample, I get masses of variety. The other advantage of using a mixer is that you can insert an effect or dynamic processor into the path and have more control over the signal than by just plugging the source into an effects unit or a compressor.

Headphones are a must when sampling. If you use your monitors (speakers) for referencing when you are sampling, then a great deal of the frequencies get absorbed into the environment. So, it is always hard to hear the lower and higher noise frequencies, as they get absorbed by the environment. Using headphones, either on the soundcard or the sampler, you only hear the signal and not the environment’s representation of the signal. This makes finding noise or other artifacts much easier.

The decision to sample in mono or stereo is governed by a number of factors, the primary one being memory. All hardware samplers have memory restrictions, the amount of memory being governed by the make and model of the sampler. Computer sampling is another story entirely, as you are only restricted by how much RAM you have in your computer. A general rule of thumb is: one minute of audio at a 44.1 kHz sample rate (an audio bandwidth of about 20 kHz via the Nyquist theorem, which I covered in Part 1), in stereo, equates to about 10 megabytes of memory. Sampling the same audio in mono gives you double the time, ie 2 minutes, or takes up 5 megabytes of memory.
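That rule of thumb follows directly from the uncompressed PCM arithmetic (sample rate x bytes per sample x channels). A quick sketch, with 16-bit audio assumed since the text does not state a bit depth:

```python
def sample_memory_bytes(seconds, rate=44100, bit_depth=16, channels=2):
    """Uncompressed PCM memory use: rate x bytes-per-sample x channels."""
    return int(seconds * rate * (bit_depth // 8) * channels)

stereo = sample_memory_bytes(60)            # one minute of 44.1 kHz stereo
mono = sample_memory_bytes(60, channels=1)  # the same minute in mono
print(stereo / (1024 * 1024))  # roughly 10 MB
print(mono / (1024 * 1024))    # roughly 5 MB - mono halves the memory
```

Halving the sample rate or dropping to mono each halves the memory used, which is exactly the trade-off discussed below.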

So always bear your sampler’s memory restriction in mind. Another factor that governs the use of mono over stereo is whether you actually need to sample that particular sound in stereo. The only time to sample in stereo is when there is an added sonic advantage in doing so, particularly if a sound sounds fuller and has varying sonic qualities on the left and right sides of the stereo field, and you need to capture both sides. When using microphones on certain sounds, like strings, it is often best to sample in stereo. You might be using 3 or 4 microphones to record the strings, but then route these through your mixer’s stereo outputs or subgroups to your sampler or soundcard. In this case, stereo sampling will capture the whole tonal and dynamic range of the strings. For those working with samplers with tight memory, sample in mono and, if you can tolerate it, at a lower sampling rate. But make sure that the audio is not compromised.

At this point, it is important to consider what it is that you are sampling and whether you are using microphones or sampling directly, from the outputs of a device into the inputs of the sampler or soundcard. For sounds like drum hits, or any sound that is short and not based on any key or pitch (unlike instrument or synthesizer sounds), keep it simple and clean. But what happens when you want to sample a sound from a particular synthesizer? This is where the sampler needs to be set up properly, and where the synthesizer has to be set up to deliver the best possible signal: one that is not only clean and strong, but that can also be easily looped, placed on a key and then spanned. In this case, where we are trying to sample and create a whole instrument, we need to look at multi-sampling and looping.

But before we do that, we need to understand the nature of what we are sampling and its tonal qualities. Invariably, most synthesizer sounds have a huge amount of dynamics programmed into them. Modulation, panning, oscillator detunes etc are all in the sound that you are trying to sample. In the case of analog synthesizers, it becomes even harder, as there is so much movement and tonal variance that sampling becomes a nightmare. So, what do we do? Well, we strip away all these dynamics so that we are left with the original sound, uncoloured by programming. In the case of analog synthesizers, we will often sample each and every oscillator and filter. By doing this, we make the sampling process a lot easier and more accurate. Remember that we can always program the final sampled instrument to sound like the original. By taking away all the dynamics, we are left with simpler, constant waveforms that are easier to sample and, more importantly, easier to loop.

The other consideration is one of pitch/frequency. To sample one note is okay, but to then try to create a 5-octave preset from this one sample would be a nightmare, even after looping the sample perfectly. There comes a point at which a stretched sample falls out of pitch and results in a terrible sound, full of artifacts and out-of-key frequencies. For each octave, the frequency is doubled. A way around this problem is multi-sampling. This means we sample more than one note of the sound, usually every third or fifth semitone. By sampling a collection of these notes, we have a much better chance of recreating the original sound accurately. We then place these samples in their respective ‘slots’ in the instrument patch of the sampler or software sampler, so a sampled C3 note would be put into the C3 slot on the instrument keyboard layout. Remember, we do not need to sample each and every note, just a few. That way, we can span the samples: we can use a C3 sample and know that it will still be accurate from a few semitones down to a few semitones up, so we spread that one sample down a few semitones and up a few semitones. These spreads, or zones, are called keygroups (Emu call them zones and Akai call them keygroups). Where one sample’s zone ends, we put our next sample, and so on, until the keyboard layout is complete with all the samples. This saves us a lot of hard work, in that we don’t have to sample every single note, and it also gives us an accurate representation of the sound being sampled. However, multi-sampling takes up memory. It is a compromise between memory and accuracy that you need to decide on.
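The spanning idea can be sketched as code: sample every third semitone, then stretch each sample to reach halfway towards its neighbours. The function and the MIDI-style note numbering here are my own illustration, not any particular sampler’s keygroup scheme.

```python
def build_keygroups(sampled_notes, low=48, high=72):
    """Stretch each sampled note into a zone reaching halfway to its neighbours."""
    zones = []
    for i, root in enumerate(sampled_notes):
        lo = low if i == 0 else (sampled_notes[i - 1] + root) // 2 + 1
        hi = high if i == len(sampled_notes) - 1 else (root + sampled_notes[i + 1]) // 2
        zones.append((lo, hi, root))  # (lowest key, highest key, root sample)
    return zones

notes = list(range(48, 73, 3))  # every third semitone across two octaves
for lo, hi, root in build_keygroups(notes):
    print(f"keys {lo}-{hi} play the sample rooted at {root}")
```

Each key is covered by exactly one zone, so no note is ever stretched more than a semitone or two from its root sample.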

There are further advantages to multi-sampling, but we will come to those later. The more detailed or complex a sound’s characteristics, the more samples are required. In the case of a piano, it is not uncommon to sample every second or third semitone, and also to sample the same notes at varying velocities, so we can emulate the playing velocities of the piano. We will sample hard, mid and soft velocities of the same note, then layer these and apply all sorts of dynamic tools to try to capture the original character of the piano being played. As I said, we will come to this later.

An area that is crucial is calibration. You want to make sure that the sound you are trying to sample shows the same level on the mixer’s meters as on the sampler’s or soundcard’s meters. If there is a mixer in the path, then you can easily use the gain trims on the channel the source is connected to, to match the level of the sound you want to sample to the readout of the input meters of the sampler or soundcard. If there is no mixer in the path, then you need to have your source sound at maximum, assuming there is no distortion or clipping, and your sampler’s or soundcard’s input gain at just below 0 dBu. This is a good hot signal. If you had it the other way around, whereby the sound source level was too low and you had to raise the input gain of the sampler or soundcard, you would be raising the noise floor as well. This would result in a noisy signal.

The right cabling is also crucial. If your sampler’s line inputs are balanced, then use balanced cables; don’t use phono cables with jack converters. Try to keep the cable run between source and destination sensible in length and, if you are in an environment with RF interference, caused by amps, radios, antennae etc, use shielded cables. I am not saying use expensive brands, just use correctly matched cables.

Finally, we are left with the tools that you have in your sampler and software sampler.

In the virtual domain, you have far more choice in terms of audio processing and editing tools, and they are far cheaper than their hardware counterparts. So, sampling into your computer will afford you many more audio editing tools and options. In a hardware sampler, the tools are predefined.

In the next section, we will look at some of the most common tools used in sampling.

Additional content:

Preparing and Optimising Audio for Mixing

Normalisation – What it is and how to use it

Topping and Tailing Ripped Beats – Truncating and Normalising

Multiband compressors are also known as MB or MBC.

These divide the incoming audio signal into multiple bands, with each band being compressed independently from the other.

The point of this is that with a full-band compressor the whole signal is treated: when a peak is detected, the whole signal is compressed, so frequencies that did not cause the peak are also subjected to compression.

Multiband compression only compresses the frequency bands chosen, so a more fluid and less abrupt result is gained. Instead of one peak triggering the compressor into compressing the entire signal, the multiband allows individual bands to be compressed. On some compressors, you even have the option of selecting bands that will not undergo any treatment. In essence, a multiband compressor consists of a set of filters that split the audio signal into two or more frequency bands. After passing through the filters, each frequency band is fed into its own compressor, after which the signals are recombined at the output.

The main advantage of multi-band compression is that a loud event in one frequency band won’t trigger gain reduction in the other bands.
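The split/compress/recombine structure can be sketched in a few lines. This is a toy two-band illustration using a one-pole filter and a static compressor; the coefficients and thresholds are arbitrary, and a real multiband design would use proper crossover filters and envelope-following gain control.

```python
def one_pole_lowpass(signal, a=0.1):
    """Crude RC-style smoothing: keeps the low band."""
    out, y = [], 0.0
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

def compress(band, threshold=0.5, ratio=4.0):
    """Toy static compressor: reduce gain only above the threshold."""
    out = []
    for x in band:
        mag = abs(x)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)
    return out

def multiband(signal):
    low = one_pole_lowpass(signal)
    high = [x - l for x, l in zip(signal, low)]  # complementary high band
    # each band gets its own compressor, then the bands are recombined
    return [l + h for l, h in zip(compress(low), compress(high, threshold=0.3))]
```

A peak confined to the high band only drives the high band’s compressor; the low band passes untouched, which is exactly the advantage described above.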

Another feature of the multiband compressor is that you are offered crossover points. This is crucial, as it gives you control over where each frequency band is placed. Setting these crossover points is the heart of the compressor and crucial to processing the right part of the frequency spectrum with the right settings. For example: if you are treating vocals in the mid-range but put your low-end crossover too far into the middle range, then the low-end compression settings will also affect the mid-range vocals.

Multiband compression can be either a friend or an enemy. It all comes down to how you use it and when. It can be a great compressor for controlling problematic frequencies, or for boosting certain ranges in isolation from others. I tend to use them to rescue poor stereo mixes, and with the aid of features like adjustable crossover frequencies and per-band thresholds and ratios, I can process far more accurately.

However, use with care.

Relevant content:

Multiband Compression – what is it and how do you use it

Compression Masterclass

The subject of mixing on headphones versus monitors has done the rounds for years.

Often it is the cash-strapped home studio owner who has to resort to using headphones, the cheaper and space-saving solution, instead of speakers to conduct mixing projects. There are obvious advantages to using headphones for mixing, but glaring disadvantages too. There are no winners on either side of the fence. Quite simply, if you want to be fully armed to conduct the best mixes, then a combination of both is essential.

Good quality headphones can reveal detail that some good speakers/monitors omit. For sound design, a good pair of headphones is imperative, as it will be unforgiving in revealing anomalies. For maintaining a clean and noise-free signal path, it is crucial. On the flip side, stereo imaging and panning information is much harder to judge on headphones. Determining the spatial feel of a mix is almost impossible on headphones, but simple with speakers. Pans are pronounced and extreme on headphones and do not translate well to speakers. Even EQ can come across as subdued or extreme.

I find that if I mix on headphones alone, then the mix never travels well when auditioned with monitors. The reverse is also true.

When using monitors, because they are placed in front of us, our natural hearing perceives the soundstage as directly in front of us. With headphones, because the ‘speakers’ are on either side of us, there is no real front-to-back information. Headphones also provide a very high degree of separation between the left and right channels, which produces an artificially detailed stereo image. Our brains and ears receive and analyse sound completely differently on headphones as opposed to monitors. With headphones, each ear hears only the audio signal carried on the relevant channel, but with speakers, both ears hear the signals produced by both loudspeakers.

You also need to factor in that different people perceive different amounts of bass: factors such as the distance between the headphone diaphragm and the listener’s ear will change the level of bass. The way in which the headphone cushion seals around the ear also plays a part, which is why pushing the phones closer to your ears produces a noticeable increase in bass. This extra bass energy alone negates the idea of a correct tonal balance in the mix being auditioned.

With monitors, both ears hear both the left and right channels.

If your room is acoustically problematic and you have poor monitors, then headphones may well be a better and more reliable approach. But it is a lot harder to achieve the same kind of quality and transferability that comes more naturally on good monitors in a good acoustically treated room.

I find that if I record and check all my signals with headphones, then I am in a strong position to hear any anomalies and be in a better position to judge clarity and integrity of the recorded signals. This, coupled with speaker monitoring, assures me of the best of both worlds; clarity and integrity married with spatial imaging.

If you want further reading on this subject, then I recommend Martin Walker’s seminal article entitled: Mixing On Headphones.

Briefly explained:

A filter allows you to remove unwanted frequencies and also to boost certain frequencies. Which frequencies are removed and which are left depends on the type of filter you use.

Before we can list the different types of filters and what they do, there are a few terms and definitions we need to cover. These are crucial and are used all the time so it is important that you know what these terms are and what they mean.

Cut-off frequency

This is the point (frequency) at which the filter begins to filter (block or cut out). The filter lowers the volume of the frequencies above or below the cut-off frequency, depending on the type of filter used. This ‘lowering of the volume of the frequencies’ is called attenuation. In the case of a low pass filter, the frequencies above the cut-off are attenuated. In the case of a high pass filter, the frequencies below the cut-off are attenuated. Put simply: a low pass filter tries to block the (higher) frequencies above a certain point and allow the lower frequencies through, while a high pass filter does the opposite, blocking frequencies below a certain point and allowing the higher frequencies through. On analogue synthesizers, the way the filter rolls off beyond the cut-off was described by its slope or gradient, which comes down to the RC (resistor/capacitor) circuit doing the filtering.

Analogue filters use physical circuitry, and for that reason it takes time for the filter to attenuate frequencies, in proportion to their distance from the cut-off point. Today’s technology allows for near-instant cut-off, as the filter attenuation is determined by algorithms rather than circuits. That is why the filters of an Arp or an Oscar are so much more expressive and warm: they rely completely on their resistors and capacitors, first to warm up, then to work in a gradual way (gradual meaning sloped or curved, as opposed to instant). How well a filter attenuates, and the way it attenuates, gives us an idea of the type of sound we will achieve with an analogue filter. You often hear someone say ‘That Roland is warm, man’ or ‘Man, is that Arp punchy’. These are statements about how Roland’s filters sound, or how potent the Arp’s filters are. So, the rate at which the filter attenuates is called the slope or gradient.

Another point to raise now is that you will often see values of 12 dB or 24 dB per octave on the filter knobs of analogue synthesizers. That basically means that each time the frequency doubles (one octave), the filter attenuates the signal at that distance from the cut-off by a further 12 dB or 24 dB. These are also known as 2-pole or 4-pole filters; each pole represents 6 dB of attenuation per octave. This is how analogue circuits were built, the number of poles reflecting the number of circuit stages used by the filter to perform the task at hand.
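The pole arithmetic is simple enough to state as code. A small sketch of the rule of thumb above (function names are mine):

```python
def slope_db_per_octave(poles):
    """Each filter pole contributes roughly 6 dB per octave."""
    return poles * 6

def attenuation_db(poles, octaves_past_cutoff):
    """Approximate attenuation well beyond the cut-off point."""
    return slope_db_per_octave(poles) * octaves_past_cutoff

print(slope_db_per_octave(2))  # 12 -> the classic 2-pole, 12 dB/octave filter
print(attenuation_db(4, 2))    # 48 -> a 4-pole filter, two octaves past cut-off
```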

If I had to list all the filters that Emu provide on their synthesis engines, it would run to pages. So for now, I am keeping it simple and listing the standard filter types and what they do.

Low Pass-LPF

As mentioned earlier, this filter attenuates the frequencies above the cut-off point and lets the frequencies below the cut-off point through. In other words, it allows the lower frequencies through and blocks the higher frequencies above the cut-off (the frequency at which the filter begins to kick in). The low pass filter is one mutha of a filter. If you use it on a bass sound, it can give it more bottom and deeper tones. If used on a pad sound, you can have the filter open and close, or just sweep it, for that nice closing and opening effect. You can also use this filter cleverly to remove higher-frequency sounds or noise that you don’t want in your sound or mix. Because it blocks out the higher frequencies above the cut-off you set, it’s a great tool if you want to remove hiss from a noisy sample or, used gently, tape or cassette hiss.
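The hiss-removal idea can be demonstrated with the simplest possible low pass, a one-pole RC-style filter. The coefficient formula is the standard RC discretisation; the demo signal and names are my own illustration.

```python
import math

def lowpass(signal, cutoff_hz, sample_rate=44100):
    """One-pole low pass: frequencies above the cut-off are progressively attenuated."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = dt / (rc + dt)  # smoothing coefficient derived from the cut-off frequency
    out, y = [], 0.0
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

# Hiss-like alternation is damped far more than a steady (low-frequency) signal.
hiss = [1.0 if i % 2 else -1.0 for i in range(1000)]
steady = [1.0] * 1000
print(max(abs(v) for v in lowpass(hiss, 1000)[-100:]))  # small residue
print(lowpass(steady, 1000)[-1])                        # close to 1.0
```

The low content survives almost untouched while the fast alternation, the crude stand-in for hiss, is knocked down by an order of magnitude.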

High Pass-HPF

This is the opposite of the low pass filter. It removes the frequencies below the cut-off and allows the frequencies above the cut-off through. Great for pad sounds: it gives them some top end and generally brightens the sound. It’s also really good on vocals, as it can give them more brightness, and you can use it on any recordings that have a low-frequency hum or rumble dirtying the sound. In this last instance it is a more limited tool, as you can also end up cutting out the lower frequencies of the sound itself, but it is still a tool with many uses.

Band Pass-BPF

This is a great filter. It attenuates frequencies below and above the cut-off and leaves the frequencies at the cut-off. It is, in effect, a low pass and a high pass together. The cool thing about this filter is that you can eliminate the lower and higher frequencies and be left with a band of frequencies that you can use either as an effect, as in that real mid-range, old-radio type of sound, or for isolating a narrow band of frequencies in recordings that have too much low and high end. Sure, it’s not really made for that, but the whole point of synthesis is to use tools, because that’s what they are: tools. Breaking rules is what real synthesis is all about. Try this filter on synthesizer sounds and you will come up with some wacky results. It really is a useful filter, and if you can run more than one at a time, with different cut-offs for each, you will get even more interesting results.

Interestingly enough, bandpass filtering is used in the formant filters that you find on so many softsynths, plugins, synthesizers and samplers. Emu are known for some of their formant filters, and the technology is based around bandpass filters. It is also good for thinning out sounds and can be used on percussive sounds, as well as for creating effect-type sounds. I often get emails from programmers wanting to know how to get that old radio effect, or telephone line chat effect, or even that NASA space-to-Houston dialogue. Well, this is one of the tools. Use it and experiment. You will enjoy this one.
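The point that a band pass is effectively a low pass and a high pass together can be sketched by cascading two one-pole smoothers: one pass cuts the top end, then subtracting a heavily smoothed copy cuts the bottom end. The coefficients here are arbitrary illustration values, not a tuned design.

```python
def smooth(signal, a):
    """One-pole smoothing: larger a keeps more high-frequency content."""
    out, y = [], 0.0
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

def bandpass(signal, a_low=0.5, a_high=0.05):
    lp = smooth(signal, a_low)                              # cut the top end first
    return [v - w for v, w in zip(lp, smooth(lp, a_high))]  # then the bottom

# A sustained (DC-like) input is eventually rejected, while its onset passes through.
dc = bandpass([1.0] * 500)
print(round(abs(dc[-1]), 3))  # near 0: the sustained low end is removed
```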

Band Reject Filter-BRF-also known as Notch

This is the exact opposite of the bandpass filter. It allows frequencies below and above the cut-off through and attenuates the frequencies around the cut-off point. Why is this good? Well, it eliminates a narrow band of frequencies, the frequencies around the cut-off, and that in itself is a great tool. You can use this on all sounds, and it can have a distinct effect, not only in terms of eliminating the frequencies you want gone, but also in terms of giving a sound a new flavour. But its real potency is in eliminating frequencies you don’t want. Because you select the cut-off point, in essence you are selecting the frequencies around that point and eliminating them. This is invaluable when you want to hone in on a band of frequencies located, for example, right in the middle of a sound or recording. I sometimes use a notch filter on drum sounds that have a muddy or heavy midsection, or on sounds that have a little noise or a frequency clash in the midsection.


Comb Filter

The comb filter is quite a special filter. It derives its name from the fact that its response has a number of notches at certain distances (delays), so it looks like a comb. The comb filter differs from the other filter types because it doesn’t actually attenuate any part of the signal; instead, it adds a delayed version of the input signal to the output, basically a very short delay that can be controlled in length and feedback. The delays are so short that you only hear the effect rather than the delays themselves. The delay length is determined by the cut-off. The feedback depth is controlled by the resonance.
This filter is used to create a number of different effects, chorus and flange being two of the regulars. But the comb filter is more than that. It can be used to add some incredible dynamic textures to an existing sound. When we talk of combs, we have to mention the Waldorf synthesizers. They have some of the best comb filters, and the sounds they affect are so distinct: great for that funky metallic effect or sizzling bright textures.
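The delay-plus-feedback structure described above can be sketched as a feedback comb: the delay length sets the spacing of the ‘teeth’ and the feedback amount plays the role of resonance. A toy illustration, with names and values of my own choosing:

```python
def comb(signal, delay=8, feedback=0.5):
    """Feedback comb: output = input + feedback * output delayed by `delay` samples."""
    out = []
    for i, x in enumerate(signal):
        y = x + feedback * (out[i - delay] if i >= delay else 0.0)
        out.append(y)
    return out

# Feeding in a single impulse exposes the decaying train of echoes
# that gives the comb its notched, metallic character.
impulse = [1.0] + [0.0] * 31
response = comb(impulse)
print(response[0], response[8], response[16])  # 1.0 0.5 0.25
```

Each echo arrives one delay length later at half the previous level, which is exactly the resonant ringing you hear when the feedback is pushed up.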


Parametric

This is also called the swept eq. This filter controls three parameters: frequency, bandwidth and gain. You select the range of frequencies you want to boost or cut, you select the width of that range, and you use the gain to boost or cut the frequencies within the selected bandwidth by a selected amount. The frequencies outside the bandwidth are not altered. If you widen the bandwidth to the limit of the upper or lower frequency range, this is called shelving. Most parametric filters have shelving parameters. Parametric filters are great for more complex filtering jobs and can be used to create really dynamic effects, because they can attenuate or boost any range of frequencies.

Well, I hope this has helped to demystify the confusing world of filters for you. Ignore the filters on your synthesizers, be they hardware or software, at your own peril, because they are truly powerful sound design tools. If you want a whole book dedicated to equalisation and filtering, then I suggest you have a look at EQ Uncovered (second edition). This book has received excellent reviews and is well worth exploring.

If you prefer the visual approach try this video tutorial:

Filters and Filtering – what are filters and how do they work