
Briefly explained:

A filter allows you to remove unwanted frequencies, and some filters also allow you to boost certain frequencies. Which frequencies are removed and which are left depends on the type of filter you use.

Before we can list the different types of filters and what they do, there are a few terms and definitions we need to cover. These are crucial and come up all the time, so it is important that you know what these terms are and what they mean.

Cut-off frequency

This is the point (frequency) at which the filter begins to filter (block or cut out). The filter lowers the volume of the frequencies above or below the cut-off frequency, depending on the type of filter used. This 'lowering of the volume of the frequencies' is called attenuation. In the case of a low pass filter, the frequencies above the cut-off are attenuated. In the case of a high pass filter, the frequencies below the cut-off are attenuated. Put simply: with a low pass filter, we are trying to block the (higher) frequencies above a certain point and allow the lower frequencies through. With a high pass filter, the opposite is true: we try to cut out or block frequencies below a certain point and allow the higher frequencies through. On analogue synthesizers, the rate at which the filter attenuates beyond the cut-off was called the slope or gradient, and the behaviour was more accurately described in terms of the RC (resistor/capacitor) circuit producing it.

Analogue filters use physical circuitry, and for that reason alone it takes time for the filter to attenuate frequencies, in proportion to the distance from the cut-off point. Today's digital technology allows for a near-instant cut-off, as the attenuation is determined by algorithms rather than circuits. That is why the filters of an Arp or an Oscar are so much more expressive and warm: they rely completely on resistors and capacitors to first warm up, then to work in a gradual way (gradual meaning sloped or curved, as opposed to instant). How well a filter attenuates, and the way it attenuates, gives us an idea of the sound we will achieve with an analogue filter. You often hear someone say 'That Roland is warm, man' or 'Man, is that Arp punchy'. These are statements about how Roland's filters sound, or how potent the Arp's filters are. So, the rate at which the filter attenuates is called the slope or gradient.

Another point to raise now is that you will often see values of 12dB or 24dB per octave on the filter knobs of analogue synthesizers. That means that for every octave above (or below) the cut-off, in other words each time the frequency doubles, the signal is attenuated by a further 12dB or 24dB. These are also known as 2-pole or 4-pole filters: each pole represents 6dB of attenuation per octave, and reflects the number of circuit stages the filter uses to perform the task at hand.
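To make the maths concrete, here is a minimal Python sketch of an idealised slope (flat in the passband, a constant dB-per-octave roll-off above the cut-off); the 1kHz cut-off and 4kHz test tone are just example values:

```python
import math

def attenuation_db(freq, cutoff, db_per_octave=24):
    # idealised low pass slope: flat below the cut-off, constant roll-off above
    if freq <= cutoff:
        return 0.0
    octaves_above = math.log2(freq / cutoff)
    return db_per_octave * octaves_above

# a 4kHz tone through a 1kHz, 24dB/octave (4-pole) filter
# sits 2 octaves above the cut-off, so it ends up 48dB down
print(attenuation_db(4000, 1000))  # 48.0
```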

If I had to list all the filters Emu provide on their synthesis engines, it would run to pages. For now, I am keeping it simple and listing the standard filter types and what they do.

Low Pass-LPF

As mentioned earlier, this filter attenuates the frequencies above the cut-off point and lets the frequencies below it through. In other words, it allows the lower frequencies through and blocks the higher ones once the cut-off (the frequency at which the filter kicks in) is reached. The low pass filter is one mutha of a filter. If you use it on a bass sound, it can give it more bottom and deeper tones. On a pad sound, you can have the filter open and close, or just sweep it, for that nice opening and closing effect. You can also use this filter cleverly to remove higher-frequency sounds or noise that you don't want in your sound or mix. Because it blocks the frequencies above the cut-off you set, it's a great tool for removing hiss from a noisy sample or, used gently, for removing tape or cassette hiss.
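If you want to try this digitally, here is a minimal sketch using NumPy and SciPy (both assumed to be installed); the 8kHz cut-off is just an illustrative value for taming hiss:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lowpass(signal, cutoff_hz, fs, order=4):
    # order=4 gives a 24dB/octave slope (6dB per pole, 4 poles)
    sos = butter(order, cutoff_hz, btype='lowpass', fs=fs, output='sos')
    return sosfilt(sos, signal)

fs = 44100
hissy = np.random.randn(fs)                      # stand-in for a noisy sample
cleaned = lowpass(hissy, cutoff_hz=8000, fs=fs)  # shave off the hiss above 8kHz
```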

High Pass-HPF

This is the opposite of the low pass filter. It removes the frequencies below the cut-off and allows the frequencies above the cut-off through. Great for pad sounds: it gives them some top end and generally brightens the sound. It's also really good on vocals, as it can give them more brightness, and you can use it on any recording that has a low-frequency hum or rumble dirtying the sound. In that last case it is a more limited tool, as it will also cut the low frequencies of the sound itself, but it is still a tool with many uses.
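The same SciPy approach works in reverse for high pass duty; the 80Hz cut-off here is an assumed, typical value for rolling off rumble and hum under a vocal:

```python
from scipy.signal import butter, sosfilt

def highpass(signal, cutoff_hz, fs, order=2):
    sos = butter(order, cutoff_hz, btype='highpass', fs=fs, output='sos')
    return sosfilt(sos, signal)

# e.g. roll off mains hum and rumble below 80Hz on a vocal take:
# brighter_vocal = highpass(vocal, cutoff_hz=80, fs=44100)
```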

Band Pass-BPF

This is a great filter. It attenuates frequencies below and above the cut-off and leaves the frequencies at the cut-off: in effect, a low pass and a high pass together. The cool thing about this filter is that you can eliminate the lower and higher frequencies and be left with a band of frequencies that you can use either as an effect, as in that real mid-range, old-radio type of sound, or for isolating a narrow band of frequencies in recordings that have too much low and high end. Sure, it's not really made for that, but the whole point of synthesis is to use tools, because that's what they are: tools. Breaking rules is what real synthesis is all about. Try this filter on synthesizer sounds and you will come up with some wacky sounds. It really is a useful filter, and if you can run more than one at a time, with different cut-offs for each, you will get even more interesting results.

Interestingly enough, bandpass filtering is used in the formant filters that you find on so many softsynths, plugins, synthesizers and samplers. Emu are known for some of their formant filters, and the technology is based around bandpass filters. It is also good for thinning out sounds and can be used on percussive sounds as well as for creating effect-type sounds. I often get emails from programmers wanting to know how they can get that old radio effect, or telephone line chat effect, or even NASA space dialogue from space to Houston. Well, this is one of the tools. Use it and experiment. You will enjoy this one.
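Here is a hedged sketch of that old radio/telephone trick using the same SciPy approach; the 300Hz to 3.4kHz band is an assumption, roughly the bandwidth of a classic telephone line:

```python
from scipy.signal import butter, sosfilt

def bandpass(signal, low_hz, high_hz, fs, order=2):
    sos = butter(order, [low_hz, high_hz], btype='bandpass', fs=fs, output='sos')
    return sosfilt(sos, signal)

# keep roughly the telephone band and lose everything else:
# radio_voice = bandpass(vocal, 300, 3400, fs=44100)
```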

Band Reject Filter-BRF-also known as Notch

This is the exact opposite of the bandpass filter. It allows frequencies below and above the cut-off through and attenuates the frequencies around the cut-off point. Why is this good? Well, it eliminates a narrow band of frequencies, the frequencies around the cut-off, so that in itself is a great tool. You can use this on all sounds and it can have a distinct effect, not only in terms of eliminating the frequencies you want to be rid of, but also in terms of creating a new flavour in a sound. But its real potency is in eliminating frequencies you don't want. Because you select the cut-off point, you are, in essence, selecting the frequencies around that cut-off point and eliminating them. An invaluable tool when you want to home in on a band of frequencies located, for example, right in the middle of a sound or recording. I sometimes use a notch filter on drum sounds that have a muddy or heavy midsection, or on sounds that have a little noise or frequency clash in the midsection.
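SciPy even ships a ready-made notch design; in this sketch, the 400Hz centre and the Q value are just examples of the kind of muddy midrange spot you might carve out of a drum loop:

```python
from scipy.signal import iirnotch, lfilter

def notch(signal, centre_hz, fs, q=2.0):
    # higher Q = a narrower band of rejected frequencies
    b, a = iirnotch(centre_hz, q, fs=fs)
    return lfilter(b, a, signal)

# carve a muddy resonance out of a drum loop around 400Hz:
# cleaned_drums = notch(drums, centre_hz=400, fs=44100)
```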

Comb

The comb filter is quite a special filter. It derives its name from the fact that its frequency response has a series of notches at regular spacings (set by a delay), so it looks like a comb. The comb filter differs from the other filter types because it doesn't filter by directly attenuating a chosen band; instead, it adds a delayed version of the input signal to the output, basically a very short delay that can be controlled in length and feedback. These delays are so short that you hear the effect rather than the delays themselves. The delay length is determined by the cut-off. The feedback depth is controlled by the resonance.
This filter is used to create a number of different types of effects, chorus and flange being two of the regulars. But the comb filter is more than that: it can add some incredible dynamic textures to an existing sound. When we talk of combs, we have to mention the Waldorf synthesizers. They have some of the best comb filters, and the sounds they affect are so distinct, great for that funky metallic effect or sizzling bright textures.
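Here is a minimal feedback comb sketch in Python that mirrors the description above: the delay length acts like the cut-off and the feedback amount acts like the resonance (the 440Hz tuning and 0.7 feedback are arbitrary starting points):

```python
import numpy as np

def feedback_comb(signal, fs, tune_hz=440.0, feedback=0.7):
    delay = max(1, int(round(fs / tune_hz)))  # delay of one period of tune_hz
    out = np.zeros(len(signal))
    for n in range(len(signal)):
        delayed = out[n - delay] if n >= delay else 0.0
        out[n] = signal[n] + feedback * delayed  # feedback depth = 'resonance'
    return out
```

Feed it noise or a synth line and sweep tune_hz for that metallic, pitched ring.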

Parametric

This is also called the swept EQ. This filter controls three parameters: frequency, bandwidth and gain. You select the range of frequencies you want to boost or cut, you select the width of that range (the bandwidth), and you use the gain to boost or cut the frequencies within the selected bandwidth by a chosen amount. The frequencies outside the bandwidth are not altered. If you widen the bandwidth to the limit of the upper or lower frequency range, this is called shelving, and most parametric filters have shelving parameters. Parametric filters are great for more complex filtering jobs and can be used to create really dynamic effects, because they can attenuate or boost any range of frequencies.
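A common way to implement a single parametric band is a peaking biquad; this sketch uses the coefficient formulas from Robert Bristow-Johnson's widely circulated Audio EQ Cookbook, and the centre frequency, gain and Q shown are only example values:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(signal, fs, centre_hz, gain_db, q=1.0):
    # RBJ Audio EQ Cookbook peaking filter coefficients
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * centre_hz / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], signal)

# boost 3dB in a fairly narrow band around 2kHz:
# brighter_pad = peaking_eq(pad, fs=44100, centre_hz=2000, gain_db=3.0, q=2.0)
```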

Well, I hope this has helped to demystify the confusing world of filters for you. You ignore the filters on your synthesizers, be they hardware or software, at your peril, because they are truly powerful sound design tools. And if you want a whole book dedicated to equalisation and filtering, then I suggest you have a look at EQ Uncovered (second edition). The book has received excellent reviews and is well worth exploring.

If you prefer the visual approach, try this video tutorial:

Filters and Filtering – what are filters and how do they work

In essence, noise is a randomly changing, chaotic signal, containing an endless number of sine waves of all possible frequencies at different amplitudes. However, randomness always has specific statistical properties, and these give a noise its specific character or timbre.

If the sine waves' amplitudes are uniform, meaning every frequency is present at the same volume, the noise sounds very bright. This type of noise is called white noise.

White noise is a signal with the property of having constant energy per Hz of bandwidth (an amplitude that is constant across frequency) and so has a flat frequency response. Because of these properties, white noise is well suited to testing audio equipment. The human hearing system's frequency response is not linear but logarithmic. In other words, we judge pitch increases by octaves, not by equal increments of frequency; each successive octave spans twice as many Hertz as the previous one down the scale. Because each octave therefore contains twice as much energy as the one below it, white noise appears to us to increase in level by 3dB per octave.

If the level of the sine waves decreases at about -3dB per octave as their frequencies rise, the noise sounds much warmer. This is called pink noise.

Pink noise contains equal energy per octave (or per 1/3 octave). Its power follows the function 1/f, which corresponds to the level falling by 3dB per octave. These attributes lend themselves perfectly to use in acoustic measurements.

If it decreases at about -6dB per octave, we call it brown noise.

Brown noise, whose name is actually derived from Brownian motion, is similar to pink noise except that its power follows 1/(f squared). This produces a 6dB-per-octave attenuation.

Blue noise is essentially the inverse of pink noise, with its level increasing by 3dB per octave (the power is proportional to f).

Violet noise is the inverse of brown noise, with a rising response of 6dB per octave (the power is proportional to f squared).
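All of these colours can be cooked from the same recipe: generate white noise and tilt its spectrum. A minimal NumPy sketch, shaping the spectrum so the power follows f raised to a chosen exponent (0 = white, -1 = pink, -2 = brown, +1 = blue, +2 = violet):

```python
import numpy as np

def coloured_noise(exponent, n, fs=44100):
    white = np.random.randn(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    freqs[0] = freqs[1]                    # avoid dividing by zero at DC
    spectrum *= freqs ** (exponent / 2.0)  # amplitude is the square root of power
    noise = np.fft.irfft(spectrum, n)
    return noise / np.max(np.abs(noise))   # normalise to the -1..1 range

pink = coloured_noise(-1, 44100)   # one second of pink noise
brown = coloured_noise(-2, 44100)  # one second of brown noise
```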

So we have all these funky names for noise, and you need to understand their characteristics, but what are they actually used for?

White noise is used in the synthesis of hi-hats, crashes, cymbals and so on, and is even used as a test signal for audio equipment.

Pink noise is great for synthesizing ocean waves and the warmer type of ethereal pads.

Brown noise is cool for synthesizing thunderous sounds and deep, bursting claps. Of course, they can all be used in varying ways to attain different textures and results, but the idea is simply for you to get a sense of what they 'sound' like.

At the end of the day, it all boils down to maths and physics.

 

Here is an article I wrote for Sound On Sound magazine on how to use Pink noise referencing for mixing.

And here is the link to the video I created on master bus mixing with Pink noise.

And here is another video tutorial on how to use ripped profiles and Pink noise to mix.

AN INTRODUCTION TO DIGITAL AUDIO

In the old days, sampling consisted of recording the audio onto magnetic tape. The audio (analogue) was represented by the movement of the magnetic particles on the tape. In fact, a good example is cutting vinyl. This is a form of sampling in the loosest sense, because you are recording the audio onto the actual acetate or disc by forming the grooves. In both cases, the audio is stored as a continuous waveform.

Whether we are using a hardware sampler, like the Akais, Rolands, Yamahas, Emus etc., or software samplers on our computers, like Kontakt, EXS24, NN-19 etc., there is a process that takes place between you recording the analogue waveform (audio) into the sampler and the way the sampler interprets and stores that audio. This process is the conversion of the analogue signal (the audio you are recording) into a digital signal. For this to happen, we need what we call an analogue to digital converter (ADC). For the sampler to play back what you have recorded, so that you can hear it, the process is reversed, with a slightly different structure, and for that we need a digital to analogue converter (DAC). That is simple and makes complete sense. In between, there are a few other things happening, and with this diagram (Fig1) you will at least see what I am talking about.

Fig1

The sampler records and stores the audio as a stream of numbers: binary, 0s and 1s, on and off. As the audio (sound wave) moves along, the ADC records 'snapshots' (samples) of the sound wave, much like the frames of a movie. These snapshots (samples) are then converted into numbers, each expressed as a number of bits. This process is called quantising and must not be confused with the quantising we have on sequencers, although the process is similar. The number of times a sample is taken or measured per second is called the sampling rate. The sampling rate is measured as a frequency and expressed in kHz (k = 1,000 and Hz = cycles per second). These samples are measured at discrete intervals of time, and the length of these intervals is governed by the Nyquist theory. The theory states that the sampling frequency must be greater than twice the highest frequency of the input signal in order to be able to reconstruct the original perfectly from the sampled version. Another way of putting it is that the maximum frequency that can be recorded at a set sample rate is half that sample rate. A good example at this point is the industry-standard CD: a rate of 44.1kHz means that 44,100 samples (snapshots) are taken every second.

Ok, now let's look at bits. We have talked about the samples (snapshots) and the numbers, and we know that these numbers are expressed as a number of bits. The number of bits in that number is crucial. It determines the dynamic range (the difference between the lowest value of the signal and the highest value of the signal) and, most importantly, the signal to noise ratio (S/N). For this, you need to understand how we measure 'loudness'. The level or loudness of a sound is measured in decibels (dB), the unit of measure of the replay strength (loudness) of an audio signal. Named after this dude Bell. The other measurement you might come across is dBu or dBv, which relates decibels to voltage (decibels referenced to 0.775 volts). You don't even need to think about this, but you do need to know that we measure the loudness (level) or volume of a sound in decibels, dB.

Back to bits. The most important aspect of bits is resolution. Let me explain this in simpler terms. You often come across samplers that are 8 bit (Fairlight CMI or Emulator II), 12 bit (Akai S950 or Emu SP1200), or 16 bit (Akai S1000 or Emulator III). You also come across sound cards that are 16 bit or 24 bit, and so on. The bit depth refers to how accurately a sound can be recorded and represented: the more bits you have (resolution), the better the representation of the sound. I could go into the 'electrical pressure measurement at an instant' definition, but that won't help you at this early stage of the tutorial. So, I will give a little simple info about bit resolution.

There is a measurement you can use which, albeit not clear cut, at least works for our purposes: for every bit, you get roughly 6dB of dynamic range. So, an 8 bit sampler will give you 48dB of dynamic range. Bearing in mind that we can, on average, hear up to 120dB, that figure of 48dB looks a bit poor. So, we moved to 16 bit CD quality, which gives us a 96dB dynamic range. Now we have 24 bit (and even 32 bit) sound cards and samplers, which give us an even higher dynamic range. Even though we will never use that range, as our ears would implode, it is good to have it in reserve. Why? Well, use the Ferrari analogy. You have a 160mph car there, and even though you know you are not going to stretch it to that limit (I would), you do know that getting to 60mph takes very little time and does not stress the car. The same analogy can be applied to monitors (speakers): the more dynamic range you have, the better the sound representation at lower levels.
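You can check these figures with a few lines of Python, using the 6dB-per-bit rule of thumb (the exact theoretical figure is nearer 6.02dB per bit):

```python
for bits in (8, 12, 16, 24):
    levels = 2 ** bits           # number of loudness steps available
    dynamic_range = 6.02 * bits  # rule-of-thumb dynamic range in dB
    print(f"{bits}-bit: {levels:,} levels, ~{dynamic_range:.0f}dB dynamic range")

# 8-bit: 256 levels, ~48dB dynamic range
# 12-bit: 4,096 levels, ~72dB dynamic range
# 16-bit: 65,536 levels, ~96dB dynamic range
# 24-bit: 16,777,216 levels, ~144dB dynamic range
```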

To take this resolution issue a step further: 8 bits allows for 256 different levels of loudness in a sample, while 16 bits allows for 65,536. So, now you can see that 16 bits gives a much better representation. The other way of looking at it is this: if I gave you 10 colours to copy a Picasso, and then gave you 1,000 colours to paint the same painting, which one would be better in terms of definition, colour, depth and so on? We have the same situation with computer screens, scanners and printers: the higher the resolution, the clearer and better defined the images on your computer, the better the quality of the scanned picture, or the better the resolution of the print. As you can see from Fig2 below, the lowest bit resolution shown is 1 and the highest is 4, and the shape produced at the highest bit resolution is the closest to the shape of the audio signal above it. So the higher the bit resolution, the better the representation. However, remember that because we are dealing with digital processing and not a continuous signal, there will always be steps in our signal in the digital domain.

Fig2

Now let's look at the signal to noise ratio (S/N). This is the level difference between the signal and the noise floor. The best way to describe this is with an example that always works for me. Imagine you are singing with just a drummer. You are the signal and the drummer is the noise (ha ha). The louder you sing, or the quieter the drummer plays, the greater the signal to noise ratio. This is actually very important in all areas of sound technology and music, and it is also very relevant when we talk about bit resolution and dynamic range. Imagine using 24 bits. That would allow a dynamic range of 144dB. Bearing in mind that we have a (theoretical) hearing limit of 120dB, the audio signal would be so much greater than the noise floor that it would be almost noiseless.

A good little example is when people resample their 16 bit drums at 8 bits: the drums become dirty and grungy. This is why the Emu SP1200, the drum sampler beatbox that gave us fat and dirty drum sounds, is still so highly prized. Lovely.
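Re-quantising to fewer bits is easy to sketch; here is a minimal bit-crusher in Python, assuming the audio is a float signal in the -1 to 1 range (12 bits is used as an SP1200-flavoured example):

```python
import numpy as np

def bitcrush(signal, bits):
    # requantise a -1..1 float signal to 2**bits discrete levels
    levels = 2 ** (bits - 1)
    return np.round(signal * levels) / levels

# crush a clean drum loop down to 12 bits for that gritty flavour:
# gritty_drums = bitcrush(drums, bits=12)
```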

Now, let’s go back to sample rates. I dropped in a nice little theorem by Nyquist to cheer you up. I know, I know, I was a bit cold there but it is a tad relevant.

If the sampling rate does not conform to the Nyquist rule relative to the frequencies we are trying to record, then we lose some of the cycles in the quantisation process we mentioned earlier. Whereas this quantisation relates to the input voltage of the analogue waveform, for the sake of simplicity it is important to bear in mind its relationship with bits and bit resolution. Remember that the ADC needs to quantise to 256 levels in an 8 bit system. These quantisations show up as steps, the jagged shape you get on the waveform. This creates noise, or aliases. The process, or cock-up, is called aliasing. Check Fig3.

Fig3

To be honest, that is a very scant figure, but what it shows is that analogue to digital conversion, when not following the Nyquist rule, leaves us with added noise or distortion, because cycles are omitted from the conversion and the result is a waveform that doesn't look much like the original waveform being recorded.

To be even more honest, even at high sample rates the processed signal will still be in steps, as we discussed earlier regarding quantisation and the way the digital process converts analogue to digital.

So how do we get past this problem of aliasing? Easy. We use anti-aliasing filters. In Fig1, you see that there are two filters, one before the ADC and one after the DAC. Without going back into the Nyquist dude's issues, just accept that a great deal of high-frequency content, in the way of harmonics or aliases, comes with the sample rate processing, so we run a low pass filter that only lets the lower frequencies through and gets rid of the higher frequencies (above our hearing range) that came in with the signal. The filter is also anti-aliasing, so it smooths out the signal.

What is obvious is that if we use lower sampling rates, then we need a filter with a steep slope (aggressive). So, it makes sense to use higher sampling rates to reduce the steepness of the filter. Most manufacturers use an even higher sample rate at the output stage so that the filter does not need to be so aggressive (please refer to upsampling further on in this tutorial). The other process that takes place is called interpolation. This is an error correction circuit that guesses the value of a missing bit by using the data that came before and after it. A bit crude. The output stage has now been improved with better DACs that are oversampling, plus a low order analogue filter just after the DAC at the output stage. The DAC incorporates the use of a low pass filter (anti-imaging filter) at the output stage.

Now let's have a look at an aggressive form of alias called foldover. Using Nyquist again: a sampling rate of 44.1kHz can reproduce frequencies up to 22.05kHz (half). If lower sampling rates are used that do not conform to the Nyquist rule, then we get more extreme forms of alias. Let us put that in simple terms, take a lower sampling rate and, for the sake of this argument, halve the usual 44.1kHz. So, we have a sampling rate of 22.05kHz. We know, using Nyquist, that your sampler or sound card cannot cleanly sample frequencies above half of that, 11.025kHz. Without the filter we have already discussed, the sampler or sound card would still try to record those higher frequencies (above 11.025kHz), and the result would be terrible, as the recorded frequencies would be markedly different from the frequencies you were trying to record.
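The folded frequency is easy to predict: anything above Nyquist reflects back around it. A small sketch using the 22.05kHz example above:

```python
def alias_frequency(f_in, fs):
    # frequencies above Nyquist (fs/2) fold back down around it
    nyquist = fs / 2
    f = f_in % fs
    return f if f <= nyquist else fs - f

# a 15kHz tone sampled at 22.05kHz (Nyquist = 11.025kHz):
print(alias_frequency(15000, 22050))  # 7050.0 -> heard as a 7.05kHz tone
```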

So, to solve this extreme form of alias, manufacturers decided to use a brick wall filter. This is a very severe form of low pass filter which, as the name suggests, only allows frequencies up to a set point through; the rest it completely omits. It compensates for this aggressive filtering by boosting the tail-end of the frequency range, as set by the manufacturer, so that it can completely remove the higher frequencies.

However, we have now come to a new, improved approach at the DAC stage, called upsampling.

An upsampling digital filter is simply an oversampled digital reconstruction filter with a slow roll-off rate. Nowadays, DAC manufacturers claim that these DACs improve the quality of sound, and when they are used instead of the brick wall filters the claim is genuine. Basically, at the DAC stage the output is oversampled, usually 8 times. This creates higher frequencies than we had at the ADC stage, so to compensate for and remove these very high frequencies, a low order analogue filter is added after the DAC, just before the output. So we could have an anti-aliasing filter at the input stage and an upsampling DAC with a low order analogue filter at the output stage. This technology is predominantly used in CD players and, of course, sound cards, and any device that incorporates DACs. I really don't want to get into this topic too much as it really will ruin your day. At any rate, we will come back to this, and the above, at a later date when we examine digital audio in more detail. All I am trying to achieve in this introduction is to show you the process that takes place to convert an analogue signal into digital information and back to analogue at the output (so we can hear it: playback), and the components and processes used.

The clock. Digital audio devices have clocks that set the timing of the signals: a series of pulses running at the sampling rate. Right now you don't need to worry too much about this, as we will come to it later. Clocks can have a definite impact in the digital domain, but they are more to do with syncing than with the actual digital processes we are talking about in terms of sampling. They influence certain aspects of the process but are not critical in the context of this introduction. So we will tackle the debate on clocks later, when it will become more apparent how important the role of a good quality clock is in the digital domain.

Dither

Dither is used when you need to reduce the number of bits. The best and most common example is dithering down from 24 bits to 16 bits, or from 16 bits down to 8, etc. A very basic explanation is that when we dither, we add a small amount of random noise to the waveform before reducing the bit depth. We talked about quantisation earlier in this tutorial: when we truncate the bits (lowering the bit resolution), we cut off the least significant bits, and we are always left with the stepped waveforms of the digital process. By adding noise, we end up with a more evenly flowing waveform instead of the stepped one. It sounds crazy, but the noise we add means that the quantisation errors are smoothed into a gentle, steady noise floor rather than gritty distortion. This waveform, with the noise, is then filtered at the output stage, as outlined earlier. I could go into this in a much deeper context using graphs and diagrams, talking about probability density functions (PDF), resultant square waves and the bias of quantisation towards one bit over another, but you don't need to know that now. What you do need to know is that dither is used when lowering the bit resolution, and that it is an algorithmic process, ie one using a predetermined set of mathematical formulas.
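Here is a minimal sketch of the idea using TPDF (triangular probability density function) dither, one of the common flavours: add two uniform random values roughly one LSB wide, then requantise. The float-in, float-out framing is an assumption for simplicity:

```python
import numpy as np

def dither_and_requantise(signal, bits):
    levels = 2 ** (bits - 1)
    lsb = 1.0 / levels
    # TPDF dither: the sum of two uniform random values, about 1 LSB wide
    tpdf = (np.random.uniform(-0.5, 0.5, len(signal)) +
            np.random.uniform(-0.5, 0.5, len(signal))) * lsb
    return np.round((signal + tpdf) * levels) / levels

# e.g. dithering a 24-bit-style float mix down to 16 bits:
# dithered_mix = dither_and_requantise(mix, bits=16)
```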

Jitter

Jitter is the timing variation in the sample rate clock of the digital process. It would be wonderful to believe that a sample rate of 44.1kHz is an exact science, whereby the process samples at exactly 44,100 cycles per second. Unfortunately, this isn't always the case. The timing of the process falters and varies, and we get a 'wobbling' of the clock as it tries to keep up at these frequencies. This is called jitter. Jitter can cause all sorts of problems, and the simplest way to think of it is: the lower the jitter, the better the audio representation. This is sometimes why we use better external clocks and slave our sound cards to them, to eradicate or diminish jitter and its effects. I will not go into a deep explanation here as, again, we will come to it later in these tutorials.
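To get a feel for the numbers, here is a small simulation sketch: sample a 10kHz sine at 44.1kHz with a hypothetical 1 nanosecond of random clock jitter and measure the error it introduces:

```python
import numpy as np

fs = 44100
f_tone = 10000                       # a 10kHz test tone
t_ideal = np.arange(fs) / fs         # one second of perfect sample instants
jitter = np.random.randn(fs) * 1e-9  # hypothetical 1ns RMS timing error

def tone(t):
    return np.sin(2 * np.pi * f_tone * t)

error = tone(t_ideal + jitter) - tone(t_ideal)
snr_db = 10 * np.log10(np.mean(tone(t_ideal) ** 2) / np.mean(error ** 2))
print(f"SNR with 1ns of jitter on a 10kHz tone: {snr_db:.0f}dB")
```

The higher the tone and the worse the jitter, the lower that figure gets, which is why jitter matters most on high-frequency content.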

So, to conclude:

For us to sample, we need to take an analogue signal (the audio being sampled), filter it and convert it into digital information, process it, then convert it back into analogue, filter it again and output it.

Relevant content:

Jitter in Digital Systems

Dither – What is it and how does it work?