The Impact Of Effects

How To Use EQ, Compression, And Reverb With Drums


Musicians have always searched for fresh and exciting sounds. Back in the 1920s, trumpeters would put a derby hat over the bell and move it in and out to create a wah-wah effect. A few years later, Don Leslie’s rotating speaker added a shimmering stereo chorus to liven up the rather stark tones of the Hammond organ. Guitarists in the 1960s sometimes sliced their speaker cones with razor blades to produce grinding, buzzing distortion.

In the age of computers and plug-in software, adding spice to your tracks is easier than ever before, thanks to the rich palette of effects found in even a basic recorder. Effects have become absolutely essential in today’s recordings. If you’re producing your own tracks, you can’t afford not to know how to use effects.

In this article we’ll look at the three effects that are most often applied to drum sounds: equalization (EQ), compression, and reverb. They can be used to produce big, obvious changes in the sound, but often the best way to use effects is to massage the sound in such a subtle way that your listeners think they’re hearing something perfectly natural. We’ll look at what the effects are, how to send signals to them, and how to use them for musical purposes. The effects we’ll discuss are available in hardware units, but because computer-based productions are so pervasive, we’ll focus on effects that arrive in the studio as software.

Platforms & Plug-ins

Chances are you’ll be recording your drums into a multitrack program called a digital audio workstation (DAW). Most DAWs come with a good set of basic effects. These may supply everything you’ll need.

When you want to go further, however, you can buy third-party effects software in the form of plug-ins. Plug-ins can provide higher-quality sound than the DAW’s included effects, or more exotic types of sounds. When a plug-in is correctly installed on your computer, it will show up in the DAW’s menus just like the DAW’s own effects. Some software synthesizers do double duty as effects. If you’re on a budget, don’t overlook this option.

Plug-ins come in various formats. Before buying this type of software, check to make sure it comes in a format that’s compatible with your DAW. On Windows computers, VST and DirectX formats are commonly used; on the Mac, AU and VST are the main choices. Pro Tools users need the RTAS format, which is available for both Mac and Windows, or the TDM format, which runs on high-end Pro Tools hardware systems.

Many plug-ins have an installer that will let you choose any of the above formats (except TDM; TDM effects tend to be expensive, and are sold separately). So the choice of plug-in formats is not a huge issue — it’s just something to keep an eye on. A few effects, especially those that you can download for free, may be Mac-only or Windows-only.

All of the effects discussed in this article can be used in real time. That is, the track will play back through the effect. You’ll be able to adjust the parameters of the effect and hear your changes instantly as part of the mix. Some DAWs also offer non-real-time effects. These operate by editing the data in the track, which is contained in an audio file on your hard drive. Before using a non-real-time effect, be sure to create a backup copy of the original, unedited audio file, in case you need it later.


Fig. 1 An audio channel in Steinberg Cubase 4 has a big equalizer section (center) with four parametric EQ bands. A stack of up to eight insert effects can be set up in the left column — here I’m about to add a compressor. The aux sends from this channel are in the right column. The ModMachine is a Cubase delay line. In this window I’ve set the send level to the ModMachine at -19.17dB by dragging the light blue line just below the send. This line is actually a fader. The green lines in the EQ bands are also faders, and the EQ curve can be edited graphically by using the mouse to drag the dots in the graph.


Inserts & Sends

Once you’ve decided to use a real-time effect on a track, you need to “instantiate” it. (“Instantiate” is computer jargon for “create an instance of.”) The DAW owner’s manual will tell you how to do this; most DAWs let you select the effect from a menu (Fig. 1).

There are two basic ways to use effects: as inserts and as aux sends. An insert effect is applied to a single mixer channel. The word “insert” means that the effect is inserted directly in the mixer channel’s signal path, so that the sound passing through that channel passes through the effect. If you add several insert effects to a single channel, the signal will pass through one and then the next in a linear way (this is called a “series” routing).

Effects almost always have wet/dry controls. The dry signal is the signal entering the effect, and the wet signal is the sound that emerges from the effect itself. By adjusting the wet/dry mix, you can control how the insert effect is balanced with the dry signal.
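Under the hood, the wet/dry balance is nothing more than a crossfade between the two signals. Here’s a minimal sketch in Python (the function name and the per-sample approach are my own illustration, not any particular plug-in’s code):

```python
def wet_dry_mix(dry, wet, mix):
    """Blend a dry sample with its processed (wet) version.
    mix = 0.0 is all dry, 1.0 is all wet, 0.5 is an even blend."""
    return dry * (1.0 - mix) + wet * mix

# An even blend of a dry sample (0.8) and its effected version (0.2):
print(wet_dry_mix(0.8, 0.2, 0.5))  # → 0.5
```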

An aux send effect is typically used to process the signals from several channels at once. Each channel will have a few sends — outputs that route the signal to an auxiliary (aux) bus. The effect is inserted on the aux bus, and processes whatever signals are sent to that bus from various channel sends. Each send is in the form of a level control, so you can send each sound to the aux effect at a low level or a high level.

By sending a signal from one channel to several aux busses, you create a parallel routing, in which the same dry signal is processed by several effects. The outputs of effects in aux busses are usually set to 100-percent wet because the output of the original mixer channel (the one whose sends you’re using) provides the dry signal.

Most often, you’ll use insert effects for tone-altering processes like EQ and distortion, and for dynamics-altering processes like compression. There’s generally no reason to apply these to several channels at once. Send effects are more commonly used for reverb and delay lines. For example, you may want to send several instruments through the same reverb to create the impression that they’re all playing in the same concert hall.

Why not just insert an identical reverb on each channel? Because that would make editing more difficult: if you needed a bigger reverb sound on the mix, you might have to edit the reverbs on as many as a dozen channels. But the main reason is that reverb in particular can use a significant amount of your computer’s CPU power. You’ll be able to play more tracks and use more software instruments and effects if you use only a single reverb where it’s practical to do so.

Effects can also be inserted on the master output bus of the mixer. The effects used most often on the master bus are EQ, which shapes the overall frequency contour of the mix, and compression or limiting, to ensure that the mix is as loud as possible without overloading.


Fig. 2 The graphic equalizer in Image-Line FL Studio 7 has 31 narrow frequency bands. The cut/boost is individually adjustable for each band. The frequencies of the bands are shown in the row below the band sliders. This particular EQ can morph among up to eight different presets, so you can do complex frequency-based modulation.


Equalizers & Filters

Equalizers are used to boost or cut the levels of signals within specific frequency ranges. If a snare doesn’t have enough snap, for instance, you can use an equalizer to boost the highs or high mids (see the Frequency Ranges sidebar). Bear in mind, though, that the EQ will boost those same frequencies for everything else in that mixer channel. One reason for using multiple mikes on a kit is so you can EQ the most important drums individually.

A typical EQ effect has several bands. Each band may be locked into a separate frequency range. For instance, you may have a three-band EQ whose bands are dedicated to the lows, the mids, and the highs. Or you may have a multi-band EQ in which you’re free to choose the frequency range and bandwidth for each band (as in Fig. 1). This allows you to shape the sound with more precision. This type of EQ is called “parametric,” because it has three parameters for each band.

Each band of an equalizer has its own cut/boost control. You may also find a frequency control, which governs the center frequency of the band (the frequency where the cut/boost has the greatest effect), and a bandwidth control, which sets how wide a range of frequencies around the center is affected.

A “graphic” EQ (Fig. 2) has more bands than a parametric EQ, but the only parameter you get to adjust is the cut/boost for each band. The bank of cut/boost sliders in a graphic EQ gives a rough idea of the frequency response curve of the effect; it’s called a “graphic” EQ because the sliders resemble a graph.

A filter is like an equalizer in that it’s designed to boost and/or cut the partials in certain frequency ranges. But the design is different. Filters often emulate the designs found in synthesizers. (See the section on filters in my “Synth Basics” article in the March 2007 issue of DRUM!) A filter effect will usually have a built-in envelope follower. This tracks the loudness of the incoming signal (its natural amplitude envelope). The output of the envelope follower is used to raise or lower the filter’s cutoff frequency. Filter effects with envelope followers can be useful with drum sounds because of the sound’s sharp attack and quick decay, but you may get the best results if one drum is well isolated from the rest of the kit.
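To make the idea concrete, here’s a toy envelope follower in Python. It’s my own simplified illustration, not any particular plug-in’s algorithm: rectify the signal, then let the result jump up instantly and fall back gradually.

```python
def envelope_follower(samples, release=0.99):
    """Track the loudness contour of a signal.
    release: per-sample decay factor (closer to 1.0 = slower release)."""
    env = 0.0
    contour = []
    for s in samples:
        level = abs(s)         # full-wave rectification
        if level > env:
            env = level        # instant attack: jump to the new peak
        else:
            env *= release     # gradual release: decay toward zero
        contour.append(env)
    return contour

# A drum-like hit: a sharp attack followed by silence. The envelope
# leaps to the peak, then decays smoothly instead of tracing the
# individual waveform wiggles.
hit = [0.9, -0.7, 0.5, 0.0, 0.0, 0.0]
print([round(e, 3) for e in envelope_follower(hit)])
```

In a filter effect, a contour like this would be scaled and added to the cutoff frequency, opening the filter on each hit.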


Fig. 3 The compressor in Ableton Live 6 has a threshold slider (left). Lowering the threshold will cause the signal to be compressed more. Raising the Ratio control, which tells the compressor how much you want to squash the signal when it exceeds the threshold, also increases the compression. The attack and release knobs control how quickly the compressor cuts in when the signal pops up above the threshold, and how quickly it lets go when the signal drops back. Make-up gain is controlled by the Out slider (right). The G.R. indicator is strictly a meter, not an adjustable parameter. It shows how much gain reduction is being applied from moment to moment.

Compressors & Limiters

Compressors and limiters control the dynamic level of a track. They do this automatically, by lowering the level of the loudest peaks without touching anything else. When the loudest peaks are tamed, the level of the whole track can be raised (using the comp/limiter’s “make-up gain” control). The track then sounds louder than it did before, even though its loudest peaks are no higher than they were.

A limiter has a hard “ceiling.” The output signal is never allowed to get any higher than the ceiling, no matter how loud the input is. A compressor operates in a more gentle way: As the input gets louder, the output also gets louder, but the increase of the output isn’t as steep as the increase of the input.

Limiters are useful with digital audio because it’s important to keep the overall level of a recording from clipping. Clipping distortion, which is nasty, happens when the signal is too loud for the digital system to handle.

Compressors and limiters have a “threshold” parameter (Fig. 3). This is the point in the loudness curve at which the comp/limiter starts to do its job. Signals that are quieter than the threshold level pass through the effect without being altered in any way. But when a signal rises above the threshold, the comp/limiter starts to work. When the threshold is set fairly high, most of the track will pass through, with only the loudest peaks being affected. When the threshold is low, the comp/limiter will be “riding” the output most of the time. With a low threshold, the make-up gain control will bring up the noise floor of the recording. This can be useful with more in-your-face types of music, as it can make a sterile drum booth sound a bit trashy.

Compressors typically have three more controls: ratio, attack, and release. The ratio knob sets the amount of compression. When it’s set to a ratio of 2:1, for instance, as the input signal rises 2dB above the threshold, the output signal will rise only 1dB above. Gentle compression might use a ratio of only 1.4:1, while a ratio of 10:1 or more will squash the peaks ruthlessly.
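The ratio arithmetic is easy to sketch in code. This is a hard-knee simplification (many real compressors have a “knee” that rounds off the transition), with levels in dB and names of my own invention:

```python
def compressed_level(input_db, threshold_db, ratio):
    """Output level in dB for a simple hard-knee compressor.
    Below the threshold, the signal passes through unchanged; above
    it, every `ratio` dB of input rise yields only 1dB of output rise."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# A peak at -8dB, 2dB above a -10dB threshold, at a 2:1 ratio,
# comes out at -9dB: only 1dB above the threshold.
print(compressed_level(-8.0, -10.0, 2.0))  # → -9.0
```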

The attack knob determines how fast the compressor cuts in when the signal rises past the threshold. Attack is usually measured in milliseconds (ms). When the attack is less than 1ms, the signal will be compressed very quickly. A longer attack will allow part of the attack transient of a sound to pass through the compressor before the compression kicks in. This can be useful with drum sounds, because it allows the snap of the stick to be heard more clearly.

The release knob controls how fast the compressor “lets go” when the signal falls back below the threshold. Typical release settings are in the 10–100ms range. A release that’s too quick can cause the compressor’s output to pump or flutter in an unnatural way, while a release that’s too slow can squash a note that follows a peak. For best results, use your ears.

Some engineers put compression or limiting on the output of the DAW’s mixer. This allows them to mix the whole song louder. If you have time, you may find that you get more musical results by compressing and/or adjusting the levels of certain tracks manually.

A multiband compressor splits the incoming signal into a number of separate frequency bands. Each band is compressed separately. Multiband compressors are often used to balance the frequency range of an entire mix.


Fig. 4 Wizooverb 2 (available from M-Audio) is a convolution reverb. The miniature browser on the right side of the central display area is used for loading impulse files. An impulse is a recording made in an actual acoustic space (typically by firing a starter pistol and recording the echoes). To create reverberation, the impulse file is “convolved” with the input signal from your track using a mathematical process. The other parameters of Wizooverb 2 (Wet/Dry mix, Predelay, and so on) are fairly standard.



Reverb

The purpose of reverb (reverberation) is to make studio recordings sound as if they were recorded in live acoustic spaces, such as a recital hall, gymnasium, or tiled bathroom. In the glory days of analog recording, the “reverb” was often a cement-lined room in the basement of the recording studio, with a speaker at one end to fill the room with sound and a microphone at the other end to pick up the echoes. Today’s digital reverbs are quite a bit more convenient, and when adjusted carefully they can sound very good indeed.

Digital reverbs come in two basic flavors: standard and convolution. Convolution reverbs (Fig. 4) are typically more expensive and put more demands on your computer’s CPU because more mathematical computation is required. For many musical purposes, a standard reverb will work just as well.
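If you’re curious what “convolving” actually means, here’s a toy version in Python. This is only an illustration of the math; a real impulse response lasts seconds, which at 44,100 samples per second is exactly why convolution reverbs are so CPU-hungry. Every input sample triggers a scaled copy of the entire impulse, and all the overlapping copies are summed:

```python
def convolve(signal, impulse):
    """Direct convolution: each input sample launches a scaled copy
    of the impulse response, and the overlapping copies are summed."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

dry = [1.0, 0.0, 0.0, 0.0]        # a single click
room = [1.0, 0.5, 0.25, 0.125]    # a tiny, made-up impulse response
print(convolve(dry, room))        # the click takes on the room's decay
```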

When a sound is played in an actual room, it bounces off of the walls and other nearby surfaces. The first few bounces can sometimes be heard as separate echoes, especially if the room is large and the walls are concrete. These separate echoes are created in a digital reverb as “early reflections.” You’ll probably be able to control the level of the early reflections as a group. The later reflections blend together in a continuous wash of sound, which is called the decay or the tail.

Most digital reverbs let you set the decay time, which is the amount of time it takes the tail to die out. You may also be able to control the density or diffusion of the tail. A low density or diffusion setting will cause the reverb to sound rather artificial and metallic, which may be exactly what you want. Higher density and diffusion settings sound more natural. You may also find a “room size” parameter that’s separate from the diffusion (Fig. 5).

Sound travels at roughly one foot per millisecond. In an acoustic space, if your drums are 50 feet from the nearest wall, the first echo has to travel 100 feet (to the wall and back), so you won’t hear any reverb at all for 100 milliseconds (1/10 second). To simulate larger spaces, reverbs have a parameter called “predelay.” Increasing the predelay gives a more cavernous effect, and reducing it makes the simulated room seem smaller.
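That round-trip arithmetic is simple enough to spell out. (The one-foot-per-millisecond figure is a rough approximation, and the function name is my own.)

```python
def predelay_ms(distance_to_wall_ft):
    """Rough predelay estimate: sound travels about 1 foot per
    millisecond, and the first reflection goes to the wall and back."""
    return 2 * distance_to_wall_ft

# Drums 50 feet from the nearest wall: no reverb for about 100ms.
print(predelay_ms(50))  # → 100
```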


Fig. 5 The reverb included with Ableton Live 6 has some basic controls (Predelay, Size, Decay Time, Density, Diffusion, and Dry/Wet balance) and also a few that are less common. The Spin section adds animation to the early reflections, while the Chorus warms up the sound.

Export Final Mix

This has been a short-and-sweet overview of effects for recording novices. Keep an eye on future installments of Plugged In. I’ll be covering a few other types of effects in shorter articles.


Frequency Ranges

Frequency is measured in Hertz (Hz). The range of human hearing extends from roughly 20Hz to 20,000Hz. Large Hertz values are usually stated in kilohertz (kHz). One Hertz equals one cycle per second.

All sounds in nature contain partials (overtones) at various frequencies. If a sound has partials at a given frequency, an equalizer can cut or boost them without affecting the rest of the signal. But if the sound has no partials in a certain frequency range, adjusting that range with the equalizer will do nothing.

The amount of boost or cut is defined in decibels (dB). Decibels are usually referenced to a 0dB level; 0dB means “no change,” not “no sound.” If we cut a certain frequency range by 6dB, we say that it’s set to -6dB. As a rule of thumb, 1dB is about the smallest amount of change that an average listener can perceive.

When talking about EQ, producers and engineers often use somewhat vague terms rather than referring to frequency ranges by the numbers. The highs in a track are generally the frequencies above 8kHz, except that on a bass track the “highs” might be as low as 2kHz. These terms are relative. The high mids (high midrange frequencies) are in the 2kHz–6kHz range, the low mids in the 800Hz–2kHz range, and the lows are whatever is going on at the bottom. Terms such as “nasal,” “boomy,” and “hollow” are often used to describe EQ contours; their meanings are fairly obvious. As you spend time listening to tracks that have been EQ’ed, or need to be, you’ll become more sensitive to what various frequency ranges sound like.
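For readers who like the math behind the decibel scale: a change of N dB multiplies the signal’s amplitude by 10^(N/20). A quick Python sketch of the standard conversion (not tied to any product):

```python
import math

def db_to_gain(db):
    """Convert a dB change to an amplitude multiplier."""
    return 10 ** (db / 20.0)

def gain_to_db(gain):
    """Convert an amplitude multiplier to a dB change."""
    return 20.0 * math.log10(gain)

print(round(db_to_gain(-6.0), 3))  # → 0.501, a 6dB cut roughly halves the amplitude
print(round(gain_to_db(2.0), 2))   # → 6.02, doubling the amplitude adds about 6dB
```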