Monitor Level Calibration: Engineering Secrets

Have you ever worked on mixes for hours only to be surprised when you play them in your car or on a friend’s speakers? Have you ever turned a mix up as loud as you can but it still doesn’t compete with professional CDs? Have you ever listened to your songs only to be annoyed that they’re all different volumes? Does it take you more than four hours to mix a song? If you answered yes to one or more of these questions, you’re not alone. There is a simple explanation for the problems you are experiencing: It has to do with your monitor calibration.

What I am about to reveal is a secret technique that separates the professionals from the hobbyists. It involves setting up your playback system in a specific manner. This practice is rarely mentioned on Internet forums or conference panels. Why? Because this way of working is second nature to many engineers, especially those of us who grew up on consoles and analog tape machines. But for those raised on digital audio workstations (DAWs), this is a fact of life that no one has bothered to discuss.

The reason mix engineers must calibrate their playback systems stems from the nature of human hearing. The following will explain this phenomenon, examine how contemporary monitoring got so messed up, provide some steps you can follow to fix your setup, and delineate the advantages of this approach. Other than the investment of your time (and some of this material may require a double-take to absorb), the solution relies on free or low-cost technology at a basic level.

Not All Sounds Are Created Equal (At Least To Humans)

Studies have shown that human hearing is not linear: we do not perceive all frequencies at equal loudness. Midrange signals in the 2-4kHz range are the most noticeable, while frequencies above and below that range sound quieter at the same sound pressure level (now you know why the telephone sounds the way it does – those upper-midrange frequencies are most discernible to humans). Scientists reading that statement automatically grasp its meaning and implications. But if you’re scratching your head wondering “what?” it probably means you’re normal! It’s easier to explain with examples.

When listening to music at soft levels, humans are not as sensitive to bass frequencies as to high frequencies. (Ever notice those bass-boost buttons on music players, or loudness buttons on old-school stereo receivers? Those circuits raise the bass levels for low-volume listening. Now you know why they were added to so many playback devices.) Conversely, at louder levels, we are more sensitive to bass frequencies and less attuned to the high end. Back in 1933, researchers Harvey Fletcher and Wilden A. Munson conducted hearing tests on humans and graphed the results to show sensitivity to frequency by playback level. Their names got attached to the curves they published, and to this day audio engineers refer to the Fletcher-Munson curves when discussing this phenomenon. Technically, the correct term is “equal-loudness contour,” but the core idea is the same.
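If you would like to see this non-linearity in numbers, here is a minimal Python sketch. It uses the A-weighting formula from IEC 61672 – a standardized curve in the same family as the equal-loudness contours, not the Fletcher-Munson data itself – to show how far each frequency’s perceived level falls from a 1kHz reference:

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting gain in dB at frequency f (Hz), per IEC 61672."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # +2.0 normalizes 1 kHz to ~0 dB

for freq in (20, 100, 1000, 3000, 10000):
    print(f"{freq:>6} Hz: {a_weighting_db(freq):+6.1f} dB")
# 20 Hz comes out around -50 dB and 100 Hz around -19 dB: at modest
# levels, the ear needs far more SPL down low to hear an "equal" tone.
# 3 kHz lands slightly positive, right in the most sensitive region.
```

The exact equal-loudness contours change shape with playback level (that is the whole point of Fig. 1, below), but even this single fixed curve makes the ear’s midrange bias obvious.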

Okay, you’re probably thinking, but what does this have to do with my recording studio? As it turns out, there is a loudness “sweet spot” where most professional mix engineers work. When listening in this range, engineers are able to make decisions about the balance of various mix elements that hold up across many playback environments and at varied volumes. As an added bonus, this “sweet spot” sits below the fatigue threshold, allowing you to work long days without fear of long-term hearing damage. (Fig. 1, below.)


Fig. 1 This graph depicts the SPL required for humans to perceive a tone at a given frequency. For example, the lowest red curve shows that a 20Hz bass signal would have to reach an SPL of 60dB for a human to hear it, while a 10kHz signal would only need to reach an SPL of 10dB to be perceived.

Where Did We Go Wrong?

Computer-based audio workstations have given millions access to recording technology that was previously kept in the hands of a few. But that doesn’t mean everyone who buys a DAW automatically knows how to use it. As Steven Slate (maker of Trigger and Steven Slate Drums) notes, “As recording applications become more accessible and more and more people start to record audio, I’d like to make sure that they are aware of how to use their tools in a way that will ensure that they can make music in a professional way.”

When people come to me with these problems, the setup is invariably similar: the person purchases a digital interface, connects their monitor speakers to the main outputs (L & R) on the back of the box, and uses the master fader in the DAW to control playback volume. A variation: the person turns the volume knob on the interface down too low, or simply picks a level that sounds loud enough depending on their mood. Working this way has two major pitfalls. First, there is no standardized playback calibration, meaning all the benefits of mix translation and gain staging are forfeited. But perhaps worse, with no standard level, the engineer will resort to pushing up faders to make the song loud relative to that arbitrary volume setting. This usually distorts the DAW’s output; hides clicks, pops, and bad crossfades; and provides a false sense of how much compression is happening. Remember: the faders exist to blend the audio elements relative to one another, and to tell the software how to sum the overall signal. They were never designed to serve as a playback level control for your monitor speakers. Working this way is like hammering a screw into a wall – yes, the hammer will get the job done, but it’s the wrong tool, the results won’t hold up, and you’ll probably have to start over if you want to do it right.
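To put that second pitfall in numbers, here is a minimal Python/NumPy sketch. The signal, the levels, and the +6dB fader push are all hypothetical, chosen only to show the mechanism: chasing loudness with a fader drives the mix bus past full scale and clips, while a calibrated monitor control downstream of the converter would leave the audio untouched.

```python
import numpy as np

# Hypothetical two-element "mix": a 110Hz bass tone plus a 440Hz tone.
sr = 48000
t = np.arange(sr) / sr
mix = 0.5 * np.sin(2 * np.pi * 110 * t) + 0.4 * np.sin(2 * np.pi * 440 * t)

def peak_dbfs(x):
    """Peak level relative to digital full scale (1.0 = 0 dBFS)."""
    return 20 * np.log10(np.max(np.abs(x)))

# The pitfall: "make it louder" with the master fader (+6 dB of gain);
# anything past full scale hard-clips at the converter.
pushed = mix * 10 ** (6 / 20)
clipped = np.clip(pushed, -1.0, 1.0)

print(f"original peak:    {peak_dbfs(mix):+.1f} dBFS")      # roughly -4 dBFS
print(f"after +6dB push:  {peak_dbfs(clipped):+.1f} dBFS")  # pinned at 0 dBFS
print(f"samples clipped:  {np.sum(np.abs(pushed) >= 1.0)}") # distortion baked in

# The calibrated alternative: leave the mix bus alone and raise the
# monitor gain downstream, after the D/A converter. Nothing clips.
```

The point is not the specific numbers; it is that the distortion happens inside the mix itself, where no monitor knob can undo it.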
