The Sounds Around
This may seem like a step back from the big picture mix mentality, but this is actually part of it. I want a big picture, but within any big picture are things that probably need definition, clarity, and focus. After all, I’m not typically trying to paint solid colors. A mix can be like a great photo; some things are in sharp focus while others are not, but regardless of focus everything in the shot contributes to the picture.
So here’s something a lot of guys miss because they get hung up in this macro-fix-it-all-in-solo mode: The way things sound in the mix is often influenced by the other things in the mix. The sounds around the thing(s) you’re trying to hear actually contribute to the way that thing sounds. This is especially apparent when things overlap in frequency content.
For example, let’s say your vocals sound muddy and/or harsh. The vocals might not actually be muddy. The way the vocals and the keyboards and/or the bass and/or the guitar(s) are interacting might actually be revealing itself as the vocals sounding muddy. Similarly, a harsh vocal might not actually be a harsh vocal. Harsh vocals might actually result from a harsh sounding instrument like a guitar.
Sometimes the answer to making one thing sound right is to change something else. And sometimes making something sound amazing means making everything else less than amazing.
If you go back to The Most Important Thing article I wrote earlier this year, you know I often prefer to start my mixes in a non-traditional way with the stuff I consider most important. I do this because then I can ensure it sounds the way I want it before I add the other stuff in.
Let’s say I’ve got my drums and vocals up sounding the way I want them. If that vocal loses intelligibility or gets murky or ugly when I bring in the guitars, that’s not a problem with the vocal. Remember, I had my vocal the way I wanted it. My problem is the guitars. So what do you think I should adjust to get my vocal back? The guitars or the vocal?
This brings up an important point on the frequency spectrum in mixing. I think some engineers make a little too liberal use of panning to move things off of each other and avoid dealing with this. However, the ability to do this in live sound depends on the coverage of the PA. We don’t usually get a free pass, which makes managing the frequency spectrum extremely important in live sound.
So if this is a new concept for you or something you struggle with, here’s a little drill you can do using a DAW or your console and an EQ; if you use a console this will be easiest with a digital console that can store EQ presets.
Grab an EQ with a high pass filter and a low pass filter; you will need both filters, and they can’t have any limits on range. Try to stay away from gentle filters if possible. You don’t need steep slopes, but you should probably use at least a 12 or 18 dB per octave filter. I typically use a 2-band version of the Waves Q10 plugin when I’m demonstrating this.
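As a rough rule of thumb, a filter’s slope tells you how fast material disappears past the corner: a 12 dB per octave filter is about 12 dB down one octave outside its corner, 24 dB down two octaves out, and so on. Here’s a small sketch of that arithmetic (an idealized straight-line approximation that ignores the gentle knee right around the corner frequency):

```python
import math

def hpf_attenuation_db(f_hz, corner_hz, slope_db_per_oct=12):
    """Approximate attenuation of a high-pass filter below its corner,
    using the idealized straight-line slope."""
    if f_hz >= corner_hz:
        return 0.0  # in the passband, roughly unaffected
    # one octave below the corner costs one slope's worth of dB, and so on
    return slope_db_per_oct * math.log2(corner_hz / f_hz)

# 10 Hz is two octaves below a 40 Hz corner, so a 12 dB/oct HPF
# knocks it down by about 24 dB
print(hpf_attenuation_db(10, 40))  # → 24.0
```

This is why the gentle filters don’t work well for the drill: with a shallow slope, too much of the neighboring octaves leaks through and you can’t isolate what you’re listening for.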
Set the frequencies of your two filters so they are one octave apart. For example, set the HPF to 20 Hz and the LPF to 40 Hz. Store this as a preset and name it something like Octave 1. Continue on through the rest of the frequency spectrum creating presets for every octave: 20-40, 40-80, 80-160, 160-320, and so forth until you get up to 20 kHz.
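If it helps to see the whole list at once, the octave boundaries are just repeated doubling from 20 Hz up to 20 kHz. A quick sketch (purely illustrative) that prints every preset you’d store:

```python
def octave_bands(start_hz=20.0, stop_hz=20000.0):
    """Generate (HPF, LPF) corner pairs one octave apart: 20-40, 40-80, ..."""
    bands = []
    low = start_hz
    while low < stop_hz:
        high = min(low * 2, stop_hz)  # clip the top band at 20 kHz
        bands.append((low, high))
        low = high
    return bands

for i, (hpf, lpf) in enumerate(octave_bands(), start=1):
    print(f"Octave {i}: HPF {hpf:g} Hz / LPF {lpf:g} Hz")
```

You end up with ten presets covering the audible range, with the last one running from 10,240 Hz up to the 20 kHz ceiling.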
Once you’ve completed this, take some of your favorite recordings and load them into your DAW or run them through the console. Insert your EQ on the recording’s track/channel and start listening. As you’re listening, jump through each of your Octave presets and concentrate on what you hear. Make note of the most prominent instruments you hear within each octave; you can make mental notes, but you might do better to actually write your answers down. Make sure to use multiple recordings because the idea with this kind of drill is to listen for trends. What you’ll probably find is that certain instruments tend to land more predominantly in certain octaves.
So what do you do with this information?
Next time you’re mixing and running into problems with an instrument, go back to your notes and find the octave where that instrument figured most prominently. Then look at the rest of the instruments in your mix that might have frequency content in that octave. Take those other inputs, place a wide-ish filter on them centered on that octave, and start cutting. Voila, you’re on the path to giving your instrument its own home to live within the mix. With practice and experimentation, you can learn to center those cuts more precisely on overlapping things and even start to figure out how to help multiple things live in the same octave.
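One small gotcha when centering a cut on an octave: the “middle” of an EQ band is geometric, not arithmetic, because frequency perception is logarithmic. The center of the 160–320 Hz octave isn’t 240 Hz; it’s the geometric mean, around 226 Hz. A quick sketch of the math:

```python
import math

def octave_center(low_hz, high_hz):
    """Geometric-mean center of a band -- where a centered cut would sit."""
    return math.sqrt(low_hz * high_hz)

# the 160-320 Hz octave centers near 226 Hz, not 240 Hz
print(round(octave_center(160, 320)))  # → 226
```

In practice the difference is small within a single octave, but the habit matters as your cuts get wider: a band spanning 100–400 Hz centers at 200 Hz, not 250 Hz.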
I’ll warn you, though, if you start solo’ing things back up after you start doing this you might be startled at what you’re doing to things. But remember, mixing isn’t about individual things. Mixing is about the sum of the parts. See, we’re still all about that Big Picture music thing.