Stupid Mixing “Tricks”
Everyone’s always looking for mixing tricks. The problem is most of the tricks are really just techniques that lose their trick flair once you start employing them on a regular basis. I’ve been searching for trick techniques for years, and I think I’ve reached a point where I’m not finding much that’s new anymore.
Most of the tricks I can think of off the top of my head typically involve one of four things: stacked processing, parallel processing, phase/delay, or harmonics. There’s other stuff like sample replacement and triggering things, but I don’t consider those to be tricks as much since they often involve adding things. Maybe it’s just semantics, but they don’t qualify in my mind.
Anyway, I’m going to do a little overview of these things, but I’m going to try not to get too specific in the hope that I might spark a bit of creativity in you guys to try something new with these techniques that you can then share with the class in the comments section below, and we can all steal.
First up is what I call stacked processing or just stacking. I hesitate to even mention stacking, but since I know people who roll their eyes at the idea, I’ll call this a trick.
Stacking is when you process a signal with the same type of process multiple times. One way of doing this might be to run a signal through a limiter and then into a compressor so you’re stacking compressors. Sometimes it might be running something into a delay and then into a reverb which, by the way, is really just a bunch of delays.
So why do this?
Sometimes it’s to generate a certain character or sound. Sometimes it’s to hide/disguise the processing.
For example, lots of little bits of compression added up over time can be much more transparent than a big chunk of compression. If we look at traditional recording methods of popular music, an instrument might get compressed a little while being recorded. Then it would get recorded to analog tape which would introduce varying degrees of tape compression. Next there might be a bit of compression added to the instrument during mixing. Then maybe some more compression on the overall mix which gets printed to analog tape for more, you guessed it, compression. Then maybe the mastering engineer does some compression or limiting on it. So that’s, what, up to 5 or 6 or more different stages of compression? Today’s digital gear doesn’t compress things automatically for us so some guys like to emulate this in a sense.
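To see how stacked stages add up, here's a toy sketch in Python/NumPy (my own illustration, not any particular plugin): a bare-bones static compressor with no attack or release, applied once heavily versus twice gently.

```python
import numpy as np

def compress(x, threshold_db=-10.0, ratio=2.0):
    """Bare-bones static compressor: reduce level above threshold by `ratio`.

    No attack/release smoothing -- just the gain math, for illustration.
    """
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20)

fs = 48000
x = np.sin(2 * np.pi * 100 * np.arange(4800) / fs)  # 0 dBFS test tone

one_big = compress(x, ratio=4.0)                        # one heavy 4:1 stage
stacked = compress(compress(x, ratio=2.0), ratio=2.0)   # two gentle 2:1 stages
```

On a purely static curve, two 2:1 stages at the same threshold come out identical to a single 4:1 stage. The transparency (or character) people chase with stacking comes from each stage's attack, release, and circuit behavior, which this sketch deliberately leaves out.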
This leads into parallel processing. Parallel processing is basically when we split or mult a signal into multiple copies/versions, process each new version of that signal a little differently, and then add them together. The most popular form of this is probably parallel compression, sometimes also referred to as NY style compression, although it might have really started back in the Motown days. Other takes on parallel processing involve spatial and time effects (reverbs, delays, etc.) and even EQ.
To be honest, I’ve never done parallel EQ. In my head it just doesn’t make much sense since EQ is essentially a straight amplitude thing. There’d be a little more to it, but adding a signal with a little EQ boost, in my mind, is really just going to add less of that boost overall when it adds with a clean version of the signal. Still, I do know guys who swear by it, and I’ve also heard demos of the UBK Clariphonic that really intrigued me. So what do I know?
So why do this?
Again, it’s typically to hide the processing or create a certain character/vibe thing. I like to parallel compress drums because, to me, it gives them more impact and punch. My friend Andrew Stone layers effects in a way that is a form of parallel processing; he’s even got a video up about it.
A word of caution with parallel processing and digital gear: each bit of digital processing can add a bit of delay or latency to the processed signal. When those latent processed signals recombine with the clean signal, there can be varying degrees of phase issues, which may or may not be a good thing.
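Here's a minimal parallel-compression sketch in Python/NumPy (my own toy example, not anyone's actual chain): crush a copy of the signal hard, then sum it back in with the dry signal.

```python
import numpy as np

def compress(x, threshold_db=-30.0, ratio=10.0):
    # Bare-bones static compressor (no attack/release), for illustration only.
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    gain_db = -np.maximum(level_db - threshold_db, 0.0) * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20)

fs = 48000
n = np.arange(4800)
# Toy "performance": a loud half followed by a quiet half
dry = np.where(n < 2400, 1.0, 0.05) * np.sin(2 * np.pi * 100 * n / fs)

wet = compress(dry)   # the crushed parallel copy
mix = dry + wet       # summed back with the clean signal, sample-aligned
```

The loud material barely moves while the quiet material gets lifted, which is that upward-compression feel people like on drums. Note the wet copy here is perfectly sample-aligned; plugin latency can shift it, and that shift is where the phase issues come from.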
Next up is phase/delay. This is typically used to affect a sound spatially, and it comes right out of parallel processing.
For example, some guys like to take an input and split it to multiple inputs. They’ll pan them in opposite directions and delay or flip the polarity on one side. There’s a popular trick with the Eventide Harmonizer that you can do a search on the web for that is another take on this.
So why do this?
Contrary to what many engineers think, the PAN knob is not the best way to move things in space because a PAN knob is simply a volume control that adjusts the level feeding each speaker. The average width of a human head is six inches. Do you really think there’s that much of a volume difference when the microphones built into our heads are only six inches apart?
In my opinion, the primary way our brains locate things is from the differing arrival times of a sound at each of our ears. Sure there’s more to it like spectral content AND volume, but in my experience time has the biggest effect. So by playing with the timing of a sound in different speakers, we can play mind tricks on our brains that add space, depth, and/or width. Hmmm, maybe this one really is a trick.
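A quick sketch of the timing idea in Python/NumPy (my own toy Haas-style widener, not the Eventide patch): keep one channel dry and delay the other by a few milliseconds.

```python
import numpy as np

fs = 48000
mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # one second of tone

delay = int(0.012 * fs)          # ~12 ms: short enough to fuse into one image
left = mono
right = np.concatenate([np.zeros(delay), mono[:-delay]])
stereo = np.stack([left, right], axis=1)  # wide-sounding two-channel version

# The catch: summed back to mono, the delayed copy comb-filters the signal.
mono_sum = (left + right) / 2
```

The stereo version sounds wider because each ear gets a different arrival time. The mono sum shows the price of the trick: depending on the delay, some frequencies partially cancel, which is why it pays to check these tricks in mono.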
The polarity button on the console works sometimes because our brains are easily fooled and can perceive a signal out of polarity as a different time arrival. Why is this? Look at a pure sine wave. The second half of the wave’s cycle looks just like the first half of the wave flipped out of polarity, but our brains don’t always know that.
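The sine-wave point is easy to check numerically (a tiny sketch, nothing more):

```python
import numpy as np

n = 480
cycle = np.sin(2 * np.pi * np.arange(n) / n)  # one full cycle of a sine
first_half, second_half = cycle[:n // 2], cycle[n // 2:]

# The second half of the cycle is the first half flipped in polarity,
# which is why a polarity flip can read to the ear like a half-cycle time shift.
```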
Whether you’re confused now or not, I’m moving on.
Last of the original four is harmonics, and this one is pretty straightforward. A lot of guys add harmonics to sounds. In the old days our equipment often did this for us whether we wanted it to or not, but now we use things like saturation and distortion boxes and plugins with our clean, cutting-edge digital equipment. Well, even when the gear used to do this for us, some of us still added more distortion and saturation.
So why do this?
Well, because it sounds like home to a lot of us.
Plus, adding harmonics can help things cut in the mix. It can make things sound richer and add depth to sounds. There are plenty of articles floating around where engineers talk about distorting just about everything at one point or another, but popular instruments for this are the snare drum and bass guitar and even sometimes vocals.
My unscientific take on this whole thing is the added harmonics give our ears more to latch on to. Maybe it also has something to do with the Missing Fundamental.
It also just feels cool to add harmonics to things, and I don’t mean that it sounds cool. It’s just a cool thing to do. Seriously, if you do it, it might not sound good, but you will feel cool doing it. And who doesn’t want to feel cool?
Sometimes harmonics are added via distortion or saturation which is just a family-friendly word for distortion; some church cultures probably only use the word saturation. The console emulators that are popular right now are really just adding very subtle forms of distortion, and these emulators are then typically applied liberally within a mix. I did a series on console emulators not too long ago that you can check out here.
Sometimes there might also be a harmonic generator such as the old dbx 120 subharmonic synthesizer. The Aphex Aural Exciter also falls into this arena. Waves Renaissance Bass and MaxxBass are also part of this family.
I hesitate to call this a trick, but it’s a great technique so here’s a fifth one I didn’t mention above: side-chaining. Side-chaining involves the use of a compressor. The compressor is typically inserted on a channel and then it is side-chained to another channel. What this does is cause the compressor to react or key off the side-chained channel instead of the channel the compressor is inserted on.
So why do this?
To force an input to respond to a different signal. Popular ways of using this are to place a compressor on a bass guitar and side-chain it to the kick drum so that the bass pumps in time with the kick. This is pretty popular with EDM style music right now, although the hyper-pumping disorients me.
Another popular use is for de-essing. Let’s say you have a vocal you need to de-ess. Mult it to a couple of inputs on your console where one input is your main vocal and the other will be your key. Insert a compressor on your main channel and side-chain it to your key channel. Now take that key channel and boost the snot out of the top end with an EQ and cut everything else.
What you are doing is making the “esses” even more apparent on your key channel so whatever you do, don’t feed the key channel to the mix bus. With the “esses” accentuated in the key channel, your side-chained compressor is much more sensitive to “esses” and can be used to attenuate them. To the best of my knowledge, Bob Clearmountain still uses a variation on this method.
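Here's a toy version of that key-channel de-esser in Python/NumPy (my own sketch, with a crude first-difference standing in for the "boost the top end" EQ and a bare-bones envelope follower standing in for the compressor's detector):

```python
import numpy as np

fs = 48000
t = np.arange(fs // 2) / fs
# Toy "vocal": a 200 Hz tone with a burst of 8 kHz "ess" in the middle
vocal = 0.3 * np.sin(2 * np.pi * 200 * t)
vocal[8000:12000] += 0.3 * np.sin(2 * np.pi * 8000 * t[8000:12000])

# Key channel: a crude high-pass (first difference) exaggerates the "esses".
# The key is ONLY used for detection -- it never reaches the mix bus.
key = np.diff(vocal, prepend=vocal[0])

# Envelope follower on the key: instant attack, ~1 ms smoothed release
release = np.exp(-1.0 / (0.001 * fs))
envelope = np.empty_like(key)
level = 0.0
for i, sample in enumerate(np.abs(key)):
    level = max(sample, release * level)
    envelope[i] = level

# Gain reduction applied to the MAIN channel, keyed from the side-chain
threshold = 0.1
gain = np.where(envelope > threshold, threshold / envelope, 1.0)
deessed = vocal * gain
```

Only the "ess" burst gets pulled down; everywhere else the vocal passes through untouched. A real de-esser would use a proper filter and musical time constants, but the routing is the same: the main channel ducks whenever the brightened key gets loud.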
So those are the big tricks in my mind. Hopefully I’ve glossed over or missed some things. So how do you like to use some of these tricks/techniques and what are some of your other favorite things to manipulate audio that I forgot to mention? Let us know in the comments.