In and Out of Context
Working on instruments and sounds in “solo” is a controversial topic. If you look around at what other engineers say, a lot of guys caution against using the solo button, and I’ve probably raised a few flags about it myself. The ability to “solo” an input, however, can be a powerful tool, and there’s a reason why I use it on nearly every mix I do. So let’s unpack this a bit.
Solo and the ability to “solo-in-place” a sound is my friend when I am trying to get what I refer to as Mixable Things. I have a document in the works that explains this more in-depth, but Mixable Things are basically where I want my sounds to be before I start mixing. It’s really just about having a good source to begin with, though that doesn’t mean I’m going to spend endless amounts of time trying to make everything sound great because I don’t need to. So I’ll solo things when I’m checking sounds and doing basic cleanup with subtractive EQ.
For example, it has been my experience that most live vocals need a bit of work before they are good enough to put into a mix. They often tend to be muddy to start, and there may also be some strange resonances happening in the room. Cleaning any junk up in the lows and lower-mids with a bit of subtractive EQ is a lot easier for me to do when I can concentrate on only the vocal, so this is one example of when I utilize solo-in-place.
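If you’re curious what that kind of subtractive cut actually looks like under the hood, here’s a small Python sketch using the standard RBJ Audio EQ Cookbook peaking-filter formulas. The 250 Hz center frequency and 4 dB depth are just illustrative numbers for “mud,” not a recipe; every room and vocal is different.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(fc, gain_db, q, fs=48000.0):
    """Biquad peaking-EQ coefficients from the RBJ Audio EQ Cookbook.
    A negative gain_db makes this a subtractive cut centered at fc."""
    A = 10.0 ** (gain_db / 40.0)           # amplitude term for peaking filters
    w0 = 2.0 * np.pi * fc / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Hypothetical cleanup move: pull ~4 dB of mud out around 250 Hz
b, a = peaking_eq(fc=250.0, gain_db=-4.0, q=1.4)
w, h = freqz(b, a, worN=[250.0], fs=48000.0)
print(f"gain at 250 Hz: {20 * np.log10(abs(h[0])):.1f} dB")  # -4.0 dB at center
```

The same formulas with a positive `gain_db` give you the boost side, which is the “give the mids a kick” move later in this post.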
Once the inputs are all in good shape, though, soloing things isn’t as useful for me. This is because we’re at the actual mixing part now, and the challenge here is getting everything to “fit” and play together in the mix. When you’re trying to get things to fit together, I believe this is best achieved while listening in context. In other words, we need to hear how our sounds play with other sounds while we’re making adjustments to know whether we’re getting anywhere. Otherwise we’re really just stabbing in the dark.
Here’s an example. I struggled for a long time to get the bass guitar to cut in a mix, and I see a lot of other engineers struggle with the same issue. When you solo the bass, it is often in pretty good shape if you have a decent player with a decent instrument. But that doesn’t mean it will work in the mix. The fundamentals of the bass are down low; however, getting the bass to read and speak in a mix requires a fair amount of mid-range information to go along with those nice lows and sub-lows. Instead of trying to figure this out in solo, though, it is often better to put the bass up where it feels right for the lows and then give the mids a kick to bring the instrument out in the mix. Sometimes they need a BIG kick, so just listen and turn the knob until it works with everything else in the mix.
How about another couple of examples? When I’m trying to shape the transient of something like a snare drum using a compressor, I’ll often work on that in solo. The same often goes for dialing in the Attack and Release on a compressor for most instruments, since these are just easier for me to hear this way. If I’m trying to compress something like a vocal to help it sit with other instruments, though, I’ll often work on that, or at least refine things, in context because the other instrumentation will influence my ideal settings.
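Attack and Release can feel abstract until you see what they actually control. Here’s a deliberately simplified feed-forward compressor sketch in Python, not how any particular console or plugin does it, and with purely illustrative parameter values. The attack and release times just set how fast the level detector grabs on and lets go:

```python
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=80.0):
    """Toy feed-forward compressor: envelope follower + gain computer.
    Real units differ in detection style, knee, and smoothing."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x, dtype=float)
    for n, s in enumerate(x):
        level = abs(s)
        # fast coefficient while the signal rises (attack), slow as it falls (release)
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)  # knock the overshoot down by the ratio
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out

fs = 48000
# 100 ms of quiet signal followed by 100 ms of loud signal
burst = np.concatenate([0.01 * np.ones(4800), 0.5 * np.ones(4800)])
y = compress(burst, fs)
# the quiet part passes through untouched; the loud part gets pulled down
# once the attack time has let the detector catch up
```

Shortening `attack_ms` makes the gain reduction clamp down on the front of the loud burst faster, which is exactly the transient-shaping behavior described above for the snare.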
But I’ve watched some great engineers seem to get everything to work by treating each input in solo.
So this is where the plot thickens a bit. When you’ve been doing this engineering thing for a while, sometimes you can get a pretty good idea of how things are all going to fit together simply based on listening to things on their own and being familiar with the song or style of music. I know that after I treat my vocals and drums on their own, I’m probably not going to do much to them once they’re in the mix. Other engineers may have different instruments they can do the same with, and I know some guys who can get pretty much everything to work. The problem with this, though, is very few engineers start out with these abilities, and it’s best to mix with things in context until you’ve developed the chops to do this. We always have to remember that our listeners don’t experience everything all by itself, so how things work together should get more of our attention.
So here are some tips if you’re struggling to work on things in context. When things are bugging you, identify the first thing that bugs you as specifically as possible. Is it a tonal issue? Are you chasing the volume of something? What exactly isn’t working for you? Don’t be afraid to hunt for the issue.
It’s not unusual for me to pop mutes on and off here and there to either clear a few things away or to search for what’s causing the issue I hear. When I find the problem input, I’ll turn some knobs in context to try and get happier with things. If I’m in the middle of a service or event, I’ll hold a headphone up to one ear and pop cue things up one at a time to find the trouble, but then I’ll once again turn my knobs while listening to everything.
I think something that throws engineers off about working on things “in context” is they think they need to have the entire mix up, but this isn’t the case. You don’t need every fader up. You can start mixing “in context” as soon as you add something to the mix while you’re building it and continue that process as you bring additional elements in. You can also follow that line of thinking backwards if that helps. The key is just to avoid working on things on their own at this point.
I hope that helps a bit. If you have any strategies of your own for working on things in context, I’d love to hear about them in the comments.