I don’t remember exactly where the question came up on this one. It might have been in the comments of an older post or on Twitter or somewhere on Facebook. The basic question, though, is this: how do you make things sound wider when you’re essentially mixing in mono for live sound? I think this is a great question.
First, a quick recap for those just jumping in. Personally, I view live sound as a mostly mono-ish world because in most situations I work in these days the coverage of the loudspeaker system really only provides an audience with one direct channel. The photo here is a good example of this. If you were seated where this photo was taken, you’re really only hearing one of those arrays.
There are good reasons why many of us have given up on trying to put stereo in every seat. Some of those reasons are probably in an article or two on this blog, and some of them might be subject matter for another day. The point is, for most intents and purposes the live sound world is mono-ish.
Robert Scovill once described what he does as “Wide Mono”, and that’s a good way to explain what I try to achieve. First and foremost, I want all of the primary musical information in every seat. However, I also want my mixes to feel enveloping, which takes a bit of width and depth to achieve. If you found a recent FOH mix of mine from the last couple of years floating around the interwebs and listened only to the left and then only to the right, you’d probably find that the musical information in each channel is essentially the same. However, if you get both left and right going and sit between the speakers, you’re going to get added width and depth.
One of the issues I have with purely mono systems is they usually sound very one dimensional to me. There’s no depth or life. It’s like a clock radio speaker with more fidelity, or not, depending on the rig. So, personally, I think having a multi-channel system, even in a room where most seats lack coverage from all channels, feels better than a mono system spread throughout the venue. Listeners on the far side of the room may not get direct sound from the opposite loudspeakers, but they can feel the effects of something different exciting the room.
So back to the question at hand. How do we make things wide without panning?
Well, let’s revisit why we might not pan much. Panning is simply a level control between output paths. Panning something to the left really just turns the signal up in the left channel and down in the right. It’s definitely a nice tool to have, but while a level difference is one way we localize sounds in the real world, it’s not the primary way.
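To make that level-control idea concrete, here’s a quick sketch of a common equal-power pan law. Pan laws vary from console to console, so treat this as an illustration of the concept, not any particular desk’s implementation; the function name and the -1 to 1 pan scale are my own assumptions.

```python
import math

def pan_gains(pos):
    """Equal-power pan law sketch.

    pos is the pan position in [-1, 1], where -1 is hard left,
    0 is center, and 1 is hard right. Returns (left_gain, right_gain).
    """
    # Map pan position onto a quarter circle: 0 radians at hard left,
    # pi/2 at hard right, so the two gains trade off smoothly.
    angle = (pos + 1) * math.pi / 4
    return math.cos(angle), math.sin(angle)

# Hard left: all signal in the left channel, none in the right.
print(pan_gains(-1))   # (1.0, ~0.0)
# Center: both channels at about 0.707, i.e. -3 dB each.
print(pan_gains(0))
```

Note that panning hard left doesn’t move anything; it just mutes the right channel. That’s the limitation the rest of this post works around.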
So, without further ado, here is the first of a few strategies I like to employ to try and make things sound wider and deeper in live sound.
Strategy 1: Delay
The primary way in which our brains localize sound is from differing arrival times of that sound at each of our ears. Level and tone also play a part, but delay is the primary factor. Google “Head Related Transfer Function” or HRTF if you want to learn more about this. Personally, I’m no expert on HRTF, but I learned just enough in college to use it for fun and profit.
Since delay is the primary way we localize sound, it’s also one of the primary ways I like to make things seem wider than they are. Here’s my extremely simple approach.
– Take an input and duplicate it to a second input.
– Pan one input hard left and the other hard right.
– Delay the side opposite of where you want the sound to come from.
I do this all the time with guitars. For example, if a guitarist is standing house right, I’ll split his guitar to two channels and delay the left side.
The trick is the amount of delay you use has to be within the Haas zone. This basically means you probably don’t want to go more than 20 milliseconds. If the delay amount is too long and outside of the Haas zone, we start to hear the delayed side as an echo or slap. But if we keep the delay short enough, thanks to the Haas effect, our brain perceives that delayed signal as a reflection. This in turn makes our brain localize the sound to the first place we heard it come from while also believing that the sound is in some sort of space.
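The three steps above can be sketched in code. This is a toy simulation, not any console’s actual routing: I’m assuming a mono signal represented as a list of samples at 48 kHz, and the function name and defaults are mine.

```python
SAMPLE_RATE = 48000  # samples per second; an assumption for this sketch

def widen(mono, delay_ms):
    """Duplicate a mono signal into two hard-panned copies,
    delaying the side opposite where the sound should localize.

    Keep delay_ms inside the Haas zone (roughly under 20 ms) so the
    delayed copy reads as a reflection rather than an echo or slap.
    """
    delay_samples = int(SAMPLE_RATE * delay_ms / 1000)
    # Un-delayed copy: listeners localize the sound here.
    undelayed = list(mono) + [0.0] * delay_samples
    # Delayed copy: zeros up front shift the same audio later in time.
    delayed = [0.0] * delay_samples + list(mono)
    return undelayed, delayed

# Guitarist standing house right: the un-delayed copy feeds the right
# channel and the delayed copy feeds the left, per the example above.
right, left = widen([1.0, 0.5, -0.25], delay_ms=10)
```

At 48 kHz, a 10 ms delay works out to 480 samples of offset between the two copies.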
The beauty of this approach is that the instrument is in every output channel of our mix. So if someone only hears one version of the instrument, delayed or un-delayed, their ear perceives it as the main thing.
So with this technique we get localization (win), a sense of space (win), AND our musical information in every seat (win).
What’s the catch? Well, there are a couple.
The first catch is we’re playing with the timing of a performance when we delay it. If there’s too much delay and our listeners are hearing that delayed side as their primary version of the instrument, it can pull the player apart from the rest of the band. So care must be taken to not delay things too much. How much is too much? You’ll have to decide for yourself.
The second catch is that folding our mix to mono can get messy: when we combine two versions of the same signal that are out of time with each other, we set the stage for textbook destructive interference, a.k.a. comb-filtering. Remember: phase problems are really just time problems, and folding our mix to mono with this approach running can create a BIG time problem.
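To see the size of the problem, here’s a back-of-the-envelope calculation, a sketch rather than a measurement: summing a signal with an equal-level copy of itself delayed by t seconds produces complete cancellation at odd multiples of 1/(2t). The function name is mine.

```python
def notch_frequencies(delay_ms, max_hz=20000):
    """Frequencies (Hz) where a signal summed with an equal-level copy
    delayed by delay_ms cancels completely: f = (2k + 1) / (2 * t)."""
    t = delay_ms / 1000.0
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * t)
        if f > max_hz:
            break
        notches.append(f)
        k += 1
    return notches

# A 10 ms offset folded to mono nulls 50 Hz, then every 100 Hz above it,
# all the way up the audible band.
print(notch_frequencies(10)[:3])  # [50.0, 150.0, 250.0]
```

That’s a hundred evenly spaced nulls across the audible range from one 10 ms offset, which is why the mono fold can sound so hollow.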
I’ll say a couple things on this, though:
1.) More delay between our two signals can help lessen the awkward effects of comb-filtering.
2.) Some of my friends give me flack about my obsession with prime numbers and delays, but maybe, just maybe, there might be a reason many legendary engineers also like using prime numbers for delays.
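I can’t speak for the legendary engineers, but here’s one toy way to see the intuition; this is my own illustration, not anyone’s stated reasoning. When two delay times are harmonically related, far more of their cancellation frequencies coincide than when the times are coprime. Exact fractions avoid floating-point equality problems; the function name and the particular delay times are assumptions.

```python
from fractions import Fraction

def notch_set(delay_ms, max_hz=20000):
    """Exact null frequencies (Hz) for a signal summed with an
    equal-level copy delayed by a whole number of milliseconds."""
    period = Fraction(2 * delay_ms, 1000)  # 2 * delay, in seconds
    notches = set()
    k = 0
    while True:
        f = Fraction(2 * k + 1) / period   # f = (2k + 1) / (2t)
        if f > max_hz:
            break
        notches.add(f)
        k += 1
    return notches

# 5 ms vs 15 ms (harmonically related) vs 5 ms vs 7 ms (coprime):
shared_harmonic = notch_set(5) & notch_set(15)
shared_coprime = notch_set(5) & notch_set(7)
print(len(shared_harmonic), len(shared_coprime))  # prints: 100 20
```

Every single null of the 5 ms delay lines up with a null of the 15 ms delay, while only a fifth of them line up with the 7 ms delay. Coprime (and especially prime) delay times spread the damage around instead of stacking it.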
Now go experiment. I’ll be back with another approach soon.