The Latest Posts

No Big Secret


Earlier this year I was talking with an engineer friend of mine who got to spend a week learning from one of today’s A-list studio mixers. It was cool to hear a little about some of the techniques he learned firsthand, but the conversation really just kept coming back around to a simple thing: there’s no big secret to mixing.

There isn’t some big thing the A-list mixers know and do that you can learn and suddenly have your mix magically transformed. Ultimately these guys do one thing: make the music do what the music is trying to do.

You can read and watch all the interviews and even try and stalk some of these guys to ask some questions, but I can save you some time and energy. If there’s any one, universal thing I’ve picked up from all of my research, one-on-one question time, and over-analyzing of the A-list guys, it’s that they are crazy in-tune with music.

Lord-Alge, Clearmountain, Brauer, Wallace, Marroquin, Pensado, Shipley, Lillywhite, Puig, etc., etc., etc.

These guys GET music.

That’s it.

When you GET music and are presented with a load of channels/inputs/tracks, what you need to do is intuitive.

Sure there are fundamental techniques you need to master like gain staging, processing, balancing, etc. But once you’ve figured that stuff out–and it’s not so hard if you have the aptitude and practice practice practice–and you GET music, when you push up the faders you know where to put ‘em and what they need to do for the music.

There’s no 12-step process to a great mix, contrary to what some guys might say. In fact, there’s a reason why, when you really start looking at those A-list guys, a lot of them just kind of throw all the faders up in the mix and get to work.

Robert Scovill was on Deane Ogden’s podcast earlier this year, and I think this quote from him really sums this up:

I’m always telling guys, quit studying the technology. Study the music. I can teach a monkey to operate the technology. I can’t teach a monkey to evaluate a piece of music and have sensibility with how to construct it.

Andy Wallace is one of my favorite mixers, and he recently did an interview in Sound on Sound which was pretty cool in itself because I think this marks the 3rd interview with him I’ve ever been able to find.

Andy Wallace has a very simple technical approach to mixing compared to many of the other guys who seem to get a lot of press these days. The bulk of any processing he does happens right on his console, which is typically an SSL of some fashion. His outboard rig is generally quite small and limited to maybe a couple reverbs and delays and one or two compressors. The bulk of what Andy Wallace does when he’s mixing is riding faders to make sure what needs to be heard gets heard.

For me, a mix is about trying to find something that works and that makes the hairs on the back of my neck stand up, and believing in that.

Andy Wallace

I’m not saying gear doesn’t matter, because it does, and good gear in good hands can definitely make a difference. However, at the end of the day what we do has to be about music and its emotion and power. There is no magical, mystical trick or set of tricks or magic boxes that will suddenly put your mixes in the same league as the A-list guys. You need great musicians, and you need to bring the same musical sensibility to your engineering that the band is bringing on stage. Going after anything else is like chasing ghosts.


Recalling Some Monitor Mixes


A question came up recently on Twitter about whether there is a way to store preset monitor mixes on VENUE. This is actually something I’ve been experimenting with this year, so I guess the answer is it can sort of be done, but there’s not an easy solution. Some of the newer digital consoles I’m seeing on the market offer easier ways to do this, but even with that ability I don’t think it will work for everybody. So let’s explore this a bit, and then I’ll get into the nitty-gritty of what I’m doing with the Avid VENUE.

If your situation is like mine, you have a pickup band. For those unfamiliar with the term, a pickup band is basically a group of musicians who get put together for a limited time. In our case, that limited time is our Sunday service so every week we have a different group of musicians. Most of the musicians are familiar to us and have played on our stages before, but the specific players change every week. If you are working with the same lineup of musicians every week, none of this is going to apply.

I started experimenting with storing mixes as best as I could in order to save time and help with soundchecks, and the results have been mixed. Regardless of what kind of console you’re using, let’s talk about why this might and might not work.

For starters, a mix is heavily reliant on gain staging. When our monitor engineers start their mix for each musician, they set that musician’s input fader at unity and the send to their mix at unity. Then the gain is turned up until the musician is happy. Some folks may cringe at this, but it’s actually a tried-and-true method those of us who grew up in the analog world learned. Plus, in most cases, this gets a nice healthy signal level on that input.

However, I have seen issues with this approach at monitors due to varying impedance on different models of custom in-ears. Not all in-ears have the same impedance, and that can have an effect on gain staging. Most musicians like to keep their packs at a healthy yet modest level so they can maintain overall level control of their mixes; this is the same idea as us keeping our faders near unity. But when a musician’s in-ears are more sensitive, they may require less gain to reach a comfortable level with the pack set where it’s comfortable to adjust.
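
Just to put the sensitivity issue in rough numbers, here’s a quick sketch. The sensitivity figures are hypothetical, and I’m lumping impedance into a single “loudness per milliwatt” number, but it shows why a more sensitive pair of ears ends up needing less gain for the same level in the musician’s head:

```python
# Rough illustration with hypothetical sensitivity specs (not real products):
# more sensitive in-ears reach the same SPL with less drive, so the input needs
# less gain when the musician keeps their pack at the same spot. Impedance
# differences are lumped into the single "dB SPL per mW" number here.

target_spl = 100.0  # the level the musician calls "comfortable"

for name, sensitivity_db_spl_per_mw in [("less sensitive in-ears", 105.0),
                                        ("more sensitive in-ears", 115.0)]:
    # SPL = sensitivity + drive (in dBm), so the required drive is the difference.
    drive_dbm = target_spl - sensitivity_db_spl_per_mw
    print(f"{name}: {drive_dbm:+.0f} dBm of drive for {target_spl:.0f} dB SPL")

# The 10 dB sensitivity difference shows up directly as 10 dB less gain needed
# somewhere in the chain for the same level in the musician's ears.
```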

There are different and arguably better ways this kind of gain staging issue should be dealt with in order to keep a healthy input level. However, in my world I’ve found it’s a lot easier to just deal with it on the input. Changing up gain staging in other areas, such as the console’s mix output or the transmitter sensitivity, tends to cause problems in the following weeks because those settings sit deeper in the equipment and often get overlooked by the next monitor crew until it’s too late.

Anyway, the point I want to make is that when you’re trying to make mix presets, you need consistent gain staging on all your inputs so that the gain of each instrument is consistent week to week. If one guitar player’s signal level is usually around unity and another’s is 10 dB down from that, a preset mix will not compensate for that difference. I hope that makes sense.
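
To put a couple of made-up numbers on that, here’s a back-of-the-napkin sketch. It’s obviously not how a console computes anything; it just treats an input’s level in someone’s mix as the input level plus the stored send offset, in dB, to show why the preset can’t fix inconsistent gain staging:

```python
# Back-of-the-napkin sketch with made-up numbers: a stored send level only
# preserves someone's mix balance if the input gain staging is consistent.

def level_in_mix_db(input_level_db: float, stored_send_db: float) -> float:
    """Rough level of an input in a monitor mix: input level plus the aux send offset (dB)."""
    return input_level_db + stored_send_db

stored_send_db = 0.0        # the preset send, dialed in at unity for last week's player

last_week_guitar_db = 0.0   # last week's player hit the console right around unity
this_week_guitar_db = -10.0 # this week's player runs about 10 dB lower

print(level_in_mix_db(last_week_guitar_db, stored_send_db))  # 0.0   -> where the mix was built
print(level_in_mix_db(this_week_guitar_db, stored_send_db))  # -10.0 -> same preset, guitar 10 dB quieter
```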

Another issue comes up because mixes are not just based on signal levels. Tonality is a very big part of the mix as well. The tonality differences between two different lead guitar players or keyboard patches or bass guitars can be enough to change someone’s mix.

Whether it’s monitors, FOH, or the studio, one way to think of a mix is as a jigsaw puzzle with the different instruments being the pieces. I use things like level, dynamics processing, and EQ to shape each of those pieces so that they all fit together into a final picture. However, no two pieces are ever exactly alike, so when you swap out one piece for another, the new piece won’t fit exactly the same way as the old one. Guitar players all have their own tones and sounds. Keyboard players all have their own tones and sounds. Even drummers on the same drum kit tuned the same way will sound different. In my early days of digital console use I used to try and store presets for each musician at FOH, but they never worked, because a musician’s sound is as much their own creation as it is a function of the sounds around it in the mix.

The point I want to make is your mileage may be limited if you try to store entire preset mixes. That said, I have found some benefits to storing mixes.

The biggest win is we can get each musician’s gain staging and/or EQ pre-dialed for them. This is proving especially helpful with more complicated instruments such as drums. We always start our soundchecks with the drums, and since implementing this I have seen drummers walk in and be ready to go after a very quick soundcheck. Any tweaks to the drum kit mix are also pretty minor and quick as a result.

Minor wins include instrument panning. Every musician wants stuff in their mixes panned differently. By getting that stored ahead of time, it cuts out a step during soundcheck. Programming the console each week might also be a little faster since my recall workflow automatically labels a lot of things for me.

There might be some other little benefits, but as you can see this isn’t necessarily earth-shattering or game-changing. I have seen it speed up soundchecks at times, but I’ve also seen it slow things down, so I’m still a little bit on the fence about whether this is actually a better method of working than our traditional way of pre-dialing mixes. At a minimum, I would keep the drum mixes, though, because they typically seem to take the most time to dial in.

So let’s get into VENUE specifics. I’ve been looking at this as a way to store custom pre-dialed mixes for musicians, so I think of these as starting points, not final mixes. I’m accomplishing this using snapshots.

Each musician that plays on our East Auditorium stage gets his/her own snapshot. On Monday or Tuesday I’ll take the previous Sunday’s show file and create a new snapshot for anyone new to the stage that week while also updating existing snapshots for musicians who had a really good day. Each snapshot is named first with that player’s role followed by their name so I can keep them all sorted by which mix they were on. With that housekeeping done, I also have a snapshot saved that basically wipes the desk back to our default monitor mix start position with mixes pre-dialed based on our traditional method. Then I overwrite our template so that I have the new library of mixes stored in the template. This way we can start a rehearsal either from our traditional position or with the custom pre-dials recalled.

Simple so far. Recall is where it gets complicated.

Each VENUE snapshot stores the entire console’s settings with the exception of plugins. This is great for most applications, but when it comes to recalling only a specific aux send or variable group mix, it isn’t an ideal solution. So, for example, if I were to just recall each musician’s snapshot, I would recall not only that musician’s mix but every other mix and parameter on the console. Each time I recalled a mix snapshot, it would wipe the desk. There is a way around this using VENUE’s snapshot Scope and Recall Safes.

To start with, each musician’s snapshot has the following parameters in Scope: EQ, DYN, PRE, NAME, AUX MON, and PLUG-INS. These are used to recall specific settings for that musician such as their pre-amp settings, naming, and specific plugin settings. For example, some of our drummers have specific reverb and compression tastes, so those settings are stored in the snapshot. Input and output naming also gets stored here. The music-specific inputs are also scoped on the Inputs tab along with their specific mix aux masters on the Outputs tab.

To prevent recalling things for every instrument and mix on the console, I make use of the console’s Recall Safes. I typically have everything safed on the entire console except for FADER, MUTE, and AUX MON on the input safes. I started doing this when we installed the console so that, in the event we want or need snapshots for something, our engineers can be very specific about which mixes the snapshots recall.

Next I created custom Recall Safe Scope Sets for each monitor mix. These custom sets turn off the recall safes for the mic pre, HPF, EQ and dynamics settings, and the input name on the Inputs side for that musician’s specific instrument(s). On the Outputs side, the Scope Set turns the safes off for CH SENDS, OUT, and NAME. So with one of these sets active, recalling a musician’s snapshot restores the input settings for that musician, labels their input(s), recalls the relative aux-send levels and panning for their entire mix, and then labels their output on the console with their name.

Now, while it was a little complicated to set up, it’s actually not so bad in practice. If I want to load someone’s pre-dial settings, I select the Scope Set for their position (e.g. drums), select their personal snapshot, and hit recall. I wish it were as simple as just recalling the snapshot, but it’s still not that many steps.
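
If the Scope and Recall Safe interaction is hard to picture, here’s a toy sketch of the logic in Python. To be clear, this is not VENUE software or its actual data model, just my own illustration of the idea described above: a parameter only gets recalled when it’s in the snapshot’s Scope and not currently safed, and the custom Scope Set lifts the safes for one musician’s channels before the recall.

```python
# Toy model (not VENUE) of how Scope and Recall Safe interact as described
# above: a (channel, parameter) pair only changes on snapshot recall when it
# is scoped in the snapshot AND not currently recall-safed. A "scope set"
# here just lifts the safes for one musician's channels before the recall.

def recall(snapshot_scope: set, recall_safes: set, lifted_safes: set) -> set:
    """Return the (channel, parameter) pairs that actually get recalled."""
    active_safes = recall_safes - lifted_safes
    return snapshot_scope - active_safes

# Hypothetical drummer snapshot: his preamps/EQ plus his mix's sends and name.
drummer_snapshot = {
    ("kick", "PRE"), ("kick", "EQ"),
    ("snare", "PRE"), ("snare", "EQ"),
    ("drum mix", "CH SENDS"), ("drum mix", "NAME"),
}

# Console-wide safes keep snapshots from touching preamps, EQ, and sends by default.
console_safes = {
    ("kick", "PRE"), ("kick", "EQ"),
    ("snare", "PRE"), ("snare", "EQ"),
    ("vox", "PRE"), ("vox", "EQ"),
    ("drum mix", "CH SENDS"), ("drum mix", "NAME"),
    ("vox mix", "CH SENDS"),
}

# The "drums" scope set lifts the safes only for the drummer's channels and mix.
drums_scope_set = {
    ("kick", "PRE"), ("kick", "EQ"),
    ("snare", "PRE"), ("snare", "EQ"),
    ("drum mix", "CH SENDS"), ("drum mix", "NAME"),
}

print(recall(drummer_snapshot, console_safes, set()))            # nothing recalled: everything is safed
print(recall(drummer_snapshot, console_safes, drums_scope_set))  # only the drummer's settings come back
```

The first recall does nothing because the console-wide safes block everything; the second, with the drums Scope Set active, brings back only the drummer’s settings while leaving every other mix on the desk alone.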

If you’ve got any questions about any of this, though, please feel free to add them in the comments below.


Mixing Mindset: Near and Far


My good friend Jason Cole was in town recently and came out to a service, so afterwards we were obviously chatting about the mix. Jason said the service all sounded great, but one thing he mentioned is he would have used a longer reverb on the closing song we did. So I explained to him why I decided to use the reverb that I did.

This got me thinking that it might help some of you reading these musings to hear about how some mix decisions get made. Robert Scovill recently wrote a great article over at Heil Sound on how aspiring engineers should focus more on learning about music and not just the gear that makes it happen. So I thought maybe I’d start delving into some of the things that go through my head when I’m putting together a mix, and I’ll try and post examples whenever possible.

Now as I’m talking about this kind of stuff, you might not agree with my interpretations of things or hear things the way I do. That’s OK. You should hear things the way you hear things. The point I want to make with this kind of an article is that there is a why to the mix decisions I’m making. I’m not just throwing up faders so you can hear everything. However, at the same time keep in mind that what I’m doing comes from a musical place so it’s much more intuitive for me than trying to write about this will make it seem. I don’t stand at the console thinking about this stuff. I might listen over and over and over again, but more often than not I’m responding to what I’m hearing. Articles like this one will be an attempt to verbalize that instinct as much as possible.

So, let’s start with that closer we did. The song was called Reason to Sing and was originally released by All Sons and Daughters. We performed it stripped down with just piano, acoustic guitar, and a couple vocals. You can listen to the FOH mix embedded here, although, depending on how you’re reading this, you might need to head over to my actual website for the stream.

In The Meantime -Part 1 – Closer from North Point Online on Vimeo.

Audio clip: FOH mix

Context is a big deal to me when mixing. It starts with the context of the song itself. In other words, what was the song originally conceived to be? What is it trying to say? How should it make the listener feel?

Then there’s the context of the performance. How is the song actually being played and sung?

When it comes to cover tunes, though, the reality is that performance context doesn’t always match the original conception of the song. For example, have you ever noticed how singers on singing contest shows like American Idol and the Voice smile a lot no matter what the song is? This seems to be most prevalent in the early stages of the show, and then most of those people get eliminated.

The problem I see and hear in a lot of these shows is singers being handed songs they just don’t understand, and the tone of their performance doesn’t match the intended tone of the song. For example, whenever I see people smiling when they sing Faithfully by Journey, it drives me nuts because I feel like they don’t understand the song. That song is not a happy song. It’s about being homesick. It’s about the pain of distance in a relationship and enduring that distance. There’s a reason why the line “We all need the clowns to make us smile” is in there; it’s because it’s not a smiley topic and not a happy time. The song is hopeful in the end, but it’s not a happy song. When I listen to Steve Perry’s original vocal, I hear a desperation in it, and you can see this on his face in some of the live videos floating around the interwebs.

So, in terms of mixing, the context of the performance is important because as a mixer I need to figure out which context to push the mix towards. Do I get behind the performance, right or wrong, or do I try and steer the song back towards its intended context?

Then there’s the larger context of how the song will be heard. In the case of a live mix: Is this a one-off performance? Is it part of a set of music? Will the audience likely be participating? Is the audience going to be sitting or standing and listening? Etc., etc., etc.

So, in the case of Reason to Sing, we had a single song to be played at the end of the message in our service. The message was the first of a new series titled In The Meantime about what to do when you’re stuck in a situation where you feel like nothing is happening and God isn’t saying anything. When I was starting to mix the song, I didn’t know exactly how the message was going to land, but I knew the series was probably going to be a heavier topic and we’d probably be coming out of a serious moment.

I spent a lot of time with the song in virtual soundcheck trying different things even though it is a very simple mix in terms of instrumentation. I find transitioning into music out of a message is always a challenge for a couple reasons. For starters, the level of speech towards the end of a message is typically pretty low. Most of our speakers start their messages wide and then narrow their focus towards the end. As they do this, their tone and volume tend to get more subdued, so by the end things are quiet and more intimate.

The other challenge going into music is acoustic space. Spoken word in our rooms is always as dry as can be, but music has different needs. So the challenge is transitioning from bone-dry speech into wetter music without calling attention to the music’s space.

A big part of my virtual soundcheck experimentation for this song was spent on effects. I initially tried some longer and wetter settings, but they just didn’t feel right to me. I felt the song needed to be very intimate when it started. I felt this fit the context of where it was in the service and also fit the feel and tone of the song. It wasn’t supposed to be a performance song. The message of the song outweighed that.

In my opinion, the difficult thing about longer and larger reverbs is they tend to push things back and away. You can kind of counter this a bit using longer pre-delay times, but longer, audible reverbs still put things in a defined space. I initially wanted a natural space around the vocal, but I wanted the vocal more forward and the space not necessarily defined. I wanted the vocal to feel more like it’s just in the space of the room it was supposed to be heard in.

This brings up another thing I should mention. My mix was being sent to all of our campuses so I also knew that their unique acoustics were going to add in to the mix in those rooms. That doesn’t mean I wanted to leave everything completely dry, though. That just means I wanted things to be more subdued.


I used a mix of things for vocal effects on this song. I believe I had a 2 second plate, a hall that was a little longer than 3.5 seconds, and then a 1/8 triplet delay set to the tempo of the song. The predelay on the plate was probably in the 30ms range while the hall was probably more like 100ms-ish. I think I started with some of my own presets for the plate and hall, and I just made the hall longer. I didn’t spend a lot of time messing with the balance of the different effects. I think I started with just the plate and wanted a little more so I added the hall, and then I feathered the delay in with it all until I liked the blend. Then it was just a matter of how much of the overall blend I wanted.
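
As a side note, if you’ve never done the math on a tempo-synced 1/8-triplet delay, it’s just a fraction of the quarter-note time. Here’s the arithmetic; the BPM in the sketch is only a placeholder since I’m not listing the song’s actual tempo here:

```python
# Tempo-synced delay time arithmetic. The 120 BPM is only a placeholder;
# the song's actual tempo isn't listed in the post.

def delay_ms(bpm: float, fraction_of_quarter: float) -> float:
    """Delay time in milliseconds for a note value given as a fraction of a quarter note."""
    quarter_note_ms = 60000.0 / bpm
    return quarter_note_ms * fraction_of_quarter

bpm = 120.0
print(round(delay_ms(bpm, 1 / 2), 1))  # straight 1/8 note: 250.0 ms
print(round(delay_ms(bpm, 1 / 3), 1))  # 1/8-note triplet: 166.7 ms (three evenly spaced hits per quarter)
```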

I also had an additional hall I added to the second vocal that was probably in the 2 second range. I like doing this at times with backgrounds because it helps keep the lead in front of the backgrounds. You can see the blend I was using for all the vocal FX in the photo here.

I controlled all of the FX returns with a VCA, which I used to ride them through the entire song. I started with them back a bit for that intimacy, and in the second verse I brought everything up a bit as the song got fuller with the added vocal and acoustic. At that point the song moves beyond being just about the lead vocal, and the vocals felt like they needed to sit back into the whole thing a bit.

So, here’s something for me when I’m working with depth. I don’t necessarily approach depth in terms of giving someone the ability to localize a sound. Stereo in live sound doesn’t work the way it can when you’re sitting in the sweet spot of a great studio, so the 3D localization cues aren’t always there for a listener. For me depth is more about foreground, background, and the space in between. Using depth this way allows me to turn things up so they are loud without making them the main focus.

For example, in this mix I added reverb to the instruments. You can hear the before and after embedded here.

Instruments – Dry


Instruments – Wet


This kind of depth/reverb is often a subtle thing to me because it’s supposed to be subtle. When I’m approaching this in a mix, I’m often just trying to get things to feel right because when everything feels right the listener’s attention isn’t drawn to specific things. He just hears the song.

Another thing I should mention related to depth is the tone of the piano here. Personally, I gravitate towards brighter pianos, but on this song I left the piano darker at the suggestion of one of our music staff members. I think it was the right call, too, because leaving the piano darker helped put it behind her brighter vocal.

As the song moves into the bridge, its intensity builds. I had my hands on every element in the mix, riding the instrumentation down a bit while also pushing the vocals up and adding more FX to them. The challenge here was to keep the instrumentation from completely overtaking the vocals. If I mixed this again I’d pull the piano back a little more, although I like the intensity that having it a little hot brings. It feels a little out of control to me, and I think that fits the song and performance in that spot.

When the song moves into the last chorus and the instrumentation drops back, I pulled the FX way back so the vocal is even drier than at the beginning of the song. I wanted it to feel like she was coming forward into the room at that point to really punctuate what she was singing because the performance is almost the equivalent of going from a shout to a whisper at that point. I felt like drying up the vocal there brings it almost right to your ear.

I think this is a good song to look at in this way first because it’s so simple. But simple doesn’t mean it’s easy or a free ride. If you’ve got any more questions about my approach on this song, please add them in the comments.
