Recalling Some Monitor Mixes


A question came up recently on Twitter about whether there is a way to store preset monitor mixes on VENUE. This is actually something I've been experimenting with this year, so I guess the answer is it can "sort-of" be done, but there's not an easy solution. Some of the newer digital consoles I'm seeing on the market offer easier ways to do this, but even with that ability I don't think it will work for everybody. So let's explore this a bit, and then I'll get into the nitty-gritty of what I'm doing with the Avid VENUE.

If your situation is like mine, you have a pickup band. For those unfamiliar with the term, a pickup band is basically a group of musicians who get put together for a limited time. In our case, that limited time is our Sunday service so every week we have a different group of musicians. Most of the musicians are familiar to us and have played on our stages before, but the specific players change every week. If you are working with the same lineup of musicians every week, none of this is going to apply.

I started experimenting with storing mixes as best as I could in order to save time and help with soundchecks, and the results have been mixed. Regardless of what kind of console you’re using, let’s talk about why this might and might not work.

For starters, a mix is heavily reliant on gain staging. When our monitor engineers start their mix to each musician, they set that musician’s input fader at unity and the send to their mix at unity. Then gain is turned up until the musician is happy. Some folks may cringe at this method, but this is actually a tried-and-true method those of us who grew up in the analog world learned. Plus, in most cases, this gets a nice healthy signal level on that input.

However, I have seen issues with this approach at monitors due to varying impedances across different models of custom in-ears. Not all in-ears have the same impedance, and that can have an effect on gain staging. Most musicians like to keep their packs at a healthy yet modest level so they can maintain overall level control of their mixes; this is the same idea as us keeping our faders near unity. But when a musician's in-ears are more sensitive, they need less console gain to reach a comfortable listening level with the pack sitting at that easy-to-adjust spot.
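If it helps to put numbers on that, here's a rough back-of-the-napkin sketch in Python. The sensitivity and impedance figures are made up for illustration, not specs from any particular in-ear, but the math (sensitivity at 1 mW plus 10·log10 of the power actually drawn) is the standard spec-sheet formula.

```python
import math

def iem_spl(sens_db_at_1mw, impedance_ohms, volts_rms):
    """Rough SPL estimate for an IEM at a given drive voltage.

    sens_db_at_1mw: sensitivity in dB SPL at 1 mW (typical spec-sheet figure)
    impedance_ohms: nominal impedance of the in-ear
    volts_rms: what the pack puts out at the musician's preferred setting
    """
    power_mw = (volts_rms ** 2) / impedance_ohms * 1000.0  # P = V^2 / R, in mW
    return sens_db_at_1mw + 10.0 * math.log10(power_mw)

# Two hypothetical custom in-ears at the same pack setting (0.1 Vrms):
print(round(iem_spl(107, 16, 0.1)))  # ~105 dB SPL -- sensitive; needs less gain
print(round(iem_spl(98, 64, 0.1)))   # ~90 dB SPL  -- needs ~15 dB more drive
```

Same pack position, roughly a 15 dB difference in level, which is why the "fader at unity, send at unity, turn up the gain" method can land at very different preamp gains for different players.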

There are different and arguably better ways this kind of gain staging issue should be dealt with in order to keep a healthy input level. However, in my world I’ve found that it’s a lot easier to just deal with it on the input because changing up gain staging in other areas such as the console’s mix output or transmitter sensitivity tends to cause problems in the following weeks because those settings tend to be deeper in equipment and often get overlooked by the next monitor crew until it’s too late.

Anyway, the point I want to make is when you’re trying to make mix presets you need to have consistent gain staging on all your inputs so that the gain of each instrument is consistent week-to-week. If one guitar player’s signal level is usually around unity and another’s is 10 dB down from that, a preset mix will not compensate for that difference. I hope that makes sense.
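Here's that same point as arithmetic, sketched in Python with made-up levels:

```python
# A stored mix preset captures send levels, not the player's actual signal.
# If this week's guitarist hits the console 10 dB lower than whoever the
# preset was dialed around, every stored send lands 10 dB low.
# (All levels here are hypothetical.)

reference_input_db = 0.0      # input level the preset was built around
this_weeks_input_db = -10.0   # this week's player at the same gain setting

stored_send_db = -3.0         # send-to-mix level saved in the preset
effective_send_db = stored_send_db + (this_weeks_input_db - reference_input_db)
print(effective_send_db)      # -13.0 -> the recalled mix lands 10 dB quieter
```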

Another issue comes up because mixes are not just based on signal levels. Tonality is a very big part of the mix as well. The tonality differences between two different lead guitar players or keyboard patches or bass guitars can be enough to change someone’s mix.

Whether it’s monitors, FOH, or the studio, one way to think of a mix is as a jigsaw puzzle with the different instruments being the pieces. I use things like level, dynamics processing, and EQ to shape each of those pieces so that they all fit together into a final picture. However, no two pieces are ever exactly alike so when you swap out one piece for another, the new piece won’t exactly fit the same way as the old one. Guitar players all have their own tones and sounds. Keyboard players all have their own tones and sounds. Even drummers on the same drum kit tuned the same way will sound different. In my early days of digital console use I used to try and store presets for each musician at FOH, but they never worked because a musician’s sound is as much a part of their own creation as the sounds that are around it in the mix.

The point I want to make is, your mileage may be limited if you try to store entire preset mixes. That said, I have found some benefits in storing mixes.

The biggest win is we can get each musician's gain staging and/or EQ pre-dialed for them. This is proving especially helpful with more complicated instruments such as drums. We always start our soundchecks with the drums, and since implementing this I have seen drummers walk in and be ready to go after a very quick soundcheck. Any tweaks to the drum kit mix are also pretty minor and quick as a result.

Minor wins include instrument panning. Every musician wants stuff in their mixes panned differently. By getting that stored ahead of time, it cuts out a step during soundcheck. Programming the console each week might also be a little faster since my recall workflow automatically labels a lot of things for me.

There might be some other little benefits, but as you can see this isn't necessarily earth-shattering or game-changing. I have seen it speed up soundchecks at times, but I've also seen it slow things down, so I'm still a little on the fence about whether this is actually a better method of working than our traditional way of pre-dialing mixes. At a minimum, I would keep the drum mixes, though, because they typically seem to take the most time to dial in.

So let's get into VENUE specifics. I've been looking at this as a way to store custom pre-dialed mixes for musicians, so I think of these as starting points, not final mixes. I'm accomplishing this using snapshots.

Each musician that plays on our East Auditorium stage gets his/her own snapshot. On Monday or Tuesday I'll take the previous Sunday's show file and create a new snapshot for anyone new to the stage that week while also updating existing snapshots for musicians who had a really good day. Each snapshot is named first with the player's role followed by their name so I can keep them all sorted by which mix they were on. With that housekeeping done, I have a snapshot saved that basically wipes the desk back to our default monitor mix starting position with mixes pre-dialed based on our traditional method. Then I overwrite our template so that I have the new library of mixes stored in the template. This way we can start a rehearsal either from our traditional position or with the custom pre-dials recalled.
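If you want a mental model of the housekeeping, it's basically a little library keyed on role and name. A hypothetical sketch in Python, not anything VENUE actually exposes:

```python
# Hypothetical model of the snapshot library: one entry per musician, labeled
# "Role - Name" so the list stays sorted by which mix each player was on.

library = {}

def store_snapshot(role, name, settings):
    label = f"{role} - {name}"    # e.g. "Drums - Joe"
    library[label] = settings     # overwriting updates an existing musician
    return label

store_snapshot("Drums", "Joe", {"PRE": "+38 dB"})
store_snapshot("Keys", "Amy", {"PRE": "+24 dB"})
store_snapshot("Drums", "Pat", {"PRE": "+41 dB"})  # updated after a good week
print(sorted(library))  # ['Drums - Joe', 'Drums - Pat', 'Keys - Amy']
```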

Simple so far. Recall is where it gets complicated.

Each VENUE snapshot stores the entire console's settings with the exception of plugins. This is great for most applications, but when it comes to recalling only a specific aux-send or variable group mix, it isn't an ideal solution. So, for example, if I were to just recall each musician's snapshot, I would recall not only that musician's mix but every other mix and parameter on the console. Each time I recalled a mix snapshot, it would wipe the desk. There is a way around this using VENUE's snapshot Scope and Recall Safes.

To start with, each musician's snapshot has the following parameters in Scope: EQ, DYN, PRE, NAME, AUX MON, and PLUG-INS. These are used to recall specific settings for that musician such as their pre-amp settings, naming, and specific plugin settings. For example, some of our drummers have specific reverb and compression tastes, so those settings are stored in the snapshot. Input and output naming also gets stored here. The faders for that musician's inputs are also scoped on the Inputs tab, along with the Aux masters for their specific mixes on the Outputs tab.

To prevent recalling things for every instrument and mix on the console, I make use of the console's Recall Safes. I typically have everything safed on the entire console except for FADER, MUTE, and AUX MON in the Input safes. I started doing this when we installed the console so that in the event we want or need snapshots for something, our engineers can be very specific about which mixes the snapshots recall to.

Next I created custom Recall Safe Scope Sets for each monitor mix. These custom sets turn off the recall safe settings for Mic Pres, HPF, EQ and Dynamics settings, and the input name on the Inputs side for that musician's specific instrument(s). On the Outputs side, the Scope Set turns the safes off for CH SENDS, OUT, and NAME. So these safe sets basically recall the input settings for a musician, label their input(s), recall the relative aux-send levels and panning for their entire mix, and then label their output on the console with their name.
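The interaction between Scope and Recall Safe is the part that trips people up, so here's a simplified mental model in Python. This is just how I think about it, not Avid's actual implementation: a parameter only moves on recall if it's scoped in the snapshot and not currently safed, and the custom Scope Sets simply flip the safes for one musician's channels before I hit recall.

```python
# Simplified mental model of VENUE snapshot recall (not Avid's actual code):
# a parameter is recalled only if it's in the snapshot's scope AND not
# currently recall-safed.

def recall(snapshot, safes, console):
    for (channel, param), value in snapshot.items():
        # anything not listed in safes defaults to safed (protected)
        if not safes.get((channel, param), True):
            console[(channel, param)] = value

# The "drums" Scope Set un-safes just the drummer's channels:
safes = {("Kick", "PRE"): False, ("Kick", "EQ"): False,   # un-safed
         ("Gtr 1", "PRE"): True}                          # stays protected

console = {}
drummer_snapshot = {("Kick", "PRE"): "+38 dB",
                    ("Kick", "EQ"): "HPF @ 60 Hz",
                    ("Gtr 1", "PRE"): "+20 dB"}
recall(drummer_snapshot, safes, console)
print(console)  # only the Kick settings land; Gtr 1 is untouched
```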

Now, while it was a little complicated to set up, it's actually not so bad in practice. If I want to load someone's pre-dial settings, I select the Scope Set for their position (e.g. drums), select their personal snapshot, and hit recall. I wish it were as simple as just recalling the snapshot, but it's still not that many steps.

If you’ve got any questions about any of this, though, please feel free to add them in the comments below.


Mixing Mindset: Near and Far


My good friend Jason Cole was in town recently and came out to a service, so afterwards we were obviously chatting about the mix. Jason said the service all sounded great, but one thing he mentioned was that he would have used a longer reverb on the closing song we did. So I explained to him why I chose the reverb that I did.

This got me thinking that maybe it might help some of you guys reading these musings to hear about how some mix decisions get made. Robert Scovill recently wrote a great article over at Heil Sound on how aspiring engineers should focus more on learning about music and not just the gear that makes it happen. So I thought maybe I’d try and start delving into some of the things that go through my head when I’m putting together a mix, and I’ll try and post examples whenever possible.

Now as I’m talking about this kind of stuff, you might not agree with my interpretations of things or hear things the way I do. That’s OK. You should hear things the way you hear things. The point I want to make with this kind of an article is that there is a why to the mix decisions I’m making. I’m not just throwing up faders so you can hear everything. However, at the same time keep in mind that what I’m doing comes from a musical place so it’s much more intuitive for me than trying to write about this will make it seem. I don’t stand at the console thinking about this stuff. I might listen over and over and over again, but more often than not I’m responding to what I’m hearing. Articles like this one will be an attempt to verbalize that instinct as much as possible.

So, let’s start with that closer we did. The song was called Reason to Sing and was originally released by All Sons and Daughters. We performed it stripped down with just piano, acoustic guitar, and a couple vocals. You can listen to the FOH mix embedded here, although, depending on how you’re reading this, you might need to head over to my actual website for the stream.

In The Meantime - Part 1 - Closer from North Point Online on Vimeo.


Context is a big deal to me when mixing. It starts with the context of the song itself. In other words, what was the song originally conceived to be? What is it trying to say? How should it make the listener feel?

Then there’s the context of the performance. How is the song actually being played and sung?

When it comes to cover tunes, though, the reality is that performance context doesn't always match the original conception of the song. For example, have you ever noticed how singers on singing contest shows like American Idol and The Voice smile a lot no matter what the song is? This seems to be most prevalent in the early stages of the show, and then most of those people get eliminated.

The problem I see and hear in a lot of these shows is singers being handed songs they just don’t understand, and the tone of their performance doesn’t match the intended tone of the song. For example, whenever I see people smiling when they sing Faithfully by Journey, it drives me nuts because I feel like they don’t understand the song. That song is not a happy song. It’s about being homesick. It’s about the pain of distance in a relationship and enduring that distance. There’s a reason why the line “We all need the clowns to make us smile” is in there; it’s because it’s not a smiley topic and not a happy time. The song is hopeful in the end, but it’s not a happy song. When I listen to Steve Perry’s original vocal, I hear a desperation in it, and you can see this on his face in some of the live videos floating around the interwebs.

So, in terms of mixing, the context of the performance is important because as a mixer I need to figure out which context to push the mix towards. Do I get behind the performance, right or wrong, or do I try and steer the song back towards its intended context?

Then there's the larger context of how the song will be heard. In the case of a live mix: Is this a one-off performance? Is it part of a set of music? Will the audience likely be participating? Is the audience going to be sitting or standing and listening? Etc. etc. etc.

So, in the case of Reason to Sing, we had a single song to be played at the end of the message in our service. The message was the first of a new series titled In The Meantime about what to do when you’re stuck in a situation where you feel like nothing is happening and God isn’t saying anything. When I was starting to mix the song, I didn’t know exactly how the message was going to land, but I knew the series was probably going to be a heavier topic and we’d probably be coming out of a serious moment.

I spent a lot of time with the song in virtual soundcheck trying different things even though it is a very simple mix in terms of instrumentation. I find transitioning to music out of a message is always a challenge for a couple reasons. For starters, the level of speech towards the end of a message is typically pretty low. Most of our speakers start their messages wide, then narrow and focus in towards the end. As they do this, their tone and volume tend to get more subdued, so by the end things are quiet and more intimate.

The other challenge going into music is acoustic space. Spoken word in our rooms is always as dry as can be, but music has different needs. So the challenge is transitioning from bone-dry speech to wetter music without calling attention to the music's space.

A big part of my virtual soundcheck experimentation for this song was spent on effects. I initially tried some longer and wetter settings, but they just didn’t feel right to me. I felt the song needed to be very intimate when it started. I felt this fit the context of where it was in the service and also fit the feel and tone of the song. It wasn’t supposed to be a performance song. The message of the song outweighed that.

In my opinion, the difficult thing about longer and larger reverbs is they tend to push things back and away. You can kind of counter this a bit using longer pre-delay times, but longer, audible reverbs still put things in a defined space. I initially wanted a natural space around the vocal, but I wanted the vocal more forward and the space not necessarily defined. I wanted the vocal to feel more like it’s just in the space of the room it was supposed to be heard in.
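If you want to put numbers on why pre-delay helps, the gap between the dry sound and the reverb onset reads roughly like extra path distance to the nearest reflecting surfaces. A quick conversion, with sound traveling at about 343 m/s:

```python
# Pre-delay expressed as an equivalent acoustic path difference. The longer
# the gap before the reverb arrives, the more "forward" the dry source reads
# relative to the space behind it.

SPEED_OF_SOUND_M_S = 343.0

def predelay_to_meters(predelay_ms):
    return SPEED_OF_SOUND_M_S * predelay_ms / 1000.0

print(round(predelay_to_meters(30), 1))   # ~10.3 m of extra path
print(round(predelay_to_meters(100), 1))  # ~34.3 m of extra path
```

So a 100 ms pre-delay effectively puts the "walls" more than 30 meters behind the vocal, which still defines a space; it just keeps the voice in front of it.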

This brings up another thing I should mention. My mix was being sent to all of our campuses so I also knew that their unique acoustics were going to add in to the mix in those rooms. That doesn’t mean I wanted to leave everything completely dry, though. That just means I wanted things to be more subdued.

[Photo: the vocal FX blend on the console]

I used a mix of things for vocal effects on this song. I believe I had a 2 second plate, a hall that was a little longer than 3.5 seconds, and then a 1/8 triplet delay set to the tempo of the song. The predelay on the plate was probably in the 30ms range while the hall was probably more like 100ms-ish. I think I started with some of my own presets for the plate and hall, and I just made the hall longer. I didn’t spend a lot of time messing with the balance of the different effects. I think I started with just the plate and wanted a little more so I added the hall, and then I feathered the delay in with it all until I liked the blend. Then it was just a matter of how much of the overall blend I wanted.
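For the delay, the 1/8 triplet time comes straight off the tempo: three triplet eighths fit in one quarter note, so each one is a third of a beat. The BPM below is illustrative since I'm not giving the song's exact tempo here:

```python
# Eighth-note-triplet delay time from tempo. A quarter note is 60000/BPM ms,
# and a triplet eighth is a third of that.

def eighth_triplet_ms(bpm):
    return 60000.0 / bpm / 3.0

print(round(eighth_triplet_ms(72)))  # ~278 ms at a hypothetical 72 BPM
```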

I also had an additional hall I added to the second vocal that was probably in the 2 second range. I like doing this at times with backgrounds because it helps keep the lead in front of the backgrounds. You can see the blend I was using for all the vocal FX in the photo here.

I controlled all of the FX returns with a VCA which I used to ride them through the entire song. I started with them back a bit for that intimacy, then in the second verse brought everything up a bit as the song got fuller with the added vocal and acoustic. At this point the song moves beyond being just about the lead vocal, and the vocals felt like they needed to sit back into the whole thing a bit.

So, here’s something for me when I’m working with depth. I don’t necessarily approach depth in terms of giving someone the ability to localize a sound. Stereo in live sound doesn’t work the way it can when you’re sitting in the sweet spot of a great studio, so the 3D localization cues aren’t always there for a listener. For me depth is more about foreground, background, and the space in between. Using depth this way allows me to turn things up so they are loud without making them the main focus.

For example, in this mix I added reverb to the instruments. You can hear the before and after embedded here.

Instruments – Dry


Instruments – Wet


This kind of depth/reverb is often a subtle thing to me because it’s supposed to be subtle. When I’m approaching this in a mix, I’m often just trying to get things to feel right because when everything feels right the listener’s attention isn’t drawn to specific things. He just hears the song.

Another thing I should mention related to depth is the tone of the piano here. Personally, I gravitate towards brighter pianos, but on this song I left the piano darker at the suggestion of one of our music staff members. I think it was the right call, too, because leaving the piano darker helped put it behind her brighter vocal.

As the song moves into the bridge, its intensity builds. I had my hands on every element in the mix, riding the instrumentation down a bit while also pushing the vocals up and adding more FX to them. The challenge here was to keep the instrumentation from completely overtaking the vocals. If I mixed this again I'd pull the piano back a little more, although I like the intensity that having it a little hot brings. It feels a little out of control to me, and I think that fits the song and performance in that spot.

When the song moves into the last chorus and the instrumentation drops back, I pulled the FX way back so the vocal is even drier than at the beginning of the song. I wanted it to feel like she was coming forward into the room at that point to really punctuate what she was singing because the performance is almost the equivalent of going from a shout to a whisper at that point. I felt like drying up the vocal there brings it almost right to your ear.

I think this is a good song to look at in this way first because it’s so simple. But simple doesn’t mean it’s easy or a free ride. If you’ve got any more questions about my approach on this song, please add them in the comments.


Back to My Roots


Lately I’ve been dabbling in post a bit which has been fun for me since the early years of my career were spent in that world. If you’re unfamiliar with what I mean by “post”, basically I’m doing post-production mixing/sweetening of audio for video. In this particular case I’ve been handling the post side of the audio for most of our baptism videos.

If someone wants to get baptized at North Point, one of the requirements is to make a video testimony. The videos are typically 1-2 minutes where the person being baptized tells their story of how they came to faith. Twice a month we perform baptisms in our services, and typically baptize 2 people in each service. The videos play right before each baptism with our keyboard player underscoring live from the stage. You can see some samples from our services here.

My job has been to get the audio for the story videos ready for the service. I usually start with a little cleanup of any edits done by our video editor, and then it's basic mixing/sweetening. The quality of the audio coming from the video shoots has dramatically improved since I came on staff, but there is typically still some cleanup.

I believe the audio is being captured via a Schoeps shotgun mic these days, which sounds very natural to me to start with. However, the challenge is these videos aren’t filmed on a dedicated soundstage and the people telling their stories aren’t professional talent, so there can be varying degrees of noise to deal with along with inconsistent levels and/or tonality.

I receive the audio in the form of an OMF file which I import into Pro Tools 10. I do all this work right in the auditorium using my FOH Pro Tools rig which allows me to go back and forth between the PA and my nearfields so I can make sure things will work in the room, but also still be good for broadcast.

Step one is clean-up. For a long time each of the story videos was a single take, but a few years ago we started using B-Roll in the stories which gives our editors the opportunity to make cuts if needed. Sometimes the audio side of these edits isn’t the cleanest. They might cut in the middle of a breath or create a pop, so my first task is to clean all this stuff up. Sometimes the video editor might have removed pieces of audio altogether so I might have to recreate presence or “room tone” in those gaps in order to smooth everything out. The beauty of getting the OMF is that the edits I receive are all non-destructive so I can adjust the editor’s audio “cuts” as necessary to make things sound the most natural.
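Conceptually, filling a gap with room tone is just tiling a clean chunk of recorded presence across the hole and feathering the edges so the splice doesn't pop. Here's a rough sketch of the idea in Python with NumPy; in practice I'm doing this with clips and crossfades in Pro Tools, not code:

```python
import numpy as np

def fill_with_room_tone(gap_len, room_tone, fade_len=480):
    """Fill an edit gap with looped room tone, feathered at the seams.

    fade_len=480 samples is about 10 ms at 48 kHz -- enough to avoid a pop.
    """
    tiles = int(np.ceil(gap_len / len(room_tone)))
    fill = np.tile(room_tone, tiles)[:gap_len].astype(np.float64)
    ramp = np.linspace(0.0, 1.0, fade_len)
    fill[:fade_len] *= ramp          # fade in from the clip before the gap
    fill[-fade_len:] *= ramp[::-1]   # fade out into the clip after it
    return fill

# One second of "presence" built from a half-second stand-in for room tone:
tone = np.random.randn(24000) * 0.001
patch = fill_with_room_tone(48000, tone)
```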

Once that’s complete I import my template into the Pro Tools session using the Import Session Data function in Pro Tools. This sets up stuff like my mix bus and input channel processing. Since everything gets recorded in the same room with the same equipment, I’ve developed some basic settings that seem to work well to start.

My current signal chain on the baptizee is typically a compressor (Waves CLA-2A) into an EQ (Waves Renaissance EQ) followed by a first stage of noise reduction (NS-1). The Mix Buss has a Waves L2 for limiting followed by another stage of optional noise reduction (Waves WNS) and finally loudness metering (Waves WLM+).

The EQ stuff I do is pretty subtle. It’s a high-pass and some little bits of EQ here and there. The compressor does the bulk of the work for me usually doing around 3 dB of compression, but I’ve hit it fairly hard at times and been happy with the results. The CLA-2A is really smooth, but that also means it can be slow at times so I also use Pro Tools clip gain to adjust individual words from time to time. For example, a lot of people start really loud with “Hello”, and then taper off to a general level. In these cases I’ll use clip gain to bring the beginning down, and then just let the compressor do overall smoothing on the whole thing.

Noise is usually the biggest issue, though, with varying amounts of hiss from video to video. What I've found interesting is that oftentimes the noise can be subtle on nearfields, but when I put things up on the PA it is very obvious. For most things Waves NS-1 works great, and it's super-fast and easy to work with; who doesn't love only having one control to deal with? NS-1 is like a souped-up expander for dialogue so it really just handles overall noise reduction and tends to work on the subtler stuff. When there's a lot of hiss to deal with I've been using WNS, which is like a multi-band version of NS-1, so I can fine-tune the cleanup a little better on just the higher frequencies where the noise is most pervasive.

At times I have also experimented with some other restoration tools to deal with clicks and pops here and there, but I haven't found anything in my current arsenal I've been really happy with quite yet. iZotope's RX3 is on my list of things to try, but I haven't had a chance to demo it yet. This whole process of me handling the post for these videos is still so new that I want to make sure it's going to be a longer-term thing before I invest in something else for noise reduction and restoration.

The final link in my chain is the Waves WLM+ Loudness Meter, and I use the ATSC A/85 2013 preset. I don't need a loudness meter for these because there aren't any specs to comply with. However, I find it very handy for keeping each video at a consistent loudness. Mainly I watch the "Range" to keep each video consistent, and it typically lands at 4 LU for me. Then I just use the plugin's trim feature to instantly trim the overall level up or down to hit the standard I'm using, which works great to keep my final level consistent with all the other videos.
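The trim step itself is just simple arithmetic: measure the integrated loudness, then apply one static offset to land on the target. ATSC A/85's long-form target is -24 LKFS; the meter reading below is made up for illustration:

```python
# Loudness trim: one static gain change to land every video at the target.

target_lkfs = -24.0      # ATSC A/85 long-form target
measured_lkfs = -20.5    # hypothetical WLM reading for one video

trim_db = target_lkfs - measured_lkfs
print(f"Trim: {trim_db:+.1f} dB")  # -3.5 dB for this one
```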

After this is all done, I bounce out the final mix. I never use “Bounce to Disk” for this, though. I always record the final mixes live to a new audio track so I can hear them actually going down. Maybe I’m just old fashioned because I started out in the days when you had to listen to things in real-time, but I just always feel safer listening to what I’m printing. This also makes it easy for fixes later on because I can just punch in the spot that needs fixing, consolidate the clip, and export it.

Right now the whole process probably takes me 15-30 minutes per video depending on how much cleanup is involved. However, the more of these I do, the faster I'm getting with them. Pro Tools has come a long way since I was doing this sort of thing on a daily basis, and they've definitely got some features now that are helping me get a more efficient workflow. It's also been interesting for me because there are times when I can't remember how I used to do certain things in Pro Tools, but sometimes when I get into the zone of editing I subconsciously start hitting things and remember some of the editing tricks I used to use.

It’s been a nice change of pace to do a little bit of post lately, and I’m looking forward to doing more of this.
