The Latest Posts

Back to My Roots


Lately I’ve been dabbling in post a bit, which has been fun for me since the early years of my career were spent in that world. If you’re unfamiliar with what I mean by “post”, basically I’m doing post-production mixing/sweetening of audio for video. In this particular case I’ve been handling the post side of the audio for most of our baptism videos.

If someone wants to get baptized at North Point, one of the requirements is to make a video testimony. The videos are typically 1-2 minutes where the person being baptized tells their story of how they came to faith. Twice a month we perform baptisms in our services, and typically baptize 2 people in each service. The videos play right before each baptism with our keyboard player underscoring live from the stage. You can see some samples from our services here.

My job has been to get the audio for the story videos ready for the service. I usually start by cleaning up any edits done by our video editor. Then it’s basic mixing/sweetening. The quality of audio coming from the video shoots has dramatically improved since I came on staff, but there is typically still some cleanup.

I believe the audio is being captured via a Schoeps shotgun mic these days, which sounds very natural to me to start with. However, the challenge is these videos aren’t filmed on a dedicated soundstage and the people telling their stories aren’t professional talent, so there can be varying degrees of noise to deal with along with inconsistent levels and/or tonality.

I receive the audio in the form of an OMF file which I import into Pro Tools 10. I do all this work right in the auditorium using my FOH Pro Tools rig which allows me to go back and forth between the PA and my nearfields so I can make sure things will work in the room, but also still be good for broadcast.

Step one is clean-up. For a long time each of the story videos was a single take, but a few years ago we started using B-Roll in the stories which gives our editors the opportunity to make cuts if needed. Sometimes the audio side of these edits isn’t the cleanest. They might cut in the middle of a breath or create a pop, so my first task is to clean all this stuff up. Sometimes the video editor might have removed pieces of audio altogether so I might have to recreate presence or “room tone” in those gaps in order to smooth everything out. The beauty of getting the OMF is that the edits I receive are all non-destructive so I can adjust the editor’s audio “cuts” as necessary to make things sound the most natural.
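The room-tone patching idea can be sketched in a few lines. This is a generic illustration of the technique, not a Pro Tools feature; the function name, fade length, and signal levels are all made up:

```python
import numpy as np

def fill_with_room_tone(audio, gap_start, gap_end, tone, fade=256):
    """Fill an edit gap with looped room tone, crossfading both
    seams so the patch doesn't click. `tone` is a slice of quiet
    'presence' captured elsewhere in the take."""
    out = audio.copy()
    patch = np.resize(tone, gap_end - gap_start)  # tile tone to gap length
    ramp = np.linspace(0.0, 1.0, fade)
    # Fade the patch in against the original at the front seam...
    patch[:fade] = patch[:fade] * ramp + out[gap_start:gap_start + fade] * (1 - ramp)
    # ...and fade it out against the original at the back seam.
    patch[-fade:] = patch[-fade:] * ramp[::-1] + out[gap_end - fade:gap_end] * ramp
    out[gap_start:gap_end] = patch
    return out

sr = 48000
rng = np.random.default_rng(2)
tone = rng.normal(0.0, 0.01, sr // 10)     # captured slice of room tone
clip = rng.normal(0.0, 0.01, sr)           # stand-in for the dialogue track
clip[sr // 2:sr // 2 + sr // 4] = 0.0      # editor removed this chunk

patched = fill_with_room_tone(clip, sr // 2, sr // 2 + sr // 4, tone)
```

In a DAW you'd do the same thing by hand with clips and fades; the point is just that looped presence plus short crossfades smooths the hole the edit left behind.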

Once that’s complete I import my template into the Pro Tools session using the Import Session Data function in Pro Tools. This sets up stuff like my mix bus and input channel processing. Since everything gets recorded in the same room with the same equipment, I’ve developed some basic settings that seem to work well to start.

My current signal chain on the baptizee is typically a compressor (Waves CLA-2A) into an EQ (Waves Renaissance EQ) followed by a first stage of noise reduction (Waves NS-1). The Mix Buss has a Waves L2 for limiting followed by an optional second stage of noise reduction (Waves WNS) and finally loudness metering (Waves WLM+).

The EQ stuff I do is pretty subtle: a high-pass and little bits of EQ here and there. The compressor does the bulk of the work for me, usually doing around 3 dB of gain reduction, but I’ve hit it fairly hard at times and been happy with the results. The CLA-2A is really smooth, but that also means it can be slow at times, so I also use Pro Tools clip gain to adjust individual words from time to time. For example, a lot of people start really loud with “Hello”, and then taper off to a general level. In these cases I’ll use clip gain to bring the beginning down, and then just let the compressor do overall smoothing on the whole thing.
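The clip-gain-then-compress approach can be sketched like this. It uses a toy static compressor with made-up thresholds and levels, just to show why taming the loud opening first lets the compressor work gently on the rest:

```python
import numpy as np

def clip_gain(audio, start, end, gain_db):
    """Static gain offset on one region of a clip, like pulling
    down an over-loud opening word with Pro Tools clip gain."""
    out = audio.copy()
    out[start:end] *= 10 ** (gain_db / 20)
    return out

def static_compressor(audio, threshold_db=-20.0, ratio=2.0):
    """Toy compressor: anything over the threshold is reduced by
    the ratio. No attack/release envelope, so it acts instantly."""
    level_db = 20 * np.log10(np.maximum(np.abs(audio), 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    return audio * 10 ** (-over * (1 - 1 / ratio) / 20)

sr = 48000
voice = np.concatenate([
    0.9 * np.ones(sr // 2),    # loud "Hello!"
    0.3 * np.ones(sr),         # the rest of the story
])

# Bring the opening down ~6 dB first, then let the compressor
# do the overall smoothing instead of slamming the intro.
leveled = clip_gain(voice, 0, sr // 2, -6.0)
smoothed = static_compressor(leveled)
```

With the opening pre-leveled, the compressor only has to even things out rather than chase a 10 dB jump it may be too slow to catch.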

Noise is usually the biggest issue, though, with varying amounts of hiss from video to video. What I’ve found interesting is oftentimes the noise can be subtle on nearfields, but when I put things up on the PA it is very obvious. For most things Waves NS-1 works great, and it’s super-fast and easy to work with; who doesn’t love having only one control to deal with? NS-1 is like a souped-up expander for dialogue, so it really just handles overall noise reduction and tends to work on the subtler stuff. When there’s a lot of hiss to deal with I’ve been using WNS, which is like a multi-band version of NS-1, so I can fine-tune the cleanup a little better on just the higher frequencies where the noise is most pervasive.
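The downward-expander idea behind a dialogue noise reducer can be sketched like this. The actual Waves algorithms are proprietary; the threshold, ratio, floor, and time constants here are invented for illustration:

```python
import numpy as np

def downward_expander(audio, sr, threshold_db=-45.0, ratio=2.0,
                      floor_db=-15.0, attack_ms=5.0, release_ms=50.0):
    """When the envelope drops below the threshold (the talker
    pauses), pull the signal down toward the floor so background
    hiss falls away; above the threshold, leave speech alone."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000))
    rel = np.exp(-1.0 / (sr * release_ms / 1000))
    env = 0.0
    out = np.empty_like(audio)
    for i, x in enumerate(audio):
        rect = abs(x)
        coeff = atk if rect > env else rel
        env = coeff * env + (1 - coeff) * rect          # smoothed envelope
        level_db = 20 * np.log10(max(env, 1e-9))
        if level_db >= threshold_db:
            gain_db = 0.0                                # speech: untouched
        else:
            gain_db = max(floor_db, (level_db - threshold_db) * (ratio - 1))
        out[i] = x * 10 ** (gain_db / 20)
    return out

sr = 48000
rng = np.random.default_rng(1)
track = rng.normal(0.0, 0.001, sr)                      # constant hiss
track[:sr // 2] += 0.5 * np.sin(2 * np.pi * 200 * np.arange(sr // 2) / sr)

cleaned = downward_expander(track, sr)
```

The multi-band twist WNS adds amounts to running this same gain computer per frequency band, so the highs can be pulled down harder than the body of the voice.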

At times I have also experimented with some other restoration tools to deal with clicks and pops here and there, but I haven’t found anything in my current arsenal I’ve been really happy with quite yet. iZotope’s RX3 is on my list of things to try, but I haven’t had a chance to demo it. Handling the post for these videos is still such a new thing for me that I want to make sure it’s going to be a longer-term thing before I invest in something else for noise reduction and restoration.

The final link in my chain is the Waves WLM+ Loudness Meter, and I use the ATSC A/85 2013 preset. I don’t need a loudness meter for these because there aren’t any specs to comply with. However, I find it very handy for keeping each video at a consistent loudness. Mainly I watch the “Range” to keep each video consistent, and this typically lands at 4 LU for me. Then I just use the trim feature on the plugin to instantly trim the overall level up or down to hit the standard I’m using, which works great to keep my final level consistent with all the other videos.
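The consistency trick amounts to measuring one loudness number per video and applying a single static trim. Here's a minimal sketch, with plain RMS standing in for the K-weighted ITU-R BS.1770 measurement a real loudness meter like WLM+ uses, and a -24 target in the spirit of ATSC A/85:

```python
import numpy as np

def loudness_db(audio):
    """Whole-clip RMS in dBFS -- a crude stand-in for a proper
    K-weighted program-loudness measurement."""
    rms = np.sqrt(np.mean(audio ** 2))
    return 20 * np.log10(max(float(rms), 1e-9))

def trim_to_target(audio, target_db=-24.0):
    """One static trim gain so the clip's measured loudness lands
    on the target, like the trim feature on the loudness meter."""
    gain_db = target_db - loudness_db(audio)
    return audio * 10 ** (gain_db / 20), gain_db

sr = 48000
t = np.arange(sr) / sr
story = 0.25 * np.sin(2 * np.pi * 440 * t)   # stand-in for a finished mix

trimmed, gain_db = trim_to_target(story)
```

Because the trim is one static gain, it moves every video to the same measured loudness without touching the dynamics inside each one.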

After this is all done, I bounce out the final mix. I never use “Bounce to Disk” for this, though. I always record the final mixes live to a new audio track so I can hear them actually going down. Maybe I’m just old-fashioned because I started out in the days when you had to listen to things in real time, but I just always feel safer listening to what I’m printing. This also makes it easy for fixes later on because I can just punch in the spot that needs fixing, consolidate the clip, and export it.

Right now the whole process probably takes me 15-30 minutes per video depending on how much cleanup is involved. However, the more of these I do, the faster I’m getting with them. Pro Tools has come a long way since I was doing this sort of thing on a daily basis, and they’ve definitely got some features now that are helping me build a more efficient workflow. It’s also been interesting for me because there are times when I can’t remember how I used to do certain things in Pro Tools, but when I get into the zone of editing I subconsciously start hitting keys and remember some of the editing tricks I used to use.

It’s been a nice change of pace to do a little bit of post lately, and I’m looking forward to doing more of this.


PA 2013 2014 – Making It Better

As you may recall, last September we installed a new PA in our West Auditorium. By and large we’ve been very happy with the way the new system has been performing. I really worked hard not to do much to it after the initial few weeks of the install so we could break it in and live with it for a while. As everything has settled in, one area where we realized we weren’t satisfied was the performance of our outfills.

PAs like ours are always an interesting animal to me. Last time I counted, I believe we were just shy of 50 boxes in the system spread across the mains, subs, and fills. It’s not really a complex system, but there’s a good bit of it, and I think whenever you start crafting big systems like this you’re bound to run into places where you have to pick some sort of compromise.

When we installed the outfills, I had some reservations about the initial positioning of the boxes. We talked through the pros and cons of repositioning them, and ultimately we left them where they were with only a minor adjustment. Based on the feedback I’ve received on this area of the system along with my own persisting uneasiness, I think I made the wrong call, though. It happens.

A couple months ago I started examining this area of the PA to see if anything could be done. My first step was a lot of measuring and listening to things and tinkering around in the DSP. Through this it became pretty evident there was nothing electronic that was going to fix things so I started talking with Clark who designed and installed the system.

My friend Ed Crippen over at Clark came out about a month into the process to investigate things himself and almost immediately confirmed my belief that somewhere in the process our outfills had landed in the wrong place. But like most things that go wrong, the reason why isn’t always so simple.

One of the original concerns some of us audio folks had with installing a new PA in the West was related to acoustics. When the West Auditorium originally launched, it was an experiment; the West is our local video venue when it comes time for the message on Sundays. Nobody knew if the video-church concept would work for speaking when it was initially launched, so Plan B was to use the room as a multi-purpose room if video-church fell through. This informed the acoustic design of the space. Obviously, video-church worked, because the concept that originated at North Point has been adopted by churches beyond our walls; however, the acoustic design of the room has remained untouched.

The East Auditorium features a lot of absorption on the walls, but I was told there was initial concern about treating the West the same way because the possibility existed the room could one day become a gym. Acoustic treatment, as many of you probably know, can be very expensive, and I believe there were concerns about damaging it. So the room was built utilizing RPG DiffusorBlox® instead of traditional fiberglass-style absorption.

The West was always a livelier-sounding room than the East with the original PAs, and based on this there was quite a bit of debate amongst our production guys on the effectiveness of the DiffusorBlox®. I wanted to reserve final judgement on the acoustics as much as possible until after the PA went in because my friend Chris Briley had told me how great the room had been when he brought in a couple concert-level rigs over the years. Still, when we were installing the new outfills, I can’t say I wasn’t a little gun-shy about the walls, and this was part of the conversation we had on their position.

Part of the reason I didn’t push more on the initial outfill positioning was because we were concerned about what would happen if we pushed too much energy into the front- and side-walls of the room. I knew from experience in the East where there is a lot of treatment that the outfills energize the room and create some not-so-nice slaps, and that was always in the back of my mind during the initial install.

The riggers pre-rigged the outfills a week or two before the actual install, and repositioning them during the install would have added half a day to a day of work for them on a day we were supposed to be fully into optimizing the system. Moving the boxes would have also pushed the optimization back a day, which would have put me in a bind because Steve Bush from Meyer would not have been able to be on-site for the new last day, leaving me to finish optimization on my own.

Could I have done it? Sure. But I knew I would feel a whole lot better about things if Steve was there for the entire process.

So, faced with the potential of adding a day to the process and putting myself in a less-than-optimal situation for a system launch, I opted against spending a day on something I thought might backfire on us. And it’s not like we didn’t listen to the outfills before we all signed off on the positioning. There was always intelligibility throughout the area, but we just weren’t getting wowed by the boxes.

Now, a lot of people probably wouldn’t have bothered to even look at making this area better and would have written it off as one of the many compromises of system integration. After all, this area of the room features the least desirable seating. However, aside from just trying to make things better, which is a natural component of North Point’s DNA, that area of coverage really does matter to us quite a bit.

You see, our West Auditorium tends to fill up after the East is full. Because of the way the rooms sit, the majority of people enter the West at the front of the room on the far sides. This means just about everybody walks directly through the coverage of the outfills, often after programming begins, and we feel it’s just as important to greet everybody with a great audio experience on the way in as when they are in their seats.

Think about it: Your friend/neighbor/brother/etc. invites you to go somewhere, and you finally give in and accept the invitation. You get stuck in traffic and crowds and end up getting in late so now you’re walking into this place you’re not sure you really want to be at in the first place. If you walk in and you’re greeted with an experience that’s just “meh”, how would that make you feel compared to something that sounds great? We believe our services start the minute someone sets foot on one of our campuses so if there’s something we can do to make an experience great at every step of the way, we try and do it.

So Ed came back out a couple weeks ago with riggers in tow to move our outfills. The process took two days, with most of the first day spent relocating the speakers. Ed and I started in the morning using Meyer Sound’s MAPP to model the refined positioning while the riggers dropped the first boxes. We ended up bringing the boxes onstage a couple of feet and rotating them a good bit to extend their coverage as far forward in the room as possible while also covering all the way to the side walls. We finalized the position by listening to the coverage after the riggers got the boxes back up in the air.

Once that was finished, we started working them into the system; however, this got a little more complicated than I initially expected. The original positions of the outfills had impacted the original optimization enough that we ended up touching up most of the other fills in the system. Fortunately, the main arrays were left largely untouched. The entire re-optimization probably took just under a day when you look at how we spread the process across two days.

I think the results are definitely an improvement on what was already a great PA, with clarity and coverage on the far sides noticeably better. Through the re-optimization we were also able to tighten up the low end. I think this room might actually have a slight leg up on the East Auditorium, which is something that has never even been suggested in my almost 8 years on staff. Now I’m sure the coming months will see me trying to get the East PA on an even footing with the West, but I have a feeling the West might always have an edge thanks to that DiffusorBlox® we were so concerned about. There’s something about the room that feels pretty good.


QuickTip: Drum Spanking


Not too long ago I was chatting with some engineers, and the topic of parallel compression came up. Most of the guys I was talking with seemed to have abandoned it. Nothing wrong with that; however, the way they were talking about it made me wonder if everyone quite understands what parallel compression can really do and how to make it work for fun and profit.

I think I’ve talked about this in the past so I’ll summarize a bit. For me, parallel compression on drums is part of my drum sound. Basically, I feel like it thickens them up and gives them added weight and power in the mix. It can also be used to get them louder without substantially raising the peaks; in other words, it can save headroom in the system. It can also help bring out some of the subtlety in the playing without adding the side effects of compression. In a way it’s kind of a have-your-cake-and-eat-it-too thing for me.

The traditional way of doing this that you’ll find on forums/videos/etc. is to take your drums (preferably minus cymbals) and send them to a couple of busses or groups. You leave one group clean or clean-ish and stick a compressor on the other one. Then you compress the snot out of the compressed/spanked group and blend it in with your natural drums.

So that’s one approach, and I guess it can work, but it never gave me what I was looking for with this.

I really owe this stuff all to Robert Scovill because he was the first engineer who opened my ears to what this could do and gave me a starting point to make it work. One of the biggest things I took away was to not compress the compressed buss so much. To this day, I still rarely use more than 6 dB of gain reduction on my compressed buss, and I’m more often around 3-4 dB or so.

If you’re trying to play with this, and you’re not sure about how to set it up initially, try this:

Turn off your uncompressed buss so all you hear is your “spanked” drums. Now just dial in some compression on them so they’re compressed a bit. You can go a little fast on the attack to take off the transient a bit, and then keep the release medium to fast to keep your punch. I’d recommend starting with a 4:1 ratio or so. You don’t need to hammer the drums, though. Just compress them a bit so you actually hear them as sounding compressed.

Once you get this dialed in, take the group out of the mix and go back to just your regular drums. Mix your regular drums in to about where you’d like them to sit in the mix, and then feather your compressed buss into the mix. As you bring in that compressed bus, your drums should gain authority in the mix.
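Put together, the whole recipe looks something like this sketch: a modest 4:1 on the parallel bus with a fastish attack and medium release, then feathered in under the dry drums. All the numbers here are illustrative starting points, not anyone's exact settings:

```python
import numpy as np

def bus_compressor(audio, sr, threshold_db=-18.0, ratio=4.0,
                   attack_ms=2.0, release_ms=80.0):
    """The 'spanked' bus: fastish attack shaves the transient a
    bit, medium release keeps the punch, only a few dB of GR."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000))
    rel = np.exp(-1.0 / (sr * release_ms / 1000))
    env = 0.0
    out = np.empty_like(audio)
    for i, x in enumerate(audio):
        rect = abs(x)
        coeff = atk if rect > env else rel
        env = coeff * env + (1 - coeff) * rect
        level_db = 20 * np.log10(max(env, 1e-9))
        over = max(level_db - threshold_db, 0.0)
        out[i] = x * 10 ** (-over * (1 - 1 / ratio) / 20)
    return out

def parallel_blend(dry, spanked, spank_level_db=-6.0):
    """Feather the compressed bus in under the untouched drums."""
    return dry + spanked * 10 ** (spank_level_db / 20)

sr = 48000
dry = np.zeros(sr)
n = sr // 8
for hit in range(4):                        # four decaying toy drum hits
    start = hit * sr // 4
    burst = np.exp(-np.arange(n) / (sr * 0.02)) * np.sin(
        2 * np.pi * 100 * np.arange(n) / sr)
    dry[start:start + n] = burst

spanked = bus_compressor(dry, sr)
blend = parallel_blend(dry, spanked)
```

The blend adds level between and after the transients (that's the added weight and sustain) while the peaks barely move, which is exactly the headroom-saving behavior described above.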

But now my drums are too loud!

Well, if they’re a bit too loud, pull both of your groups back a bit to bring the overall level down. But before you do that, go listen to some old Celine Dion stuff.

I’m being serious.



Actually, you know what? I’ll save you a trip. Here’s some Celine embedded right here courtesy of Spotify:

Now, if the drums in a Celine Dion mix are louder than the drums in your mix, I think you can turn your drums up.


Here’s a bonus tip. A popular design in a lot of modern compressors, and especially plugin compressors, is a wet/dry knob. The idea behind this is that you can do parallel compression without taking up an extra buss and then easily blend between the compressed and uncompressed signals. It’s definitely a cool feature to have, but I don’t think you should use it for this.


Well, what if you want to use some kind of vintage-y EQ with fairly wide filters on only the compressed drums? Maybe you want to bump 5k a little and maybe something below 100 Hz. How would you do that with a wet/dry knob? You wouldn’t be able to.
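To make the routing difference concrete, here's a sketch: with a separate bus you can hang an EQ on the compressed path alone, which a wet/dry knob can't give you. The static compressor and one-pole shelf below are crude stand-ins, not models of any real plugin:

```python
import numpy as np

def static_compress(audio, threshold_db=-18.0, ratio=4.0):
    """Stand-in for the bus compressor (static, no envelope)."""
    level_db = 20 * np.log10(np.maximum(np.abs(audio), 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    return audio * 10 ** (-over * (1 - 1 / ratio) / 20)

def high_shelf(audio, sr, gain_db=3.0, corner_hz=5000.0):
    """Crude one-pole high shelf: split off the highs with a
    low-pass and add a bit of them back -- a gentle 5k-ish bump."""
    alpha = np.exp(-2 * np.pi * corner_hz / sr)
    low = np.empty_like(audio)
    prev = 0.0
    for i, x in enumerate(audio):
        prev = alpha * prev + (1 - alpha) * x
        low[i] = prev
    return audio + (audio - low) * (10 ** (gain_db / 20) - 1)

sr = 48000
drums = np.random.default_rng(0).normal(0.0, 0.2, sr)

# Separate bus: the EQ shapes ONLY the compressed path.
mix_separate_bus = drums + 0.5 * high_shelf(static_compress(drums), sr)

# Wet/dry knob: one blend of the same two signals; any EQ placed
# after the plugin hits dry and compressed alike.
mix_wet_dry = 0.5 * drums + 0.5 * static_compress(drums)
```

That extra processing slot on the parallel path is the whole argument for keeping the dedicated bus instead of the knob.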
