
Drum Time Alignment Demo

Drive 2008 Drum Kit

Here is the awaited demonstration of what I’ve been doing with time alignment on drums. The only caveat I have for you is to make sure you listen on a decent playback system. When I did this in the studio the results were clear, and when we listened back on our PA the results were clear, but when I listen on my MacBook Pro speakers it’s not so clear. So I guess if your sound system is a bunch of MacBook Pro speakers, you’re not going to get much out of this.

This is a quick demo, so it’s only going to demonstrate the results of two mics, but it will give you an idea of what this does. Basically, in the demo you’re going to hear the close mic on the snare along with the overhead, with zero EQ on the inputs. You’ll hear each mic on its own and then a combination of the two. Finally, I apply delay to the snare mic to time align it to the overhead, delaying it to match the arrival of the snare in the overhead.

A couple of quick things I forgot to mention in the demo. When I play the two sources together with alignment, at the end of the sample the time alignment is turned on for the ENTIRE buildup and fill, but not the pre-chorus, assuming you can tell where the pre-chorus is. I also mention that the delay used is about 110 samples, which is about 2.5 ms. If you’re going to try this, you probably don’t need to go all the way down to the sample level. Since I do all my delay measurements in Pro Tools, it’s very easy to work at the sample level, and I figure the closer I can get with the initial measurements and settings, the better off I’ll be if mics shift through the set from vibrating risers. However, you could probably get the same measurements by measuring the distance from your source to the two microphones and then using the difference to delay the closer mic to match the farther one; remember, 1 foot is approximately 1 millisecond of time.
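If you want to sanity-check that arithmetic, it’s simple enough to script. This is just an illustrative sketch, not from the demo; the 44.1 kHz sample rate and the speed-of-sound figure are my assumptions:

```python
# Illustrative delay arithmetic for mic time alignment (assumptions:
# 44.1 kHz sample rate, speed of sound ~1130 ft/s).

SAMPLE_RATE_HZ = 44100
SPEED_OF_SOUND_FT_PER_S = 1130.0

def samples_to_ms(samples, sample_rate=SAMPLE_RATE_HZ):
    """Convert a delay in samples to milliseconds."""
    return samples / sample_rate * 1000.0

def distance_to_ms(close_ft, far_ft, c=SPEED_OF_SOUND_FT_PER_S):
    """Delay to apply to the closer mic so it lines up with the farther one."""
    return (far_ft - close_ft) / c * 1000.0

print(samples_to_ms(110))        # ~2.49 ms, matching the "about 2.5 ms" above
print(distance_to_ms(0.5, 3.0))  # snare mic at 6 in, overhead at 3 ft -> ~2.2 ms
```

Note that at 1 ms per foot, a 2.5 ft path difference and 2.5 ms of delay come out the same either way, which is why the rough string-and-tape-measure method gets you close.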

[audio:timealignment-demo.mp3]


UPDATE

Here are some drums in context of a band mix. NOTE: This isn’t a FOH board mix. This is more of a “broadcast” style mix that makes use of the same alignment techniques.

[audio:drumcontext.mp3]
David Stagl

20 Responses to “Drum Time Alignment Demo”

  • Chris
    16 years ago

    Dave – the difference is very clear to me using my in-ears. Thanks for this, it definitely helps to hear what I am looking for. I’ve recently spent some time fine-tuning my delays for drums and band and I’ve appreciated the results, but I think being able to hear the difference on and off is great. You mentioned you’re using a Rode NT4 for overhead. Are you using this for your FOH sound or was that just for the recording? Where is it placed and in what orientation?

  • The NT4 is used for everything I do live these days. I’m not sure what I would do if I were back in the studio because then I get to go nuts with room mics and all kinds of other excessive craziness. I just put the demo together in the studio since it was easier to do in there, but the tracks in use were all recorded live.

    The NT4 is about 3 feet above the snare centered over the kit sort of between the drummer and the drums, if that makes sense; it’s over the kick, but cheated a bit towards the drummer. Then I rotate it a bit clockwise so that the capsules sort of aim across the kit with one pointed towards the drummer’s left crash and the other capsule towards the floor tom. This was a Scovill suggestion, and I like it because it gives a nice image of the whole kit. Some guys don’t like their overheads that high, but I like it because it gives me a kit sound–that’s my studio background, I guess–which is nice because every once in a while I might have a gate that’s too tight on a tom and this way I don’t completely lose them.

    This actually brings up a good point. My overheads aren’t just for cymbals. I use them for the entire kit. It’s not for everybody, but I like it. Keep this in mind: when I’m mixing drums, I put up the kick and the overheads first and then start sliding in the close mics.

  • Jeff T.
    16 years ago

    Hey – thanks for sharing your knowledge in such a clear way! I have an ignorant question for you:

    Can you only pull this off using digital boards? We run an analog 48-channel Midas.

    Thanks!

  • Jeff, it’s going to be a lot easier to pull off with a digital console that has a discrete digital delay for each channel. In the analog world you could start inserting delays on every channel, but I would think it’s going to get a little hairy in terms of accurately lining things up because it’s going to be challenging to measure the latency of the delay you’re inserting.

    If you like what you hear in the demo, I still wouldn’t sweat it if you can’t quite do this yet. We’ve been mixing for years without this ability and have gotten by. When I mix in the analog domain in our Series 5 room, I don’t do any of this; I just handle the drums “old school”.

  • Kevin Poole
    16 years ago

    Dave,

    Can you outline for your whole kit what your delays are for each open mic? What is your zero time point? (Kick?, overheads?) First time I saw this done was at LU when Ryan Lampa (Tait and TobyMac) did it on a 5d. I think he delayed everything to kick. It was a long time ago though.

  • The delays change week-to-week depending on how each drummer sets up his kit. Zero point is the overheads. I do the snare and toms on their own and then tend to use the delay I add to the snare channel as a generic delay time that I apply to all instruments; my thinking is our band is playing more to the close mics than the overheads, and I want to maintain their performance timing. So in the case of the kick, I will just delay it the same as the snare since the bleed into the overheads is usually not significant enough to induce audible comb filtering.

    One other thing I’ll do is make delay adjustments for multiple mics on each drum. For example, I will modify the delay on the beta91 in the kick to line it up with the Sennheiser 902. Same goes for the top and bottom snare mics.

  • brent(inWorship)
    16 years ago

    The difference is huge.

    We’ve begun playing with delays, based on your ideas from stage to FOH and we’ve really liked the differences. This is not something we’ve messed with yet, but I can see how it could really open up some sonic options for us and give us better audio to work with from the beginning.

    One thing that would be different for us is that we are using a Clearsonics cage. It is a necessary evil for the size of our room. We are able to get the sound we want out of it and have narrowed our overheads down to give us the tone we want coming from that space. But we have to close mic in this situation because of the live nature of the cage. Pulling the mics off the overheads for us gives us too much kit and not enough overheads. Definitely a balance for us.

    I think I will give some of the drum mic delays a try this week. Thanks!

  • Wow. It also sounded like there was much more low-frequency content when they were lined up. (Maybe just my headphones: Sony MDR-7506.)

    Usually, however, you don’t need to do this in a mix, because once you layer guitars, bass, keys, and vocals, all of a sudden you have so much sonic material masking frequencies all around that it doesn’t make too much of a difference whether the drums are in perfect phase coherence.

    To Kevin Poole:
    One thing I do for the kick is put the microphone in front of the kick and then cover that area with blankets or some foam panels to keep the cymbals from bleeding into the mic or the kick from bleeding into the overheads.

  • JB, you’re right, there is more low frequency stuff when lined up, but I don’t necessarily agree that getting that back is always a negative. The comb filtering that takes place between a snare mic and an overhead can, at times, be beneficial. But bear in mind that whenever you’re dealing with comb filtering it’s happening across the frequency spectrum. The engineer just needs to decide which way he wants to deal with things. Either way can work.

  • Jeff T.
    16 years ago

    Thanks for the answer, Dave. I know this is another totally ignorant question but is this technique used on quality studio recordings of drum kits?

  • Wow. Even on my little MacBook speakers I can hear a huge difference. I’m in the same boat as Jeff T. though. Still analog. Still old school for me.

  • I have heard of this being done in the studio. Although, the difference in the studio is that guys employing this typically just shift the audio right on the timeline. In fact, here’s a studio trick you can do if you don’t have a stereo mic for overheads, and you don’t want to do an XY thing with a pair of mics. Once you get your first overhead where you want it, take a piece of string and measure off the distance from the snare to the mic. Now, using the string, position your second overhead keeping it the same distance from the snare as your first overhead. It might look a little weird, but you’ll get the snare phase coherent in your overheads.

  • I just updated the post to add a brief sample of drums in the context of a band mix.

  • Jeff T.
    16 years ago

    Thanks, Dave! That’s a great studio tip.

  • Kevin Poole
    16 years ago

    I read an article (http://www.prosoundweb.com/live/articles/daverat/polarity.shtml) by Dave Rat where he talks about phase coherence of a drum mix as it relates to the PA on stage (side fills) and in the house. Do you have a bottom snare mic in your input list, is it phase reversed, and do you notice any differences in your delay times due to this? Would it even affect it? (I’m just full of questions, aren’t I?)

    Thanks Dave for taking time with this. I’m learning and sharing your wisdom with others…

    KP

    To JB: In the studio, blankets are no big deal, but in a live application we use a shield, and the shield of course tends to funnel cymbals into just about everything. We have treated shields in the past, which helps a little, but aesthetics seems to be the priority for our ministry, not sonic goodness 🙂 Thanks for the advice.

  • Chris
    16 years ago

    Dave – slightly off topic, but along the lines of Kevin’s post: Do you reverse the phase on the mic inside the kick drum? If so, is it in phase with either your top or bottom snare?

  • I do use a bottom snare mic and the polarity is generally flipped on it. It’s always worth it to check the polarity both ways, but for me it’s generally basic physics that the bottom is going to be out of polarity with the top. Think of it this way: as the drummer’s stick contacts the top head, the head is moving away from the top mic for the initial impulse. Conversely, the drum head is moving towards the bottom mic for the initial impulse.

    9 out of 10 times, the snare arrival is late in my bottom mic, so I generally take the difference in delay between the top and bottom mics and SUBTRACT it from the bottom mic to compensate for it being late, which in turn gets it in time with the overheads. I typically just look at the waveform in Pro Tools to check polarity and set it appropriately for summation.

    I’ve tried the Dave Rat thing with flipping polarity on the kick drum with mixed results. Something to keep in mind if you’re trying that out is to make sure you stay in polarity with the BASS guitar. On my system in our east auditorium, I know that the polarity on our subs is reversed in the system processor; that was something I found a while back while looking at the phase trace in Smaart.

    The great thing about polarity is most consoles I work on these days have a button for it so it’s always an easy thing to check. Never hurts to pop it in and out and listen for what’s better even when the displays and measurements say something else.

    Flipping polarity (not phase; phase is related to time) shouldn’t affect delay times. I think of it this way: flipping polarity is like looking at a wave in a mirror. The visual representation of the waveform is “upside-down,” but the phase isn’t really different. Phase describes where a wave is within its cycle relative to a specified reference point in time. When the polarity is flipped, the cycle of the wave hasn’t started at a new time. When discussing phase between signals, you always need two, where one is the reference. When discussing polarity, however, you only need one signal. Are you getting confused yet?

    So how do you tell if things are out of phase or polarity or both? With a multi-track recording in a DAW, it’s easy to see the waveforms; out of time will be out of phase, and upside-down waves will be out of polarity. When I’m optimizing a system, I use a program called EASERA SysTune, which is comparable to SmaartLive. Within SysTune, I will capture traces and pay close attention to the phase plot along with the impulse response. Sources that are out of phase will have IRs that are offset in time. For polarity, a careful examination of the impulses will display the polarity relationship, and the phase traces will also show things 180 degrees apart.
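The polarity-versus-phase distinction above can be demonstrated numerically. This is an editor’s toy sketch (not from the original discussion); the sample rate and pulse shape are arbitrary assumptions:

```python
import numpy as np

# Toy illustration: polarity inversion flips a waveform upside-down without
# moving it in time, while a delay produces a real time (phase) offset.

fs = 48000                                    # assumed sample rate
t = np.arange(256) / fs
pulse = np.exp(-((t - 0.001) * 8000) ** 2)    # an impulse-like bump

flipped = -pulse                              # polarity inversion: multiply by -1
delayed = np.roll(pulse, 24)                  # 24-sample delay (~0.5 ms at 48 kHz)

def lag(a, b):
    """Time offset (in samples) of b relative to a, via cross-correlation."""
    xc = np.correlate(b, a, mode="full")
    return int(np.argmax(np.abs(xc))) - (len(a) - 1)

print(lag(pulse, flipped))   # 0  -> same timing, only the sign changed
print(lag(pulse, delayed))   # 24 -> an actual shift in time
```

The cross-correlation peak is exactly what measurement tools key on when they report time offsets from an impulse response, which is why a pure polarity flip shows up as a 180-degree phase trace rather than a delay.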

  • Kevin Poole
    16 years ago

    I tried this last night, and it worked great. I wish all the consoles we owned were digital; I would implement this all the time. I was a little concerned about using it in a live setting, whether or not it would affect the performance of the musicians at all. I know in my mind that 2-3 ms of time is not enough of a delay difference to make a musical difference, but the musician in me was still hesitant. Glad to say that I’m a fan of this technique. Mixing to in-ears makes fine-tuning very easy. I was able to hear the “samples” (think that is what you called them) line up and the filtering disappear (I’m still not comfortable with all the terms). Thanks Dave for your wisdom and willingness to teach.

  • Hey Dave, I really like the way everything in the “broadcast” mix sounds. You can hear every detail. I was just wondering what you used to record that and what effects and/or processors you used to run things through?

  • Hey, Mark, sorry for the delay in responding. It was recorded with an Amek Media 51 into a Pro Tools HD4 rig. Interfaces were a combination of 192 IO’s and 96 IO’s. I honestly can’t remember the specific processing that gets used on everything, but I think it’s stuff like Impact on the kick drum, CompressorBank on the snare and probably overheads. Maybe the Focusrite EQ plugin in a couple places–D2 I think–but most of the EQ stuff I’ve been doing in ProTools has been using the EQIII plugin since it’s pretty much the same EQ as the Venue’s EQ. Verb is ReverbOne, I think, but if I had Revibe or the TLL verb plugin on the rig, I would opt for that instead.