Taking on the Space-Time Continuum

This week I took a stab at the time alignment thing. I did a little bit of research on this, and there are a lot of discussions about time alignment. There is actually a bit of controversy surrounding it because you can’t really align things for an entire room. If you have multiple speaker clusters, when you sit on the far left side of the room the sound from the right cluster will always arrive after the sound from the left cluster, and vice versa. If you align the clusters so that both arrive at the same time on a far side of the room, the alignment will be hosed for the rest of the room; there is no perfect solution for this. Differing arrival times can also cause comb filtering. If you dig through discussion groups on the net, there is enough info out there to make your head hurt on where to time align things to, how to do it, and whether you should even bother. Here’s my take:

Live sound is always a series of compromises; in fact, if you’re mixing for a church your job might seem like it’s more about compromising than mixing. No matter how great your PA is, it’s never perfect. It was very hard for me, but I got over this a long time ago; the gig is to do your best to make it sound great for the largest portion of the audience and go to sleep at night with the results. FOH in our rooms is in a great location that is a good average for our audience, so I did all my alignment to FOH.

I did all of the alignment this week within the Venue. Some of it could (and really should) be done elsewhere, but our current sound system is up there in age, and given our present configuration the Venue was the easiest way to try most of this out. Some of the alignment could only be done thanks to Venue. We tackled two things this week: PA alignment and individual channel alignment.

Step 1: Fixing some PA alignment issues

This was sparked by my discussion with Robert Scovill. I’ve been very intrigued by his use of an LCR system on major tours, but without all the crazy cross-matrixed stuff that you read about (he actually wrote an article about it for MIX magazine a few years ago; you can find it via Google). Anyway, one of the things he said was it’s imperative that the center cluster/array is time aligned with the L & R clusters/arrays.

Our center cluster is slightly forward and below the L & R. Since I want to experiment with the LCR thing at some point, I wanted to make sure this was properly aligned. I did some measurements with Smaart and found that the L & R arrival times were about 4 ms behind the center cluster. I inserted a delay on the center cluster and the PA warmed up quite noticeably as a result of this.
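The arithmetic behind that measurement is just path difference divided by the speed of sound. Here’s a minimal Python sketch; the 1.372 m figure is a made-up path difference chosen so the math lands near the ~4 ms I measured, not our actual geometry:

```python
# Speed of sound in air at roughly 20 °C, in m/s (it shifts with temperature).
SPEED_OF_SOUND = 343.0

def align_delay_ms(near_path_m: float, far_path_m: float) -> float:
    """Delay to insert on the earlier-arriving cluster so both arrive together."""
    return (far_path_m - near_path_m) / SPEED_OF_SOUND * 1000.0

# Hypothetical numbers: center cluster ~1.372 m closer to FOH than L & R.
print(round(align_delay_ms(0.0, 1.372), 2))  # prints 4.0
```

In practice you measure the arrival offset directly with something like Smaart rather than a tape measure, but the conversion is handy for sanity-checking what the analyzer tells you.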

Now, if you’re like me you might be wondering why this was never done before. Basically, our system in the east is about 8 years old and while there are definitely some coverage issues we’re going to fix with a future system upgrade, we’ve been pretty happy with the system as it currently stands. While I was very happy with the results, if I had to turn this stuff off we’d be fine.

Step 2: More PA experimentation

Our sub arrays hang behind our top boxes, and they’ve never been time aligned; again, it has been working great. I inserted additional delay on the center cluster along with the L and R, and we adjusted it a bit to fall in with the subs. This helped a bit, but I don’t think it made as big an impact as pushing the center cluster back in time.

On Sunday I found out that this actually caused some problems. I got called into our video control room during the message because our pastor sounded a bit robotic. I quickly identified it as a phase issue. When I inserted delays on the L & R busses, I threw them out of phase with the spoken word subgroup we feed to the control room to give them more level. I have a couple of ideas for a workaround this week, but this is exactly why PA alignment is best done by dedicated processors and not within a console…
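That robotic sound is classic comb filtering: when a signal sums with a copy of itself offset by some delay, you get cancellation nulls at odd multiples of 1/(2 × delay). A quick Python sketch of that standard relationship, using the ~4 ms figure from Step 1, shows why speech suffers:

```python
def comb_notches_hz(delay_ms: float, count: int = 5) -> list[float]:
    """First few cancellation frequencies when a signal sums with a copy
    of itself delayed by delay_ms (nulls at odd multiples of 1/(2*delay))."""
    delay_s = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

# A 4 ms mismatch puts the first nulls near 125, 375, and 625 Hz,
# right in the middle of the speech range.
print(comb_notches_hz(4.0, 3))
```

The exact depth of the nulls depends on the relative levels of the two paths, but the frequencies fall out of the delay alone, which is why even a few milliseconds can be so audible on a voice.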

Step 3: Backing the PA up to the drums on stage

This was tougher to do because I did it entirely manually, and I think I sort of stumbled on it accidentally. I would have loved to use Smaart to do some measurements, but I just couldn’t figure out a quick way to do it. I did this in the Venue using the channel delays to back up the PA for our instruments ONLY; I don’t want to deal with sync issues with anyone else on stage, and the channel delays gave me a relatively easy way to dial this in.

I started by having our faithful associate audio director whack the snare drum while I brought it up in the house to where the level from the PA was approximately the same as what was coming off the stage. Then I started turning the channel delay on the snare until the sound from the PA fell in line with the sound from the stage. Once I had it where I was happy with it, I multi-selected all of the music channels and punched in that delay for the channels.

During rehearsal I sat in the front row and, using the wireless control, punched the delay in and out on all our music channels to A/B it. The difference was very noticeable. Without the delay, it sounded like a double snare hit, with the sound from the downfills arriving before the sound from the snare on stage. Next time, I might try dialing this in from the front row since they’re the ones who really benefit from this. I might also experiment with flipping the polarity on the snare channel and adjusting the delay until things cancel out; when I get the most cancellation, I’ll flip the polarity back. This should work in theory.
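To see why the polarity-flip trick should work, here’s a toy simulation in Python. The "snare hit" and the offsets are entirely fabricated; the point is that when one copy is inverted, the residual energy bottoms out exactly when the trial delay matches the true arrival offset:

```python
def residual_energy(signal, offset_samples, trial_delay):
    """Energy of signal delayed by offset_samples minus the same signal
    delayed by trial_delay (i.e. summed with polarity flipped)."""
    n = len(signal)
    total = 0.0
    for i in range(n):
        a = signal[i - offset_samples] if 0 <= i - offset_samples < n else 0.0
        b = signal[i - trial_delay] if 0 <= i - trial_delay < n else 0.0
        total += (a - b) ** 2
    return total

# A hypothetical "snare hit": one impulse with a little ring.
snare = [0.0] * 64
snare[10], snare[11], snare[12] = 1.0, 0.6, 0.3

true_offset = 7  # stage arrival lags the PA by 7 samples (made-up number)
best = min(range(16), key=lambda d: residual_energy(snare, true_offset, d))
print(best)  # deepest null lands at the true offset: 7
```

In the room you do the same thing by ear: sweep the channel delay while listening for the deepest cancellation, then flip the polarity back.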

Step 4: Playing with the instruments

This was where it got very tricky. The idea was to have the drums delayed so that the bleed into vocal mics was aligned with the signal in the drum mics. The initial challenge was to avoid changing the delay times applied to the drum channels because those channel delays were putting the drums in time with the PA. Changing the channel delays here would throw the PA/drum alignment off. I spent a lot of time thinking about this, and I think I figured out a system that will work. I might try to put together some diagrams to show how this works at some point because I think that might be the easiest way to explain it, but in the meantime I’ll just try and over-explain it all.

The first thing I did was record a short section of the band into Pro Tools while they were still hashing out the songs. Next I measured the time offset between a snare hit in the snare mic and the same hit in the lead vocal mic. Armed with the timing offset, I took the vocalist channel and reduced the delay on that channel by the offset. Since I had already delayed all my music channels when I aligned the drums with the PA, I was now able to bring the vocal forward in time to line it up with the drums. Since the vocal mic “hears” the snare later than the snare mic “hears” it due to the distance between the two mics, reducing the delay time on the vocal channel brought the later snare sound in the vocal mic forward and in time with the close mic on the snare that was delayed to the PA. Are you confused, yet?
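Instead of eyeballing waveforms in Pro Tools, the same offset could be found with a simple cross-correlation: slide one track against the other and keep the lag with the best match. A rough Python sketch with fabricated impulse "tracks" (the 20-sample offset is made up):

```python
def find_offset(reference, delayed, max_lag=48):
    """Lag in samples at which `delayed` best matches `reference`,
    found by brute-force cross-correlation."""
    def corr(lag):
        return sum(reference[i] * delayed[i + lag]
                   for i in range(len(reference) - lag))
    return max(range(max_lag), key=corr)

# Hypothetical tracks: the vocal mic hears the same snare hit 20 samples late.
snare_mic = [0.0] * 100
snare_mic[30] = 1.0
vocal_mic = [0.0] * 100
vocal_mic[50] = 0.4  # quieter bleed, arriving 20 samples later

print(find_offset(snare_mic, vocal_mic))  # prints 20
```

Twenty samples at 48 kHz works out to roughly 0.4 ms, which is the kind of number this channel alignment deals in.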

I did the same thing for the BGVs next. Now, something I want to stop and touch on is a concern I initially had. I wondered if shifting music channels around might add weirdness since our band is on in-ears and they’re all playing to a click; I thought shifting the musicians around in time might screw up the tightness of the band. Most of the offsets between musicians were no more than 3-5 ms, and I didn’t really notice a problem. However, since I can get anal about this kind of thing, I will be monitoring this in the future.

Next I approached the overheads to line them up with the snare. Again, I measured the offset between the snare in the overheads and the snare mic. I again reduced the channel delay on the overheads to bring them in line with the snare mic.

The results of all this were subtle. However, when we A/B’ed the aligned channels vs. our typical setup, the alignment definitely seemed like it tightened the mix up. The great part of it was that it didn’t take long to do. I could probably repeat the channel stuff in about five minutes.

I did investigate doing some more alignment within the drums focusing on the toms and hat, but when I measured things in Pro Tools the offsets were too small to matter: less than 0.1 ms, which was the smallest adjustment I could make since I had the Venue configured for milliseconds instead of samples. I’ll probably eventually take it that far, but I think we went far enough for a first time experiment.
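For context on that resolution limit, converting between milliseconds and samples is one multiplication. A quick sketch assuming a 48 kHz clock (our actual system rate may differ):

```python
SAMPLE_RATE = 48_000  # assumed 48 kHz; scale accordingly for 44.1 or 96 kHz

def ms_to_samples(ms: float) -> float:
    """Convert a delay in milliseconds to samples at SAMPLE_RATE."""
    return ms * SAMPLE_RATE / 1000.0

print(round(ms_to_samples(0.1), 3))  # prints 4.8
```

So the 0.1 ms step in milliseconds mode is still under five samples at 48 kHz; switching the console to sample-accurate delays would only matter for offsets smaller than that.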

This was the first time I’ve ever tried any of this, and I was very pleased with the results. I think the biggest yield was from pushing the center cluster back. Not only did this warm things up at FOH, but it also seemed to smooth our PA coverage out a bit and widen the FOH experience. I wasn’t mixing this week, so it was a perfect week to try all of this because I could freely do a lot of walking around the room, and I was pleased that I didn’t find any spots where this seemed to create problems. Overall, everything we did seemed to tighten the mix up. One of our A1s who was attending this week commented on that after the service, not knowing we had done anything different.

I think the channel stuff I did would have made a bigger difference on a loud stage full of wedges and backline. Since we use a drum shield to help with bleed in the vocal mics and keep our amps offstage, our stage volume is relatively quiet which gives us pretty clean inputs on the FOH console. My guess is this stuff could make a huge difference on a loud stage, so maybe when the Thirsty conference comes in May I’ll try and talk some of the band engineers into trying it out.

One thing I want to add is that when we upgrade our system we won’t be doing system alignment within the Venue. That’s something that really should be handled by dedicated processing, but Venue was the easiest way to experiment tonight. However, I will continue to do the rest of the alignment within Venue for a few reasons:

1.) Our sets change every 2-6 weeks which moves the drums around the stage quite a bit. Pushing the PA back using Venue instead of using the system processors gives me an easy way to adjust this.

2.) Doing it this way gives us an added layer of protection on our system processors. While I don’t want volunteers digging around with our system tuning, this is something I’m confident they can learn and adjust themselves in time.

3.) Venue gives us more control over only pushing select elements backwards in time. I don’t want to back up the sound for playback devices like video or any of the spoken word stuff that happens on stage.

I just want to close out by saying that this is advanced, mad scientist type stuff. This comes secondary to getting great sounds in a traditional fashion using good mics that are properly placed, tasteful use of channel EQ and processing, and a proper mix blend. Don’t worry if you can’t do this right now because if I had to turn it all off we could still get a great mix happening. The alignment stuff we did with instruments this week was very subtle stuff, but it’s exactly like Scovill described it to me: you do lots of little things here and there and eventually it all adds up. In some ways it is sort of a voodoo kind of thing where you wonder if what you’re doing really makes a difference. Then you turn it off and realize you like it better turned on. We could live without this, but while we have the tools at hand it seems like a worthy adventure.


Currently Reading:
Sound Systems: Design and Optimization: Modern Techniques and Tools for Sound System Design and Alignment

David Stagl

2 Responses to “Taking on the Space-Time Continuum”

  • Good stuff Dave. I have thought for a while that many churches could use some expertise in time alignment issues. It seems that any large system needs careful attention in this matter, just from a system standpoint. When I first came on staff where I am now, I spent some time on the first day just listening to the PA with some of my reference CDs. I noticed some significant issues right away. The guy who did the design is a top notch designer, so I gave him a call to get his advice. He came and listened and was shocked; he just didn’t think he would have left it that bad. We spent some time together working through mostly EQ of the system. We are also doing LCR with the center just ahead of the LR boxes. Our initial work made a huge difference. Over the course of the next few months I continued to tweak things, including examining the time alignment of the main boxes. What I found was a few places in the middle of the room where vocal intelligibility would drop out. A simple phase issue between the center and left-right. I had my assistant listen while I adjusted the delay of the center to line it up with the LR and bingo, everything cleared up. My adjustment was only about 0.5 ms, using the BSS Soundweb that is handling all the processing. I was rather surprised that such a small adjustment made a noticeable difference. I then checked all the delay boxes under the balcony and such as well and made a few adjustments. Since then the complaints of not hearing vocals have all gone away.

    I hope to move to a digital console soon so I can do some of the band things you have described. I can see the value in it for sure.

  • My first experience with small shifts in time causing large amounts of phase trouble was working with Pro Tools in the studio. Latency from adding plug-ins to channels could make an enormous difference, and we’re talking a few samples here and there. These days you can get Pro Tools to compensate for all of that, which is nice, but when I was editing in Pro Tools for a living you still had to do it all manually.

    System optimization and alignment is definitely my weakest area right now so I’m going to focus quite a bit on that over the next year to build my chops. The challenge working in a permanent install is that this is usually set and you don’t have to touch it much, but it’s a skill I still need to develop more.