
Year of the System: Lost in Translation or How I Started Looking for the X Curve

So here’s one for the rest of you system optimization junkies like me. I’m trying to solve a dilemma, so I don’t have a lot of answers here, and I’d love some input via the comments section. This is just something I’ve been thinking about a lot lately. The basic issue is that I haven’t been satisfied with the way our audio for video has been translating in our big room when it has primarily been mixed in little rooms, and I’m wondering if there might be a better way to optimize our systems for this. This might ramble a bit, so bear with me.

The whole thought process really started when I was talking with Tom Petty’s FOH guy, Stewart Bennett. As I’ve been building my optimization chops using an FFT such as SmaartLive, I’ve been optimizing systems to get a linear response. For those new to optimization, and without going into the nitty-gritty on what FFTs are, this basically means I’m optimizing my system so that what we hear in the room is a linear representation of what the console is spitting out; this way everything we do on the console translates to the system, giving the FOH engineer complete control at the console. This is the method that Robert Scovill is a big proponent of, and I would say I’ve had a lot of success with it. However, I’m still wondering if there could be a different approach worth trying even though everything we mix live in the room meets our expectations of our systems in their current state.

When I got to hang out with Stewart Bennett, we talked about a lot of different optimization curves that some other heavy-hitter FOH guys like to employ on their systems. Without going into details, some guys essentially make some sort of compensation for Equal Loudness contours which almost always entails some kind of cut per octave happening above 1k. Some of these approaches were also shared by some of my church audio geek friends. I am always up for experimenting, but there is definitely an element right now of “it ain’t broke” in the system so my plan was to hold off until we upgrade before I start experimenting again.

So fast forward a bit to this whole audio-for-video translation issue. Thinking about a different approach to system tuning got me thinking about something called the X Curve which is an optimization curve that movie theaters use. You can do some research of your own if you want to find out about it, but the X Curve basically looks eerily similar to some of the curves described in my discussions with other sound reinforcement guys.

Thinking about the whole crux of what we do, sound reinforcement in a modern church can really be a unique venture compared to a lot of other environments. I don’t know about your church, but within one event or service ours can be required to meet the needs of a rock concert, movie theater, and college lecture in under 30 minutes. Just within these three elements are three different SPL needs with almost 30 dB of dynamic range between the average programming SPLs. In an effort to create a live music experience on par with a concert experience, a lot of emphasis these days gets put into optimizing systems for this, but I’m wondering if the rest of what we do might be suffering as a result and if there’s a way to compromise optimization to create the best experience for everything we do. Ultimately, my goal is to get things that are mixed in the big room for a service or event to translate to a smaller medium such as a video control room or hallway, and then vice versa. And while there is a lot of discussion on the net about room size and the psychoacoustics involved, along with the difference between mixing on nearfield monitors vs. mid- or far-field as it pertains to the post world, nobody seems to want to stamp a systematic approach to setting up for this.

So here are some questions for discussion from some of you other guys who like to play with this sort of thing and have more experience in this area than me. Is there a one-size-fits-all optimization curve that you like to use to get all programming types in the ballpark? Is it better to optimize the system to be linear (flat) and then to make adjustments to individual programming? What have you found works best in your rooms? Does the dynamic nature of what we do within a church service make mix translation outside of our bigger rooms ultimately unrealistic?

David Stagl

7 Responses to “Year of the System: Lost in Translation or How I Started Looking for the X Curve”

  • Jeff Seymour
    16 years ago

    “The whole thought process really started when I was talking with Tom Petty’s FOH guy, Stewart Bennett.”

    That was awesome.

  • Well, I’m definitely not more experienced than you. But I can tell you that we use a linear curve with a slight roll-off above 10kHz. That seems to work for a variety of live audio purposes.

    Anything we do for video/radio is recorded pre-EQ, pre-dynamics to allow us to completely customize it for each target audience. I really believe in EQing mics as little as possible for the video. The MKE-2 we use is good to go. The Shure handhelds do pretty well flat too. The e6i needs some touch-ups, though.

  • Our services are recorded live to DVD. Nothing special, just the console outputs going to the audio inputs on the video recorder. And listening back to the DVDs, I’ve noticed things like the high and low frequencies do not translate as well when I’m watching on video. So I went back and applied the changes I would have made in mastering the video to my mixes. Then when I was walking around the room to see how my mix translated across a large area, it sounded much better.

    So maybe you should try listening back to several of your service recordings and make an average mastering EQ on all of them, then take that and add that to your current room EQ.

    Or, I’d opt for flat response, and then make individual adjustments.

  • R Clark
    16 years ago

    I have always found that a flat response is the best way. It gives you a starting place when you go to run sound check: if you have a flat system first, then you can adjust each individual channel accordingly. Sometimes I use a C weighting to get a little of the equal loudness curve; it helps out on the top end. I have heard of guys that actually boost everything above 4k to compensate for the C, A, or X curve drop in the high frequencies. I don’t recommend this since you will rip some heads off.

    In all of the recordings we have done, it seems that there is a real lack of low end transposed to the DVD. An associate of mine came from the recording world, and he takes the mix that usually goes to DVD and “masters” it, dropping overall problem frequencies and boosting some others. It really brings out a lot in the recorded mix. You might be able to adopt this philosophy when it comes to your matrix output to the VTRs: compress it a little and EQ it the way you feel the DVD overall needs to change. I don’t think that you can ever get a true mix when it comes to a live venue vs. a studio. Two different beasts with too many variations.

  • This reminds me of a scene in the recent movie “Once.” The band records a bunch of songs in a studio where the tracks are also mastered. When they were done, the engineer said, “now let’s give it the [crappy] car test.” So they all got into the engineer’s junky car and listened to the new mix tape to see how the songs came out when played in that environment.

    All-in-one seems like the holy grail that doesn’t exist, because it’s best that it doesn’t exist. Seems like the question to ask yourself is, “how good is good enough?”

  • Phillip Graham
    16 years ago

    Dave,

    Hey again from Midtown. My thoughts on “non-flat” EQ curves are below. But first, it’s important to understand where the X-curve came from. The X-curve (curves, really) was a result of trying to compensate for the additional high frequency absorption in the reverberant field of large rooms when measuring the in-room response with a time-blind RTA analyzer.

    Since the integration time constant of the RTA was always long enough to include reverberant field effects at all frequencies, and since large rooms naturally tend toward more air and/or wall absorption, or diffusion events leading to absorption, the in-room response tuned to a flat RTA curve was far too bright.

    It’s also important to remember that the X-curve is applied in the mixdown space as well as the playback space, so in a sense it is completely factored out of the mix process. I think the X-curve was an elegant solution for the measurement limitations of the day, but it is no longer really necessary.

    —-

    Now, what are my thoughts on non-flat equalization curves? Again, this is not a simple process because, as with the X-curve above, the degree to which the room influences the balance is going to depend on the nature of the measurement system involved. With modern analyzers, you can look at the time domain and include as little or as much of it as you feel appropriate.

    In the case of Smaartlive, the fixed point per octave (FPPO) process is elegant because it provides short windows at high frequencies, essentially hiding the room’s absorption effects, but uses longer windows in the mids and lows, including more of the room effects. So a “flat” curve in the typical Smaartlive situation is excluding the HF absorption effects that plagued the RTA analyzer used with the X-curve.

    Now, if you are still along for the ride, the power response of real loudspeakers is not constant with frequency, because their directivity is not constant with frequency. For the on axis response of a system to have balance, the low and mid power response must be greater, because a lot of that energy is spilling around the box, and radiating out in directions other than the axial direction. So, if you were to apply heavy time windowing to the low end response of the loudspeaker, essentially only looking at the direct field response, and THEN only look at the axial response, you could end up with a substantially low/mid tilted global frequency response balance.

    The FPPO response windowing in Smaartlive includes more of the room in the range where the room’s acoustic behavior matters the most, and in the range where real-world audio systems have less forward directivity control. This, I think, was Sam B’s best insight when he created Smaart.

    —-

    So, with that discussion of how the measurement system will influence the perception of the total system transfer function, back to the original question of the non-linear system response curve. One place that I feel the system response should almost always be “non-linear” is in the range below about 100-120 Hz. Sound systems tailored for an extra 6-10 dB of output in the lows are almost always appropriate for modern mixing. The trick to this extra low end is to have it smoothly transition over the 60-120 Hz octave so it does not become boomy or muddy.

    Now, for the rest of the spectrum, I think the system response behavior can be tailored towards “very linear” at low to moderate volume levels. If the system is going to be used at 95 dBA or below, I prefer to leave a linear transfer function and use the mixer to compensate for any overly bright instrument sources. Above 95 dBA (or so) I start to feel the need to apply additional shaping equalization to the baseline “flat” transfer function. If I don’t do this additional shaping equalization, I feel like I am using most of the board EQ to do the same types of equalization on most every channel.

    Shaping equalization typically goes about like the Robinson-Dadson or ISO equal loudness contour curves, which have a broad, smooth increase in sensitivity from about 1 kHz to 8 kHz. In general I start at 4 kHz and spread out to each side. Above a certain HF cut level, I find the octave centered around 400 Hz may need some minor tweaking to not be “honky.”

    Above 8 kHz, I like to leave the sparkle. That can mean anything from a few dB of shelving boost to a few dB of cut, depending on the room/system. Also, it seems that some systems have one particularly prominent tone in that top octave, so sometimes a single parametric cut up there is in order. Play some pink noise, sweep the top octave, and see if it clears out a “spike.”

    Ideally the above equalization could be implemented with something like a BSS 901, so that the cuts were removed at moderate levels, retaining a “flat” curve at lower levels. I definitely think dynamic equalization has a place in system tuning for live sound applications.

    Whew, pretty long!
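The shaping Phillip describes (an extra 6-10 dB in the lows transitioning smoothly over the 60-120 Hz octave, plus a broad equal-loudness-style dip spreading out from 4 kHz) can be sketched numerically. This is only an illustration with assumed numbers and a made-up `target_curve_db` helper, not his exact curve:

```python
import math

def target_curve_db(f, low_boost=8.0, hf_cut=-4.0):
    """Illustrative system target curve (dB) at frequency f (Hz).

    Assumed shape, not an exact recipe: a low shelf that fades
    from full boost at 60 Hz to flat by 120 Hz, plus a broad dip
    centered at 4 kHz that tapers off to each side.
    """
    # Low shelf: full boost below 60 Hz, fading linearly (in octaves)
    # to 0 dB by 120 Hz
    octaves_above_60 = math.log2(max(f, 1.0) / 60.0)
    shelf = low_boost * min(1.0, max(0.0, 1.0 - octaves_above_60))
    # Broad HF dip: Gaussian in log-frequency, centered at 4 kHz,
    # roughly two octaves wide on each side
    dip = hf_cut * math.exp(-(math.log2(f / 4000.0) ** 2) / (2 * 2.0 ** 2))
    return shelf + dip

for f in (50, 100, 400, 1000, 4000, 8000, 16000):
    print(f"{f:>6} Hz: {target_curve_db(f):+.1f} dB")
```

Plotting or printing a curve like this before committing it to a processor is a cheap way to sanity-check that the low-end transition really is smooth rather than a step.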

  • Well, I played with this a bit this week. I initially did about a 3 dB per octave slope starting at 1k up to 8k. So 2k is down 3 dB, 4k down 6 dB, and 8k down 9 dB. After rehearsal, I ended up splitting the difference, so it was probably a pretty slight roll-off; I didn’t measure it when I did it.

    I will say this: when I had the PA set up in non-linear fashion from 1-8k, I found a lot of what I was doing on individual inputs more closely resembled what I do in the studio, a theoretically linear environment. But I haven’t had a chance to listen back to Sunday’s board mix to see how that translated. I was very happy, though, to need less de-essing and much less EQ on our video playback channels.

    One last point is that putting a slope into the system tuning was not easy. I find it much easier to tune a system in a linear fashion, but there can be benefits to adding a slope in. I’m just not sure at this point if it’s worth the trouble.
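The dB-per-octave arithmetic in the comment above (3 dB/octave from 1 kHz: 2 kHz down 3 dB, 4 kHz down 6 dB, 8 kHz down 9 dB) reduces to a simple log2 formula. A minimal sketch with a hypothetical `slope_cut_db` helper, assuming the response is flat below the start frequency and the cut holds past the stop:

```python
import math

def slope_cut_db(f, slope_db_per_octave=3.0, start_hz=1000.0, stop_hz=8000.0):
    """Cut (dB) at frequency f for a constant dB-per-octave tilt.

    Mirrors the arithmetic above: each doubling of frequency past
    start_hz adds slope_db_per_octave of cut, up to stop_hz.
    """
    # Clamp f into the sloped region, then count octaves above the start
    octaves = math.log2(min(max(f, start_hz), stop_hz) / start_hz)
    return -slope_db_per_octave * octaves

for f in (500, 1000, 2000, 4000, 8000, 16000):
    print(f"{f:>6} Hz: {slope_cut_db(f):+.1f} dB")
```

"Splitting the difference" after rehearsal would just mean halving `slope_db_per_octave`; the formula makes it easy to see what any adjustment does across the whole 1-8k range.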