# Gain Structure for Home Theater: Getting the Most from Pro Audio Equipment in Your System



## Wayne A. Pflughaupt


Part 2: Is a Pro Audio Gain Structuring Procedure Relevant to Home Theater? 
Part 3: Consumer vs. Professional Signal Levels and Relevant Noise Floors 
Part 4: Why Gratuitous Signal Boosting Won’t Increase Dynamic Range  
Part 5: Myths and Legends Concerning Digital Equalizers and Processors 
Part 6: About Professional Amplifiers 
Part 7: How to Determine if Your AVR will Drive a Professional Amplifier  
Part 8: How to Determine if Your Equipment Maximizes Dynamic Range 
Part 9: How to Perform an Optimal Gain Structure 

Downloads for Gain Structure Reference Test Signals

Back in the days of stereo, things were pretty simple. The typical hi-fi system was anchored by an AM/FM receiver, a pair of speakers, and source components such as a turntable, cassette deck and/or CD player. Serious audiophiles went with a separate pre amp and power amp, and perhaps a few more source components, but that was usually as complicated as things got. 

Today many home theater enthusiasts use an outboard equalizer between their receiver and manufactured subwoofer, and those with high-performance DIY subwoofers may add a high-powered amplifier as well. Some ambitious hobbyists even go so far as to build their own active speakers, with a dedicated amplifier channel for each driver and an electronic crossover or digital speaker processor handling the frequency-dividing between the tweeters and woofers.

Due to limited demand, components such as crossovers, equalizers, speaker processors and high-powered amplifiers are virtually non-existent in the product lines of home audio manufacturers, so many enthusiasts turn to the professional audio market to find solutions to their needs.

However, pro gear is a different animal from home audio equipment, and indeed something of a mystery to many home theater enthusiasts. Parallel mode; gain controls; switchable limiters and filters; MIDI; AES/EBU; balanced or unbalanced connections that use TRS, TS, XLR, Phoenix, or barrier strips - small wonder that a certain amount of misinformation floats around the home audio forums on how to integrate professional and consumer components into a cohesive system. 

One of the significant differences between home and pro audio equipment is their respective mean operating signal levels. Specifically, the industry standard for professional gear is considerably higher than that of home audio components. So naturally whenever the subject of pro gear comes up on the home audio forums, the issue of mismatched signal levels usually enters the discussion as well. Often it’s not a pretty picture – a lot of conflicting and unreliable information, frequently given by ultracrepidarians with minimal experience in professional audio. 

Level-matching is an issue in the pro-audio realm as well, since there are differences in headroom capacity and noise floors in each piece of hardware in a professional sound system. Compensating for these differences in the signal chain in order to maximize overall system performance is known in the industry as *gain structure*. It would seem logical, on the surface at least, that if we’re going to include professional gear in a home theater system, gain structure is something we should be concerned with as well.


*What is gain structure?*
Thanks to the Internet, it’s not a problem digging up information on gain structure as it relates to the pro audio field. But you will probably find that the more you read up on the subject the less sense it seems to make, especially trying to figure out where home components fit into the picture. Don’t feel alone; I started researching this piece more than three years ago but kept running aground on that very point.

It certainly doesn’t help that the material is all over the map; i.e., there often isn’t a lot in common from one article to the next. This is because different essays cover the subject as it relates to different fields, disciplines or applications – digital recording, DSP devices, live-sound PA systems, even car audio. Fortunately, we can narrow down the focus: since a home theater is a type of sound reproduction system, the material on PA systems most closely relates to our application. You can ignore the rest. (Hopefully none of us are using any car audio gear in our home theaters.)

The objective of a properly-executed gain structure in a PA system, according to most thinking on the subject, is to align the signal-level (gain) settings of all components in the system so that they reach distortion at the same point. The graphs below show various pieces in a typical PA system before and after a gain structuring process; note the differences in headroom and noise floors the various components have. After appropriate gain calibration, the system’s dynamic range increases considerably.

*PA System Before Gain Alignment*
Courtesy of ProSoundWeb


*PA System After Gain Alignment*
Courtesy of ProSoundWeb

When it comes to adding professional gear to a home audio system, the fly in the ointment is their vastly different signal ranges. The mean operating signal level of pro gear is +4 dBu, while the average for consumer equipment is -10 dBV. The dBV and dBu values aren’t readily interchangeable, but they can be converted to a common standard, Vrms. The consumer -10 dBV translates to 0.316 Vrms, while the professional +4 dBu translates to 1.228 Vrms. So you can see that the professional signal reference is nearly four times higher than the consumer one. In real-world use this translates to a difference of 11.79 dB, usually rounded to 12 dB (though figures as high as 14 dB sometimes circulate).
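The dBV-to-dBu arithmetic above is easy to check for yourself. Here is a minimal sketch in plain Python, assuming the standard references of 0 dBu = 0.775 Vrms and 0 dBV = 1.0 Vrms:

```python
import math

DBU_REF = 0.775  # 0 dBu = 0.775 Vrms (derived from 1 mW into 600 ohms)
DBV_REF = 1.0    # 0 dBV = 1.0 Vrms

def dbu_to_vrms(dbu):
    # A dB value is 20 * log10 of a voltage ratio, so invert with 10**(dB/20)
    return DBU_REF * 10 ** (dbu / 20)

def dbv_to_vrms(dbv):
    return DBV_REF * 10 ** (dbv / 20)

pro = dbu_to_vrms(4)         # professional nominal level: +4 dBu
consumer = dbv_to_vrms(-10)  # consumer nominal level: -10 dBV

print(round(pro, 3))       # 1.228 Vrms
print(round(consumer, 3))  # 0.316 Vrms
print(round(20 * math.log10(pro / consumer), 2))  # 11.79 dB
```

Run it and the 11.79 dB gap falls right out, which is why the commonly quoted "12 dB difference" is the figure to trust.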

Since consumer and pro equipment have such a tremendous difference in average signal levels, it should be immediately obvious that the chance of achieving a textbook pro-audio-styled gain structure in a mixed system is nil. Nevertheless, a cadre of home audio enthusiasts presume (beyond reason) that a paradigmatic gain structure is attainable in a mixed system. It merely requires keeping signal levels as high as possible throughout the signal chain. This, we are told, is the way to optimize dynamic range and signal-to-noise levels. Here are a few typical examples lifted from various home theater forums:



> The equalizer needs an input of +22 dBu (9.75v) in order to use its entire dynamic range. Anything less diminishes dynamic range and lowers the final signal-to-noise ratio. If you can't get that high a signal you need to add a line level converter.





> You need something like the Samson S-Convert to get the levels up.





> There aren't that many pro amps that can work for home theater, because pro works on +4 signal levels.





> As you can see when you don't use a full-strength signal, the dynamic range available above the noise floor is lessened.


Unfortunately, the maximum-levels canon is ill-advised and naive. The truth is that going to extraordinary lengths to max out the low-level signal fed from a home theater AVR to downstream professional gear, especially by using external signal-boosting devices, is often unnecessary and can even be _detrimental_ to achieving optimal dynamic range. As it turns out, gain structure suitable for a mixed pro/consumer system is relatively easy to attain and seldom requires anything in the way of extraneous signal manipulation.

It would be nice if this piece could be wrapped up with that statement, but I expect it will not satisfy our more technically-inclined readers. Unfortunately, it’s a long and tortuous path arriving at that conclusion, so strap yourself in. I apologize for the length of this, but I figure if you’re going to scorch the “max level” sacred cow you’d better have the documentation to back it up. Plus I felt those inexperienced in audio, especially pro audio, would benefit from and indeed appreciate detailed explanations in “plain-speak.”

However, those who aren’t interested in the “whys and wherefores” and just want to know how to gain-structure their mixed system can skip to Part 9, and perhaps Part 7 for info on determining if their AVR will drive a pro amp.


----------




*Part Two

Is a Pro Audio Gain Structure Protocol Relevant to Home Theater?*​

*When two worlds collide*
The idea of maximizing signal levels in a hybrid signal chain is based on at least a couple of misconceptions, one being that high levels are what professional digital processors require in order to operate properly. That will be fully addressed in Part 5 of this piece. For now we’ll focus on the broader misconception, which assumes that high signal levels are what gain structure protocol in general requires. While that may be the case with PA systems, I submit that where home audio is concerned it’s a misinterpretation of the available subject-matter material.

One of the better articles on the gain structure topic, which deals specifically with live-sound PA systems - again, the material-type most relevant to home audio - is Rane’s *Setting Sound System Level Controls*. In it you will find this passage:

_“Summary: Optimum performance requires correctly setting the gain structure of sound systems. The proper method begins by taking all necessary gain in the console, or preamp. All outboard units operate with unity gain, *and are set to pass the maximum system signal without clipping.”*_

“Maximum signal” – says it right there, doesn’t it? Not so fast.

What is not considered is this: Rane’s article, like every other you’ll find on gain structure, assumes the sound system is using professional gear from top to bottom. That’s right - _there is no mention or inclusion of any kind of consumer equipment in the signal chain._

Why is that important? If home audio equipment was being used in PA systems, it would have to be accounted for in any essay on gain structure. Would it not? 

Unfortunately, you will not find anything on gain structure that includes home equipment in the equation. This leads us to the inevitable fact that *the available subject-matter material simply does not - and indeed cannot - translate well to a mixed pro/consumer system.* And this is what the “keep the levels high” advocates have overlooked.

At least a couple of relevant examples can be cited to prove the point.

For one, it is noted elsewhere in the Rane article that in a live sound situation the primary concern is not dynamic range, but headroom. The reasoning offered is that dynamic range is mainly determined by ambient noise levels in a venue, such as the HVAC system and crowd noise, and therefore is beyond the sound engineer’s control. However, headroom _can_ be controlled. And headroom is critical in a live situation, because persistent clipping and distortion throughout a show is the “kiss of death,” virtually assuring the sound company that a paying client will be lost.

This is one-hundred-percent _backwards_ from our priorities in a home theater. For us dynamic range is critical, since our rooms have relatively low ambient noise levels. Meanwhile, headroom is practically a non-issue, easily achieved with all but the most inexpensive equipment.

Another example: The Rane piece notes (as does all other subject-matter material) that proper gain structure begins right up front in the signal chain, by adjusting the level for each input to the mixing console (which is the equivalent of an AVR or pre amp in a home system). The situation in live sound is that everything plugged into the console has a different signal strength. For example, some mics deliver hotter signals than others; the variance from one make and model to the next can be pretty extreme, especially with phantom-powered vs. passive mics. The line outputs of a CD player (that might be used to provide background music before a show starts) will be a hotter signal than most microphones. Guitars with active (battery-powered) pickups will have a hotter signal than those with passive pickups. And so on. The idea is to use the gain control provided for each input of the mixing console to maximize each source’s signal-to-noise properties.


*Simple audio mixer with input gain controls*
Courtesy of Alesis, LP

Once again, this exercise has absolutely no relevance to home audio systems. There is no need for tweaking mismatched levels of the various source components connected to our receivers or pre amps. Why? Because _there are no mismatched levels._ Consumer equipment has a fairly universal standard for signal levels from top to bottom. It extends from the source media, to the components that play back the media, and on down the line. This is why you virtually never see input gain controls on receivers or pre amps, and only occasionally on amplifiers: It’s taken for granted that they will never see a huge variation in signal levels, such as a +26 dBu signal from a professional mixing console, for example, or a few-millivolt signal from a passive microphone.

Now that we know why the reference material on gain structure doesn’t translate to residential audio systems, we can turn our attention to examining other issues that discount the relevance of the maximum-signal edict.


----------




*Part Three

Consumer vs. Professional Signal Levels and Relevant Noise Floors*

In order to understand how an ideal gain structure is accomplished in a mixed pro/consumer system, with or without an outboard boosting device, one must fully comprehend and recognize the dynamic range and background noise capabilities of professional and consumer equipment, as well as their nominal operating ranges. We’ll begin with the latter.

*Understanding consumer vs. professional signal levels*
To get a better appreciation of how consumer vs. pro signal levels relate to the equipment involved, we can turn to the popular Behringer DSP1124 and FBQ2496 Feedback Destroyer (BFD) equalizers as an example. 

Anyone who has used these equalizers knows they have rear-panel switches to accommodate either professional +4 dBu or consumer -10 dBV nominal operating ranges. Both ranges can be thought of as an _internal gain structure._ Since there is a 12+ dB difference between the two, it can be difficult to optimize a single gain structure for both ranges. In other words it’s hard for a processor to deliver, at the same time, the best headroom and the lowest noise floor from a single internal gain structure. The higher-level (professional) +4 dBu range will have the best headroom, while the lower (consumer) -10 dBV range delivers the quietest noise floor. (Side note: The range settings on an SPL meter operate the same way, providing a balance between the mic pre amp’s noise level and the expected volume of the signal being measured.)

Fortunately, electronics technology has improved to the point that pro manufacturers can largely stick with the +4 dBu gain structure and maintain an acceptable noise floor. This is why, aside from the BFD and similar low end pieces, you seldom see +4 dBu / -10 dBV switches on pro audio processors anymore.

The graph below shows the relative difference between the operating ranges of professional and consumer audio gear. However, the thing to keep in mind about -10 dBV vs. +4 dBu is this: They are not “fixed” or steady-state output figures. They are _average_ (or mean) signal levels. In actual use signal levels swing far above and below those figures, in both consumer and pro systems. For instance, signals from a pro mixing console can get as high as +26 dBu. And signals from a home receiver, pre amp, etc., can peak above +4 dBu.


*Comparative Professional vs. Consumer Reference Levels*
Courtesy of Rane Corporation


*A progressively shrinking noise floor*
Now that we're familiar with pro and consumer nominal gain structures, we can explore their relationship to signal-to-noise and dynamic range. 

Signal-to-noise relates to the _residual or inherent noise floor_ a component will have with no signal present, as you would have from an unused input on your receiver. This is referred to in home audio specifications as _signal-to-noise ratio,_ or simply S/N. Generally, what a spec’d figure of 95 dB (for example) means is that the component’s noise floor is 95 dB below its maximum output.

By comparison, dynamic range refers to the “distance” between a component’s inherent noise floor and, in the case of a pre amp, the maximum signal it can generate before distortion. Or in the case of a processor connected between a pre amp and amplifier, the maximum signal it can pass without distortion. 

The two are necessarily related: obviously a component with a poor S/N ratio (excessive background noise) will have less dynamic range, because its noise floor is higher. On the other hand, two components with identical noise floor levels can have different dynamic ranges, if their capacities for _high-level_ signals are not the same.

Since dynamic range is inseparably linked to background noise levels, it stands to reason that one cannot hope to maximize the dynamic range of a hybrid home theater system if the residual noise levels of the _individual components_ are a “big unknown.” Each component inserted into the signal chain has the capacity to add noise and/or reduce dynamic range and headroom. But this can be tricky to ascertain. Anyone who has looked up the noise specs for pro and consumer gear has probably noticed that it’s not immediately obvious how to compare them. Pro gear typically uses a specification called “noise” or “hum and noise,” while consumer gear typically uses the signal-to-noise ratio spec we’re more familiar with.

A pro audio “hum and noise” spec of 96 dB may appear to be virtually equivalent to a consumer 100 dB “S/N” spec, but it isn’t. What’s not widely known is how these figures are derived. With pro gear, noise specs are referenced to the nominal +4 dBu level (IEC 60268-17, although sometimes 0 dBu in Europe). Strangely, there is no standardized reference with consumer audio equipment, but -10 dBV or -6 dBu are commonly used. In both professional and consumer equipment, the specifications are calculated with no input signal.

So remembering that the -10 dBV internal gain structure delivers the quietest noise floor, it would seem that consumer gear in theory _should_ be much quieter than even the best professional gear. The graph below compares the noise floors of consumer and pro audio equipment. The overlaid box showing the consumer signal range has the same "dynamic range" (i.e. peak-to-noise-floor distance) as the pro signal range “S/N 95 dBu” indicator.
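One way to put the two spec styles on equal footing is to convert both noise floors to the same absolute unit. The sketch below (Python; the 96 dB and 100 dB figures are the hypothetical specs from above, and 2.21 dB is the standard offset between the dBV and dBu references):

```python
def dbv_to_dbu(dbv):
    # 0 dBV (1.0 V) sits 2.21 dB above 0 dBu (0.775 V),
    # since 20 * log10(1.0 / 0.775) is approximately 2.21
    return dbv + 2.21

# Pro "hum and noise" of 96 dB, referenced to the +4 dBu nominal level:
pro_noise_dbu = 4 - 96                      # -92 dBu absolute

# Consumer S/N of 100 dB, referenced to the -10 dBV nominal level:
consumer_noise_dbu = dbv_to_dbu(-10) - 100  # about -107.8 dBu absolute

print(pro_noise_dbu)                 # -92
print(round(consumer_noise_dbu, 1))  # -107.8
```

On paper, then, the consumer component's noise floor sits some 16 dB below the pro unit's — exactly the theoretical advantage the following sections whittle away.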


*Comparative Baseline: Professional vs. Consumer Noise Floors*

However, in the real world there is not the tremendous difference in the background noise levels of home and pro gear that logic would suggest. The reason is that other factors come into play that “whittle down” the advantage that home gear theoretically should have. 

For instance, aside from the differences in reference levels, there is another glaring disparity in how consumer and pro-audio noise specs are calculated. Look closer and you’ll see the dB figures quoted usually have additional “qualifiers” attached: dBu (unweighted) for pro audio, and IHF A-network for home gear (also rendered IHF-A weighted or simply A-weighted). The IHF-A qualifier the consumer industry typically uses is an A-weighted spec, which is a less-rigorous standard than the unweighted spec most pro audio manufacturers adhere to. As we can see in the graph below, an A-weighted curve rolls off the upper and lower frequencies. By comparison, an unweighted spec is a flat-response reference with no roll-off at the bottom or top end.


*A-Weighted Response Curve*
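For the technically curious, the A-weighting curve is defined analytically (the standard formula from IEC 61672). This sketch evaluates it at a few frequencies to show how heavily low-frequency noise such as power-supply hum is discounted:

```python
import math

def a_weight_db(f):
    """A-weighting relative response in dB at frequency f (Hz), per IEC 61672."""
    f2 = f * f
    ra = (12194**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00  # +2.00 dB normalizes to 0 dB at 1 kHz

# 60 Hz hum is discounted by about 27 dB, while 1 kHz passes at 0 dB
for f in (60, 100, 1000, 10000):
    print(f, round(a_weight_db(f), 1))
```

So a component whose residual noise is dominated by hum can post a much prettier A-weighted number than an unweighted measurement would allow.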

The problem with using an A-weighted curve to obtain a signal-to-noise spec, obviously, is that it “ignores” upper- or lower-frequency noise that may be present. So if a component happens to generate a bit more hum than it should, and this problematic component can only muster an unweighted spec of 88 dB, A-weighting would allow the manufacturer to “honestly” bump the figure up to a more respectable 94 dB. This is merely a hypothetical example of a component that’s only a bit worse than it should be; in reality A-weighting can “improve” a noise spec by as much as 10 dB.

So we can see that the A-weighted reference home audio manufacturers use can substantially narrow the noise-floor advantage that home gear should have over pro. The graphs below compare the theoretical noise floor of home equipment (previous graph) to a more realistic A-weighted floor.


*Consumer Baseline vs. A-Weighted Noise Floor*

And it gets worse. The noise-floor advantage home theater receivers enjoy takes an additional hit when the receiver is not in a “straight” or “bypass” mode. Engaging digital processing adds noise, which diminishes the S/N advantage of an AVR even further. The Rotel RSX1057 is a rare receiver that lists S/N both in analog bypass and with Dolby Digital and DTS engaged, and the processed spec is 3 dB worse. Imagine what equipment with a lesser pedigree will deliver.



*Consumer Baseline vs. A-Weighted + Digital Processed Noise Floor*


----------




*Part Four
Why Gratuitous Signal Boosting Won’t Increase Dynamic Range​* 
*Two scenarios for signal boosting*
We’ve established that for all practical purposes the background noise levels of most home theater receivers and pre amps may be only marginally better than that of good-quality pro gear. That knowledge will help us understand why artificial signal boosting using devices like the ART Clean Box or Aphex 124-A won’t necessarily increase dynamic range or improve the noise floor. Let’s drop a pro audio processor into our low-level signal chain and see what happens. 

There are only two possible scenarios. The first assumes that your AVR or pre amp is a first-class piece of equipment that has a lower noise floor than the pro-audio processor you’ve added to your system. The situation here is, if you’ve introduced a processor to your signal chain and it has a higher noise floor than the rest of your system, then quite naturally _its noise floor now determines that of your entire system._ Noise levels can never be lower than the weakest link in the chain, and boosting the output of the quieter AVR is not going to change that. If a processor gives you objectionable background noise with the system at idle (i.e. no signal present), then that’s just the way it is. _There is no signal-level manipulation you can do that will change that._ You’ll either have to live with it or get a better processor (which is exactly what this fellow found out).

Nevertheless, the persistent audiophile may desire every bit of dynamic range he can squeeze, possibly by using a signal booster. But remember, even the high-end AVR or pre amp may not be tremendously quieter than the processor. You’re not merely kicking up the AVR’s maximum signal. You’re actually shifting its _entire operating range,_ including its noise floor. Start boosting the AVR’s output by extraneous means and at some point _its noise floor is going to exceed that of the processor._ Due to its +4 dBu gain structure, the headroom capacity of the pro processor is far greater than that of the typical AVR. It’s nuts to think you can boost the AVR’s signal to match the processor’s peak capabilities without realizing a penalty at the other end: an increased noise floor. If your artificial boosting results in a higher noise floor, you have _lost_ dynamic range, not increased it.
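The penalty is easy to see with some toy numbers. The sketch below is hypothetical throughout (all dBu figures are illustrative, and the max() is the "weakest link" simplification rather than a true power sum of the two noise sources). It assumes the downstream amp is turned down to hold playback level constant, so boosting shifts only the AVR's contribution:

```python
def effective_dynamic_range(program_peak_dbu, avr_noise_dbu,
                            proc_noise_dbu, boost_db=0.0):
    # Boosting the AVR raises its noise floor along with its signal.
    # With downstream gain reduced to keep playback level steady, the
    # program peak stays put and only the noise floor can move up.
    system_noise = max(avr_noise_dbu + boost_db, proc_noise_dbu)
    return program_peak_dbu - system_noise

# Hypothetical figures: +8 dBu program peaks, AVR noise at -100 dBu,
# processor noise at -92 dBu.
print(effective_dynamic_range(8, -100, -92))               # 100 dB
print(effective_dynamic_range(8, -100, -92, boost_db=14))  # 94 dB
```

With no boost, the noisier processor sets the floor and you get the full 100 dB. Boost the AVR by 14 dB and its shifted noise floor overtakes the processor's, costing 6 dB of dynamic range.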



*A-Weighted + Digital Processed Consumer Noise Floor, Boosted*

In addition, there is no automatic benefit to sending a hotter signal to the amplifiers downstream from the processor. If they are professional amps they have their own gain controls and don’t necessarily need the hotter signal (more on this in Part 6). But home audio amps, if you’re using them, will easily clip with even a low volume-control setting on the receiver that has had its output signal boosted. If you have to limit your AVR volume setting because you’re clipping your amplifier’s inputs - well, that kinda nukes any "dynamic range" the signal booster got you, doesn’t it? 

The second scenario assumes just the opposite of the first, that your AVR or pre amp is not as quiet as the pro audio processor. Well, it should be obvious that boosting the signal level of a _noisy_ component is going to do nothing to increase a system’s dynamic range. 

The conclusion should be apparent: Since dynamic range is the “distance” between the maximum undistorted signal and the noise floor, in a mixed pro/consumer multi-component system these factors will be determined by the _weakest links in the signal chain._ The pro audio processor will generally determine the noise floor, while the AVR will determine the maximum signal strength. 


*Comparative System Noise Floor / Signal Output*

Yes, it’s true that the home theater signal chain does not utilize the full peak-signal capability of a pro processor. But that doesn’t mean we’re limiting the processor's dynamic range. It only means we don’t _need_ all of it.

(It should be noted that noise issues pertain mainly to the main channels, not the subwoofer signal chain.)


*But what about all those gain structure articles that say maximum signal strength is mandatory?*
Now that we have some background, we can re-visit Rane’s Setting Sound System Level Controls article and see how such material is misinterpreted for consumer audio applications. Here again is the passage quoted previously in Part 2: 

_ All outboard [processors] operate with unity gain, and are set to pass the maximum system signal without clipping._

Please notice, the passage merely says that the processor should be able to _pass_ the maximum signal strength. That is not the same thing as saying the processor must _receive_ the absolute maximum signal it can take. 

This passage from elsewhere in the Rane paper sums things up nicely:

_It is the [mixing] console's (or preamp's) job to add whatever gain is required to all input signals._ [NOTE: As established previously, adjusting input signals is a non-issue with home audio source components.] _After that, all outboard compressors, limiters, equalizers, enhancers, effects, or what-have-you need not provide gain beyond that [which may be] required to offset any amplification or attenuation they may provide [as a result of their specific function]._

_Again, the rule is to maximize the S/N through each piece of equipment, thereby maximizing the S/N of the whole system. And that means setting things such that your *maximum system signal goes straight through every [processor] without clipping.*_

In other words, the system pre amp is what ultimately determines signal strength, and as long as downstream processors can pass _the highest-expected signal without clipping,_ maximum system dynamic range and S/N has been achieved. Even if the “highest-expected signal” is low-level from a home theater pre amp. 
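In practical terms, the only check that matters is whether the processor's maximum input level clears the loudest signal the pre amp will ever send it. A trivial sketch (the +22 dBu and +8 dBu figures are merely illustrative):

```python
def headroom_db(processor_max_input_dbu, expected_peak_dbu):
    # A positive result means the processor passes the system's loudest
    # signal without clipping -- which is all gain structure requires.
    return processor_max_input_dbu - expected_peak_dbu

# e.g. a pro processor that clips at +22 dBu, fed by an AVR whose
# peaks top out around +8 dBu, still has 14 dB of headroom to spare
print(headroom_db(22, 8))  # 14
```

Unused headroom above the peak is simply spare margin, not lost performance.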

When all is said and done, it’s as simple as that. Feed a signal from a consumer pre amp to a professional processor and you can expect it to show low meter readings. That’s just the way it is. 

And despite the rhetoric that floats around about what a bad thing that is, our next section will show that low signal levels to processors, even digital processors, actually do not matter at all.


----------




*Part Five
Myths and Legends Concerning Digital Equalizers and Processors​*
Ever since the popular Behringer Feedback Destroyer (a.k.a. BFD) attracted the attention of home theater enthusiasts more than a decade ago, the conventional wisdom has been that its input signal level should be as high as possible. This protocol _de rigueur,_ put forward by people (I expect) with more knowledge and/or training in electronics theory than real-life experience in pro audio, is supposed to ensure full resolution of the equalizer’s A/D converters, which in turn will simultaneously maximize dynamic range and minimize background noise levels.

As we shall see, this presumption is flat-out wrong. The fact is, modern 24-bit digital EQs and processors couldn’t care less about the signal levels they receive. They’ll work just as well with a consumer -10 dBV or professional +4 dBu signal. As a matter of fact, they’ll work perfectly fine with signals at the _low range_ of either gain structure. But you’d never know it from the advice that floats around the various home theater forums:



> With this in mind you need 0dBFS from a digital source to be input into the Behringer DCX2496 at 9.75 Vrms (+22 dBu) to get the entire dynamic range available on the medium. If you can't get that high a voltage you need to raise the DCX's input level, or add a line level converter.





> The point of the S-Convert [signal booster] is to get the analog level high enough to drive the digital DCX-2496 to near 0dBFS and get the most resolution out of the A/D converters and the highest signal-to-noise ratio. You can't get it back if you lose it there.





> Like any digital device, the BFD will offer the best results if you feed it a proper level that uses all the available bits. If the loudest signal only enables half those bits, then the quiet passages will be in the noise.





> As you decrease the input signal, the BFD’s noise will rise exponentially. Once the least significant bit and other low order bits are gone as a result of the low input voltage, the A/D converter is no longer a 24 bit device, but a lower bit device, producing higher noise figures.


However, after acquiring a BFD and spending some time with it, I found it obvious that the (substantial) background noise it displays is _fixed_ and has no relation to the signal strength it receives. So, the prevailing wisdom didn’t seem to add up.


*Where’s the knob?*
Looking for answers, I pored over the manuals of numerous professional digital equalizers and other processors, from cheap to ultra-expensive, all with 24-bit resolution (like the BFD), and could not find a single manufacturer recommending maxed-out input levels. Even more surprising: By and large _the manuals scarcely mention input levels at all._ Hmmm.

Furthermore, I noticed that it’s hard to find a late-model digital EQ or processor with any sort of provision for input level control. If these products need maximum signal strength to function properly, _why don’t they have an easy-access adjustment on the front panel?_ Hmmm?

Rane’s DEQ-60 1/3-octave and PEQ-55 parametric equalizers are possibly the only two recent-production digital EQs with both input level controls _and_ a recommendation in their manuals for setting them. The advice is to set the gain level a substantial _~10 dB below maximum_. Hmmm!

The manuals for the Behringer DSP1124 and FBQ2496 Feedback Destroyers are another rare pair that discuss signal levels, even if the hardware itself has no input-level provision. I would submit that the max-levels axiom we typically hear is a _misinterpretation_ of what the Behringer manuals actually say. Behringer advises that the level meters should be kept out of the red (clipping) _merely for the sake of not overdriving the analog-to-digital (A/D) converters._ What is _not_ stated is that levels are supposed to be pushed as close to clipping as possible as a matter of course, or that this is necessary for best performance. The manuals merely note that signals that are too low are undesirable. 


*Catching up with the times*
So - why the dearth of level controls on these devices or advisements in their manuals? Post a question on the topic of digital level-setting at professional forums like the ProSoundWeb or Tape Op Message Board, where you’ll find people who use this kind of equipment for a living and have first-hand knowledge of its progress over the years, and they’ll tell you that the maximum-signal advice is obsolete. It dates back to the early ’90s or before, when 18- and 16-bit A/D converters (or even lower - yikes!) were the norm in professional digital processors. Today it’s accepted in pro audio circles that contemporary 24-bit processors for all practical purposes function identically to their analog counterparts. 

By contrast, older low bit-depth processors had relatively high noise floors and did not resolve low-level signals very well, the latter of which is why they required input levels to be pushed as high as possible. And guess what? They included front panel gain knobs to quickly and easily accommodate that necessary function. As an example, the Yamaha DEQ7, the first professional digital equalizer to hit the market in 1987, had 16-bit converters and a dynamic range of only 86 dB. Note the prominent input knob in the picture below, just to the right of the power switch.










*Yamaha DEQ7 Digital Equalizer c. 1987*​


*Bit depth, quantization and dynamic range*
Unfortunately, the only place where you’ll still find people touting the “maximum levels” advice for modern digital processors seems to be the home audio forums. What apparently hasn’t registered in our community is that greater bit depths have increased dynamic range to the point of making level concerns moot. 

With pulse-code modulation (PCM) sampling, the bit depth is what determines both dynamic range and signal-to-noise ratio. The rule-of-thumb relationship is that each 1-bit increase in depth adds about 6 dB of dynamic range (more precisely, 20 × log10(2) ≈ 6.02 dB). So, 24-bit digital audio has a theoretical maximum dynamic range of roughly 144 dB (6.02 × 24 ≈ 144.5), compared to 96 dB for 16-bit. Another benefit of increasing the bit depth is finer amplitude or voltage “steps” (a.k.a. quantization), which enables low-level audio signals to be more precisely resolved. 
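These rule-of-thumb figures are easy to verify with a few lines of arithmetic (a quick sketch; the ~6 dB-per-bit factor is just 20 × log10(2)):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of a PCM word: ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

def quantization_steps(bits: int) -> int:
    """Number of amplitude steps a PCM word can represent."""
    return 2 ** bits

print(round(dynamic_range_db(16)))  # 96 dB
print(round(dynamic_range_db(24)))  # 144 dB
print(quantization_steps(24) // quantization_steps(16))  # 256x the steps
```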

There are a number of ill-informed reasons commonly cited for the necessity of peaking out the input signals of modern 24-bit digital processors. One claims that anything lower than the maximum input level means that you’re not using all of the digital bits available, which will mean a loss of resolution. However, it’s generally accepted in the professional recording field that once you’re above 16 bits, optimizing signal levels is no longer an issue. This is because a 16-bit waveform, which has 65,536 amplitude or quantization “steps,” is considered the threshold of what is acceptable for hi-fi sound, because at that depth the human ear can no longer detect quantization errors at low levels. While there may be some debate about that in audiophile circles, a 24-bit waveform has *256 times* more amplitude “steps” - 16,777,216. It should be obvious that a 24-bit system has sufficient resolution to perform well above the 16-bit threshold, even with reduced input voltage.


*The case of the missing bits*
Another outdated claim is that low input signals will result in the loss of the “least significant bits” (LSB). The theory goes that once the low-order bits are lost, the A/D converter is downgraded to a noisier, lower-bit device. “Digital electronics may have gotten better, but the math has not changed,” an adamant maximum-signal supporter once claimed on a discussion thread. Okay then, let’s take a look at the math. 










*Least Significant Bit in a 3-Bit System*​

The first thing that must be realized is this: LSBs can’t possibly contribute to a reduction of an A/D converter’s bit depth. As you can see from the above picture, LSBs are merely a single step - the lowest step - in the voltage “ladder” that is quantization. _Bit depth_ is what determines quantization (resolution), not the other way around. Therefore a loss of LSBs, even a large number of them, cannot possibly downgrade a converter to a lower bit-depth (lower-resolution) device. A 24-bit system in particular can shed several hundred thousand LSBs from its 16,777,216 quantization steps and still be comfortably above - _miles_ above - the 65,536 quantization resolution of a 16-bit system.

Indeed, we can go even further. Since each additional bit doubles the number of quantization steps, each bit lost off the bottom merely halves them. Abuse the input level of a 24-bit system by a full 18 dB - throwing away three entire bits - and the remaining 21 bits still provide 2,097,152 quantization steps. Compare that figure to the mere 65,536 steps of a 16-bit system: even after that much level abuse you still have _32 times_ the resolution of 16-bit! 

Looking at it another way, the LSB in a 24-bit system is a mere _1/16.7 millionth_ of the full-range signal, which puts a single LSB at about -144 dBFS. Certainly, most sane people can recognize that degree of error is inaudible. 

Any way you slice it, with a high-resolution digital system the least significant bit is exactly that: insignificant. Even a mathematically-challenged guy like me can see that.











*Potential System Noise Added by Wholesale Loss of Bits *​

It should be beyond obvious that high-resolution 24-bit systems have effectively obliterated input signal issues. A 24-bit waveform, even at -14 dBFS, will certainly deliver dynamic range figures and low-signal quantization performance in vast excess of 16-bit. (Remember, 12-14 dB is the approximate difference between consumer and pro gear levels.) In other words, there is no reason to expect that operating a 24-bit digital processor at 14 dB or even 25 dB below full scale is going to reduce a potential 144 dB dynamic range to something akin to a cheap cassette tape (yes I know, I’m dating myself).
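To put numbers on that: at roughly 6 dB per bit, a signal running below full scale simply leaves the top few bits idle. A quick sketch:

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ~6.02 dB

def effective_bits(total_bits: int, level_dbfs: float) -> float:
    """Bits actually exercised by a signal peaking at level_dbfs
    (a negative number of dB below full scale)."""
    return total_bits + level_dbfs / DB_PER_BIT

print(round(effective_bits(24, -14), 1))  # ~21.7 bits - still well past 16
print(round(effective_bits(24, -25), 1))  # ~19.8 bits
```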


*dBFS vs. dBVU*
As it turns out, -14 dBFS is actually just about the right level for a digital processor. This is because the designers of professional digital gear have long tied the calibration of A/D converters to the output of traditional analog mixing consoles. This was necessary because digital processors were on the market long before digital mixing consoles (and it continues to be necessary because analog mixers aren’t going away anytime soon). Analog mixers use a signal reference known as dBVU (volume units), and 0 dBVU is the level where the mixer is at its optimal performance, delivering the least amount of noise with sufficient output to be far above the noise floors of any downstream processors. In contrast, digital gear uses a different signal reference, dBFS (full scale). As we know, a digital processor will hit its maximum at 0 dBFS and if pushed beyond that point will badly distort.

So, how do the dBFS and dBVU scales cross reference? Technically they can’t, because the digital peak scale is not equivalent to the analog RMS scale. But in the hardware-manufacturing industry there is an approximate consensus that a signal measuring 0 dBVU at the input of an A/D converter should come out of the converter somewhere in the -18 dBFS to -9 dBFS range. (Of course, there is some variance, depending on the output calibration of the analog device and the A/D converters used in the digital processor.) 
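As a sketch of that cross-reference - assuming one common calibration choice of 0 dBVU = -18 dBFS, which is an illustrative figure, not a universal standard:

```python
def dbvu_to_dbfs(level_dbvu: float, calibration_dbfs: float = -18.0) -> float:
    """Map an analog VU reading to the digital scale, given the converter's
    calibration point (assumed here: 0 dBVU lands at -18 dBFS)."""
    return level_dbvu + calibration_dbfs

print(dbvu_to_dbfs(0))  # -18.0 dBFS: healthy level, lots of headroom
print(dbvu_to_dbfs(4))  # -14.0 dBFS: a "hot" analog signal, still safe
```

The exact calibration varies by manufacturer, which is why the consensus is a range (-18 to -9 dBFS) rather than a single number.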

It should be obvious that the manufacturers are smart people who know what they’re doing. If they’re satisfied that calibrating 24-bit A/D converters in the -18 dBFS range provides ample headroom and a vanishingly low noise floor for their equipment, there’s no good reason to second-guess them. It should be clear from the lack of advisements in their product manuals that there is no “optimum” level with 24-bit digital audio per se, as long as the signal stays comfortably above the noise floor and below 0 dBFS. 


*Theoretic vs. real-world specs*
Another theory for maxing out input signals goes something like this, using the ubiquitous Behringer Feedback Destroyer as an example: Even though the BFD is a 24-bit device and should be capable of delivering a dynamic range of 144 dB, notice its background noise spec of 94 dBu (unweighted). That's barely 15 bits of resolution, which means that nine of the 24 bits are lost in the background noise. Since 15 bits is all the BFD really has available to define input signal levels, it’s imperative to supply it with the maximum possible signal strength.

In other words, “the bit-depth spec is meaningless, the noise spec is what really matters.” (Sidebar question: If a lower input signal means increased noise levels, then by pure logic shouldn’t the BFD be virtually screaming with background noise with _no_ input signal?)

In reality, at a certain point - probably around 20 bits - the A/D converter is sufficiently quiet that it no longer has anything to do with the component's overall background noise. Indeed, what this theory fails to consider is that _the A/D converters are only one part of the circuitry,_ which also includes other components that can have an effect on noise. 

The fact is that “real world” realities force limitations on both the analog and digital sides of the specifications. For example, if 24-bit A/D converters were really able to deliver a dynamic range of 144 dB, they would have to be capable of resolving signals well under a microvolt! Naturally, they can’t do that. (This is why the earlier graph showed 16-bit and 24-bit A/D converters having about the same noise floors.) On the analog side, down in this range transistors and resistors generate thermal (Johnson) noise simply from the heat-driven motion of their own electrons. So even if A/D converters _could_ be designed to resolve such low levels, the low-noise requirements of the other circuitry in the component - power supplies, balancing transformers or ICs, etc. - would be so stringent that the device would either be impossible to build, or too expensive. 
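The thermal-noise point is easy to quantify with the Johnson-Nyquist formula. A quick sketch, using an illustrative 1 kΩ resistance and the 20 kHz audio bandwidth:

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(r_ohms: float, bandwidth_hz: float,
                       temp_k: float = 293.0) -> float:
    """Johnson-Nyquist noise voltage of a resistor: sqrt(4*k*T*R*B)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bandwidth_hz)

# A 1 kOhm resistor over the 20 kHz audio band (illustrative values):
noise = thermal_noise_vrms(1_000, 20_000)

# The bottom of a 144 dB range, referenced to 1 Vrms full scale:
lsb_level = 1.0 * 10 ** (-144 / 20)

print(f"{noise * 1e6:.2f} uV thermal noise")  # ~0.57 uV
print(f"{lsb_level * 1e9:.0f} nV LSB level")  # ~63 nV - buried under the resistor
```

A single ordinary resistor hisses louder than the bottom of a true 144 dB range, which is why no real-world box ever achieves the theoretical spec.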

What is the result of these real-world limitations? Dig through those manuals I mentioned earlier and you’ll find that the best dynamic-range spec 24-bit processors can muster is between ~105-115 dB, _which is no better than the best analog gear._

If a 115 dB dynamic range spec is the best a digital equalizer can deliver, does that mean for all practical purposes we’re left with a 19-bit piece of equipment? Of course not. If that were the case, no one would have bothered to develop converters beyond 20-bit. 


*Vapor bits?*
We can further illustrate the fallacy of the “lost in the noise” theory with the Behringer DSP1124’s on-board gain-switching between the consumer -10 dBV and professional +4 dBu signal levels. 

Anyone who has connected the BFD in their full-range signal chain (i.e. for the main-channel speakers) can tell you that its background noise level jumps dramatically when you switch it from the consumer to the professional setting. Are we supposed to believe that the equalizer suddenly loses a good number of its digital bits in the noise, like some kind of vapor, _with the flip of a switch???_ 

Of course not. In reality no bits are lost. The converters are merely _swamped_ by the residual noise generated by the BFD’s overall poor circuit design. “Swamped” is not the same thing as “lost.” The full bit-depth is there and will certainly deliver superior low signal-level resolution, even if other issues limit noise performance. Fortunately, we typically use the BFD in the subwoofer signal chain, where its noise is largely inaudible. 


*Headroom is more important*
So - now we know why the level-setting topic gets scant mention in the manuals of current 24-bit processors: It’s a non-issue. Even if real-world issues prevent a true 144 dB dynamic range, 24-bit A/D conversion gives us the luxury of more than 50 dB of slack before poor input signal levels start to take a toll on noise levels, the humble BFD notwithstanding. What the “maximum levels” advocates fail to understand is that deeper bit-depth converters extend dynamic range _downward_ rather than upward. In other words, 24 bits is not louder than 16 bits, it’s _quieter_. 

Instead of running the BFD’s input levels all the way out to the top, it makes more sense to dial them back to allow for some headroom. After all, miles of headroom, afforded by excellent resolution of low-level signals, is one of the benefits of 24-bit audio. Headroom is important with the BFD because when its rear-panel switches are in the consumer -10 dBV position, it is possible for the subwoofer output of some home theater receivers to drive the meters into the red. 

Since the BFD has no on-board provisions for level setting, the common recommendation has been to drive the input signal to just below clipping, using a DVD with a bass-heavy sound track played at the highest volume setting you’d ever anticipate using. The problem with this advice is that DVDs can be unpredictable. For instance, the _Dark Knight_ Batman movie has bass levels far more severe than the “reference” DVD I had previously used to set up my system. The point is, you never know what a DVD is going to dish out. To arbitrarily pick something off the shelf as “the standard” and expect that nothing more demanding will ever come down the pike is foolish. 

According to bench tests that have been performed on the BFD, _there is a mere 1/10-volt headroom_ between the point where the meter’s red LED lights up and clipping begins. So you certainly want to avoid the red LEDs. Leave some headroom and set the meter in say, the -10 to -12 dB range. If it’s good enough for Rane’s $800 digital EQs, it’s good enough for the $100 BFD.


----------



## Wayne A. Pflughaupt

*Part Six
About Professional Amplifiers​*

When it comes to adding pro gear to a home system, amplifiers seem to generate even more confusion than digital processors. Persistent topics on the forums seem to revolve around whether or not the AVR is generating a signal strong enough to drive the amp to its maximum output, and how to know if that is actually happening. 

First, it needs to be established that those knobs on the amplifier’s front or rear panel are not “volume” controls like the one on your AV receiver. Neither are they “power” controls; they have no effect on the maximum wattage the amplifier can generate. They are _input sensitivity_ controls, also known as “gain” controls: they determine how much input voltage is required to drive the amplifier to its full rated output.

As with the professional mixing consoles we discussed in Part 2, the amplifier’s gain controls regulate the level of the input signal. With the mixing console we saw that the gain knobs function to maximize signal-to-noise ratios and bring relative uniformity to widely varying input signal levels. With an amplifier, widely-varying input signals can be an issue as well, but the gain controls serve a different purpose. Their function is to match the level of the signal to the amplifier’s input stage. 

In the full clockwise position, the amp accepts the full signal strength it is receiving from the pre amp (e.g. the mixing console in a PA system, or the AVR in a home theater). As the knob is rotated counter-clockwise, the signal is padded down before it enters the amp’s input stage. This is why the accompanying nomenclature on the amp’s face usually reads negative dB values that increase with counter-clockwise (gain reduction) travel. However, reducing the amp’s gain setting doesn’t reduce the amplifier’s power output; it merely means that more input voltage is required to _get_ the full power output. 
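The relationship can be sketched in a couple of lines. The 1.0 Vrms sensitivity here is a made-up example figure, not any particular amp’s spec:

```python
def required_input_vrms(sensitivity_vrms: float, pad_db: float) -> float:
    """Input voltage needed for full rated output when the gain knob
    pads the incoming signal down by pad_db (a positive number of dB)."""
    return sensitivity_vrms * 10 ** (pad_db / 20)

# Knob fully clockwise (0 dB pad): the rated sensitivity applies as-is.
print(required_input_vrms(1.0, 0))            # 1.0 V for full output
# Knob backed off 6 dB: roughly double the input voltage is now needed.
print(round(required_input_vrms(1.0, 6), 2))  # ~2.0 V for full output
```

Either way the amp still reaches the same maximum wattage; only the input voltage required to get there changes.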










*Amplifier Gain Controls*
Courtesy of Crest Audio​
With the gains properly set, the maximum output from the pre amp will just begin to light the amplifier’s clipping indicator. A pre amp with a strong (i.e. “hot”) signal output will require the amp's gain controls to be dialed back (counter-clockwise) to a low setting. On the other hand, a pre amp with weak output will require the gain controls to be raised (rotated clockwise) to a higher setting. The undesirable scenario would be a low amplifier gain setting coupled with a low input signal from the pre amp. That could leave the amplifier unable to reach its maximum rated output, and naturally that would be wasting the amp’s potential.

The other important feature of a typical professional amplifier is the clipping indicator lights. Contrary to popular belief, they do not necessarily tell you if the amplifier is actually being overdriven. 

To understand that, it’s important to realize that an amplifier is actually a two-stage component: an input stage and an amplification stage. The input stage is a line-level device that takes the incoming signal, regulates it to a usable level (via the aforementioned gain controls) and perhaps provides some digital signal processing (increasingly common these days). By contrast, the amplifier section is a fixed-gain device - in simplest terms a fixed multiplier, outputting a voltage that is “X” times greater than the signal strength coming in (i.e. the gain, usually quoted in dB). The output of this stage is voltage and current into the speaker circuit, the relationship between them determined by the load being driven (Ohm's Law, V=IR, or more correctly for this scenario, E=IZ). 
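To put rough numbers on the fixed-multiplier idea - the 29 dB gain and 8-ohm load below are illustrative figures, not any specific amplifier’s spec:

```python
def amp_output_vrms(input_vrms: float, gain_db: float) -> float:
    """Fixed-gain stage: output voltage = input * 10^(gain/20)."""
    return input_vrms * 10 ** (gain_db / 20)

def power_watts(v_rms: float, load_ohms: float) -> float:
    """Power delivered into a resistive load, P = V^2 / R."""
    return v_rms ** 2 / load_ohms

# Hypothetical numbers: 1 Vrms in, 29 dB of voltage gain, 8-ohm speaker.
v_out = amp_output_vrms(1.0, 29)      # ~28.2 Vrms at the speaker terminals
print(round(power_watts(v_out, 8)))   # ~99 W into 8 ohms
```

Note how the gain figure alone says nothing about maximum power; that depends on the voltage the output stage can actually swing into the load.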

The truth is, the amplifier stage hardly ever exhibits _its own_ audible distortion. If, say, you have a comfortable input signal to your 8-ohm amp but bog it down with a 2-ohm speaker, it will probably sound clean right up to the point where it overheats and shuts down.

That’s not to say that the amplifier’s clipping LEDs are useless. The amp’s input stage can be clipped by an excessive input signal, just like any other line-level device. If the inputs are clipping, a square-wave signal is being sent to the amplifier section. The result is audible distortion, which can damage the speakers even if the amplifier section isn’t maxed out in terms of voltage and current.


----------



## Wayne A. Pflughaupt

*Part Seven
How to Determine if Your AVR Will Drive a Professional Amplifier​*

*More specs: Can’t live without them*
In order to know if your AVR can drive your pro amp, it is helpful to become familiar with more of those dreary specifications. For the receiver, the relevant specification is the _maximum output voltage_ of its pre amp section, while for the professional amplifier the specification we’re concerned with is the _input sensitivity._ 

Sounds easy? It would certainly be nice if it were. The problem is that not all consumer manufacturers publish output voltage, and when they do they often don't clarify the figures with accepted benchmarks such as dBV, dBu or Vrms. Nevertheless, the spec will typically be the Vrms maximum output level, not nominal. Generally, if the spec doesn't specifically say nominal output, you know it's max. And if it doesn't specify peak voltage, you know it's RMS. Clear as mud?

And it gets worse. Often the output voltage spec given is a grossly inaccurate figure. For instance, my AVR is rated at 1 volt, but my measurements determined its output was actually well over 4 volts.

On the professional side, things are better with amplifier sensitivity specs, at least in some ways. You can generally rely on the stated figures’ accuracy, but the problem is that there is no definitive standard for stating the specification - or even what to call it. Nice, huh? 

Probably the most commonly used measure for a sensitivity spec is Vrms, a.k.a. RMS voltage. For our purposes, this is the benchmark we need to know in order to determine if an AVR will drive the amp in question. If you see a sensitivity figure stated as some other measure, it will need to be converted to Vrms. 

The second-most commonly-used sensitivity benchmark is dBu – as in +3 dBu. In order to convert dBu to Vrms, use the formula Vrms = 0.775 × 10^(dBu/20). Thus we can see that +3 dBu ≈ 1.09 Vrms.
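That conversion is simple enough to script. (The 0.7746 V reference is the voltage that dissipates 1 mW into 600 Ω, the historical definition of 0 dBu.)

```python
def dbu_to_vrms(dbu: float) -> float:
    """Convert a dBu level to RMS volts.
    0 dBu = 0.7746 V (sqrt(0.001 W * 600 ohms))."""
    return 0.7746 * 10 ** (dbu / 20)

print(round(dbu_to_vrms(3), 2))  # +3 dBu ~= 1.09 Vrms
print(round(dbu_to_vrms(4), 2))  # +4 dBu ~= 1.23 Vrms (pro nominal level)
```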

Complicating matters further, some companies (like Yamaha) seem to go the extra mile to confuse us, delineating sensitivity specs with plain dB figures, as in 3 dB. If you see a spec like this you can be sure they are referring to dBu, as that is a typical pro audio benchmark.


*How to determine your AVR’s output voltage*
Now that we know how to divine sensitivity specs, how do we determine if our AVR has sufficient output to drive a professional amp? As it turns out, it’s not that hard to determine an AVR’s output voltage, despite the manufacturers’ best (or worst) efforts to keep us in the dark. 

The following exercise is based on the premise that in this digital age the maximum input signal a receiver will ever see from a DVD player (or other digital source component) will be a 0 dBFS signal, generated by the media. If you recall from Part 2, consumer audio has a relative uniformity throughout the signal chain, from the source media on down. So we know that the DVD player will be able to resolve a 0 dBFS signal from the disc, and the receiver can also handle the signal from the player. So, all we have to do is feed the AVR a 0 dBFS signal, turn its volume control to the maximum setting, and measure the voltage at the pre amp outputs. Piece of cake. From that point we can determine the maximum clean output.

This exercise will let you check the voltage from the front left, right, center, or subwoofer outputs. The required tools and recommended supplies include:


Digital multimeter.
Test disc with reference signals (available at the end of the article).
A long RCA cable ~6 ft. or so.
RCA coupler.
Female-to-female "turnaround" XLR adapter (for balanced outputs).













​

Following this article are a few reference test signals for this and other gain structure experiments we will perform. Download and burn them all to a disc that you can use in your DVD player. For this particular exercise we will use the “60 Hz 0 dBFS Reference Sine Wave.” 

You will also need a volt/ohm meter (VOM), preferably of the digital variety. Fortunately it doesn’t have to be anything high-end. These days most digital multimeters, even relatively inexpensive ones, are able to provide a very high degree of accuracy, usually around 0.5%. We’ll use a 60 Hz signal for this particular exercise because that’s the frequency digital VOMs in the USA are typically calibrated for. And indeed I found that with other frequencies the reading could be off by 1/10 volt or so. (For our friends in places with 50-cycle power, the 60 Hz signal is fine for you as well. The difference between a 50 and 60 Hz measurement will be undetectable.) The only thing to keep in mind is that most meters are average-responding, not true-RMS. Fortunately they are calibrated to display RMS for a sine wave, so the sine-wave signal from our test disc will show up as RMS at the AVR’s outputs.
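For the curious, here is why an average-responding meter can display a correct RMS figure for sine waves only. A sketch of the underlying math:

```python
import math

def sine_rms(v_peak: float) -> float:
    """RMS value of a sine wave: peak / sqrt(2) (~0.707 * peak)."""
    return v_peak / math.sqrt(2)

def sine_rectified_average(v_peak: float) -> float:
    """Rectified average of a sine wave: 2 * peak / pi (~0.637 * peak)."""
    return 2 * v_peak / math.pi

# An average-responding meter measures the rectified average, then scales
# it by the sine "form factor" (~1.111) so the display reads RMS. That
# scaling is only correct for sine waves - hence the sine test signal.
form_factor = sine_rms(1.0) / sine_rectified_average(1.0)
print(round(form_factor, 3))  # ~1.111
```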

You can use either analog L/R connections between the DVD player and the AVR, or a digital connection, but the latter is generally preferable. *Don’t forget to change the speaker setting in the AVR’s menu to “Large.”* Set the receiver to “Bypass” or “Stereo” mode if you want to measure the main left or right channels, or to “Dolby Pro Logic” if you want to measure the center channel. 

We’re ready to proceed. My AVR has all speaker-level settings referenced to the main left and right channels, which are fixed and cannot be adjusted in the menu. If your AVR allows for adjustments for the front left and right channels, they should be set to maximum for this exercise, as should the center channel and subwoofer if you intend to measure those too. In addition, make sure any auto-EQ functions are disabled and/or tone controls are adjusted to flat (0 dB gain).

After setting the output levels, *unplug all speakers from the AVR!* Once that is done you can start the test disc, turn the AVR’s volume all the way up, and measure the output for the desired channel with the VOM. The meter should be set for *low-range AC voltage.* The measurement will be taken across the RCA connector’s tip and sleeve. If you don’t have easy rear access to your receiver, an RCA cable plugged into the output you want to measure and routed to where you can get to it is very helpful. It’s easier to take a measurement from a female RCA than a male, so you might want to acquire an RCA coupler before you start. 

For measuring a balanced output, use a standard mic cable with a female-to-female XLR converter. Without the converter you’ll be taking a reading from the little pins of the male end of the cable, which is tricky. It’s much easier to stab the meter probes into the sockets of a female connector. Take the reading across Pins #2 and #3.











*Cleaning things up*
Write down the voltage reading you obtain for future reference, as we have another issue to deal with: we need to determine if the pre amp is putting out a clean signal at that voltage level. The best way to establish this, seat-of-the-pants style, is by connecting a pro amp to one of the receiver’s main outputs, and reconnecting a speaker. (No need to connect both, as we’re only testing one. Anything we find or measure will translate to the other channels.) 

For this exercise we will use a 1 kHz 0 dBFS reference signal. Start with the amp at the lowest gain setting that will get sound from the speaker. Start the test signal, and slowly increase the receiver’s volume control. While doing so adjust the amp’s gain as needed to hear the signal well. At some point you will hear an “overtone” added to the signal. This is harmonic distortion generated by the pre-amp clipping, so reduce the receiver's volume until the overtone is no longer audible.

At this point, *do not change the AVR’s volume setting*. Disconnect the amplifier from the AVR (probably best to turn off the receiver and amp before doing this). Then run the 60 Hz reference signal once again and take a new voltage reading from the pre amp output. Make a note of this reading, for future reference, as it is your pre-amp’s _usable output voltage._ *Also, make a note of the volume setting on the AVR for this reading.* This way when you do the gain structure exercise outlined in Part 9 you’ll know where to set the AVR’s volume without having to measure all over again.

If you don’t have an outboard pro amp to assist in precisely measuring the AVR’s clean output, you should be safe reducing your maximum-measured voltage by 30%. My test receiver had a maximum output of 5.8 volts, but distortion set in at 4 volts, which is 30% lower than the maximum reading.


*The lowdown*
If you’re interested in the subwoofer output voltage, do not assume it will be the same as the main channels. Often you will find that the subwoofer output is significantly higher than the main channels, so it’s not a bad idea to measure it as well. Make sure the subwoofer is enabled in the AVR’s menu, with the crossover set at the highest possible frequency, and the AVR set for “Bypass” or “Stereo” (hopefully your AVR will keep the subwoofer enabled). Adjust the AVR’s subwoofer trim level to the maximum setting, with the master volume setting also at max. Then, measure the sub output with the 60 Hz test tone. 

There’s no need to do an audible distortion test for the subwoofer output. What you can do is extrapolate the “clean” voltage figure from your two main-channel figures. Use the maximum-voltage and (lower) clean-voltage readings to calculate what percentage the clean voltage figure is below the maximum reading. For instance, if your maximum main-channel voltage was 6.5 VAC and the clean voltage was 5.5 VAC, the clean figure is about 15% below maximum. So, subtract 15% from the maximum subwoofer voltage reading and you’ll be in the ballpark for your “clean” voltage output figure. Note that this figure is merely for reference when choosing a compatible amplifier (see below). There’s no need to adjust the AVR’s subwoofer trim, as it will be automatically reduced to “clean voltage” when the AVR’s _master volume_ is turned down to the “clean voltage” setting.
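The extrapolation can be sketched in a few lines. The 8.0 V subwoofer maximum below is a made-up example figure:

```python
def extrapolated_clean_voltage(main_max: float, main_clean: float,
                               sub_max: float) -> float:
    """Scale the subwoofer's maximum reading by the same fraction the
    main channels dropped from max to clean (a rough estimate only)."""
    fraction_below_max = (main_max - main_clean) / main_max
    return sub_max * (1 - fraction_below_max)

# Example: 6.5 V max / 5.5 V clean on the mains, 8.0 V max on the sub out.
print(round(extrapolated_clean_voltage(6.5, 5.5, 8.0), 2))  # ~6.77 V
```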


*Shopping time!*
The clean-voltage figures you measure from the AVR’s outputs can be cross-referenced to a pro audio amp’s Vrms input sensitivity specification with reasonable accuracy. For example, an amplifier I had on hand for evaluation has a sensitivity spec of 1.0 Vrms, and I was able to activate its clipping indicators with a measurement of 0.8 volts from the AVR’s outputs.

Once you know your AVR’s usable output voltage it’s easy to determine if a professional amp you’re considering will be compatible. Just choose one with a sensitivity spec that matches or (preferably) is lower than your receiver’s usable output voltage. You’ll seldom see amps with sensitivity higher than about 1.5 Vrms. Most modern AVRs (except perhaps those at the low end) should have no problem driving a professional amplifier, as your own tests will probably confirm. So the need for outboard signal boosters to level-match AVRs and professional amps is virtually a non-issue.


Attached below is a step-by-step guide to measuring the voltage output from pre amp outputs. You can print it off to have close at hand when performing the exercise.


----------



## Wayne A. Pflughaupt

*Part Eight: 

How to Determine if Your Equipment Maximizes Dynamic Range *​

*Putting together a quiet pro/consumer system: Easier said than done*
The necessary objective of a high-performance home audio system that combines professional and consumer equipment is to achieve the best dynamic range. But what is the best way to accomplish that in a system where peak signal capacities and noise floors can vary tremendously? Simple: Choose your equipment carefully. 

Naturally, you can’t hope to maximize the dynamic range of a hybrid home theater system if the background noise levels of the individual components are a “big unknown.” To that end it’s helpful to know the differences between pro and consumer noise specifications, as we discussed in Part 3. But the sad news is, you can only trust published specs so far. This is true of both home and pro gear. 

It should be obvious that your system can only be as good as the weakest link in the signal chain. Despite what some say, noise isn’t meaningfully cumulative when adding components - uncorrelated noise sources do sum, but only slightly, and the noisiest component dominates. In other words, you don’t automatically get substantially more noise with each additional component. If each component has a true 100 dB noise floor, the system’s floor will be within a few dB of that. The noise floor only really suffers when a component with (for example) an 85 dB noise floor is added. Now the system will never be quieter than that.
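For the curious, here is a sketch of how uncorrelated noise floors actually combine: the noise _powers_ add, so two equally quiet components cost about 3 dB, while one noisy component simply takes over.

```python
import math

def combined_noise_floor_db(floors_db):
    """Combine uncorrelated noise floors (each in dB below reference;
    bigger number = quieter) by summing their noise powers."""
    total_power = sum(10 ** (-f / 10) for f in floors_db)
    return -10 * math.log10(total_power)

print(round(combined_noise_floor_db([100, 100]), 1))  # ~97.0 dB: 3 dB penalty
print(round(combined_noise_floor_db([100, 85]), 1))   # ~84.9 dB: weak link wins
```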

As we discussed in Part 4, the dynamic range of a hybrid system will typically be limited at the low end (noise floor) by the pro gear, since it may not be as quiet as the home equipment, and at the upper end (signal peak) by the AVR, since most are not able to generate an output level anywhere near the +26 dBu upper-range capacity of the professional gear in the system. 










It should go without saying that the AVR's noise floor is critical. If you ultimately need to employ a signal booster, you need an ultra-quiet AVR, because the booster is going to boost everything the AVR generates, including any noise that may be present.

Fortunately, most AVRs have excellent noise floors, but they are not all created equal. As an example, consider the gear I currently have on hand - a handful of Yamaha flagship AVRs from years past. 

The one I’m currently using in my system, an RX-V1 that’s nearly ten years old, has a noise floor so low it’s practically a “black hole.” Even with the volume control turned all the way up with no signal present, the background noise is so faint you have to strain _hard_ to hear it with your ear directly to the speaker. The RX-V1’s predecessor in our home theater was a model DSP-A3090 that dates to about 1996. It's not as quiet as the RX-V1: With your ear to the speaker you can easily hear some residual hiss. Practically nothing, mind you, and certainly nothing you’d ever notice in everyday use. But it is there. Going even further back I have a DSP-A2070, a Dolby Pro Logic flagship that dates back to 1993. The 2070 generates even more background hiss than the 3090, and some residual 60-cycle hum as well.

You’ll be surprised to learn that all three of these old Yamaha flagships _have the same signal-to-noise spec._ How can this be, you ask? Well, behold the “magic” of the A-weighted S/N specification standard (for those of you who skipped ahead, this was discussed in Part 3). 

On the professional side, we have processors like the Electro-Voice DC-ONE and the Behringer Feedback Destroyer as poor examples of the type. When it comes to hiss these two are noise machines, despite the EV’s stated 111 dB dynamic range and the BFD’s hum and noise spec of >96 dBu. The BFD is especially noisy when its gain switches are set for +4 dBu. By comparison, I’ve used analog pro equalizers with the _exact same_ noise spec as the BFD that were virtually dead silent.

So - specs are a good place to start when choosing your equipment, but when it gets right down to it you should evaluate for yourself each piece that you add to your system in order to know exactly what you have. And don’t assume you can rely on price as an indicator. That DC-ONE, for instance, sells for between $600 and $900. And those Yamaha flagships had list prices of $2,000 to $3,000.


*Testing your components for noise*
If you’re dealing with the components in your subwoofer signal chain, for the most part there’s no need to be concerned about poor-quality equipment adding noise. Hiss generated by processors like the BFD equalizer will only be heard in the higher frequencies, not the low frequencies. Occasionally sixty-cycle hum can be an issue, but that’s about it in most cases.

But when it comes to your full-range signal chain, it’s never a bad idea to know what you have. I’ve come up with some crude but effective experiments the average hobbyist can perform that will tell you what you need to know, despite what the specs say. The tools needed are:


* Sound level (SPL) meter.
* Test disc with reference pink noise signal (download available at the end of the article).

I’ve found that pink noise works better for this exercise than sine waves. 

The best way to accomplish this procedure is to put your two front L/R speakers right next to each other. It will also save you a lot of steps (as in walking) if you move them close to your equipment rack as well.

We’re going to take a look at both outboard processors and pro amps. Since typically a processor cannot be used without outboard amplifiers, we’ll start with the amplifier. Basically we want to determine if the amp, especially if it’s a professional model, is as quiet as the reference unit, which is the AVR. 

If you’re testing a pro amp, the first step is gain-matching it to your receiver’s internal amp. Connect the amp between, say, the right channel of the receiver and the speaker; start with the amp’s gain control all the way down. Leave the other speaker connected directly to the left channel of the receiver.

Make sure any auto-EQ functions are disabled and/or tone controls are adjusted to flat (0 dB gain). Start the pink noise signal, pan the AVR’s balance control to the speaker connected directly to the receiver, and adjust the volume to where the SPL meter is reading somewhere in the 70-75 dB range. Then pan the balance control over to the channel connected to the pro amp. Increase the amp’s gain until the test signal gets the same SPL reading as the other speaker. When the SPL readings for the two speakers match, you have level-matched the outboard amp to the receiver’s built-in amp.










With the amp level-matched, you can stop the pink noise signal (that’s always a joy!). The next step is to switch the AVR to an unused input (to remove the source component from the signal chain), then turn the AVR’s volume control all the way up, and listen for any noise you might hear from the speakers. What you want to do is compare the noise level from the two speakers, with the one connected directly to the AVR as the reference.

Hopefully you’ll have to put your ear right up to the speakers to hear anything. If so, this is good! If you can hear hiss, buzz etc. from several inches back - not good. 

If you can detect a higher noise level from the speaker connected to the pro amplifier, it means the amplifier is not as quiet as your AVR. If that speaker _is_ as quiet as the one connected straight to the AVR, that’s good. It means the amplifier is at least as quiet as your AVR. Actually it could be quieter, but if so you won’t be able to tell. Remember, your noise floor is only as good as the weakest link in the signal chain. If the AVR generates more noise than the amp, it’s going to swamp the amp’s “quietness” (my apologies for the juvenile terminology).

The next step is to evaluate an equalizer, electronic crossover, digital speaker processor, etc. that you might want to insert between the AVR and outboard amplifier. You can do this one of two ways: (1) Compare the AVR direct-to-speaker signal chain with the AVR/processor/outboard amplifier signal chain...










...or (2) Compare an AVR/outboard amplifier signal chain to the AVR/processor/amplifier signal chain. If you discovered from the previous experiment that the amplifier added some noise, then I’d suggest going with (2), the reason being that you should use the new noise floor as your reference.










*Turn off the AVR and amplifier before connecting the processor in-line between the AVR and amplifier*. After it’s connected turn on all the equipment and _make sure that any and all level controls and filters for the equalizer/processor are set for flat (i.e. no gain)._ Also, if you’ve added the amplifier to the left-channel signal chain (shown above), adjust the left gain control until its setting matches the right.

As before, we want to see if the channel with the processor generates more background noise than the channel without it, so do the ear-to-the-speakers test again with the AVR volume at max. It’s a good idea to check the processor in its “bypass” mode, as well as “engaged” with all gain settings set to flat, just to see if the latter adds any noise. After all, you aren’t going to be operating the processor in bypass mode. (It’s probably a good idea to run the AVR’s volume down before switching the processor between “bypass” and “engaged.”) A good processor should show little-to-no difference in background noise between its “bypassed” setting and “engaged” with all gain adjustments set to flat.

So - what if you find that the pro gear you’re adding to your system adds background noise? Well, our crude “ear to the speaker” test is one thing, but what matters in the end is whether or not the extra noise is at an objectionable level during normal operations. Once you’ve accomplished a proper gain structure (which we’ll discuss next), you can make that evaluation with your system volume control set to the level you normally listen at. Typically any noise will be lower than what you saw with the “ear to the speaker” test with the volume maxed out.


Attached below is a step-by-step guide for the background noise analysis. You can print it off to have close at hand when performing the exercise.


----------



## Wayne A. Pflughaupt

*Part Nine

How to Perform an Optimal Gain Structure*​

*At last, the moment we’ve been waiting for!*
Okay, you’ve carefully chosen the AVR, signal processor and amplifier(s) for your pro/consumer home theater system. Now we can work on getting a good gain structure.

The mystery and confusion surrounding gain structure in a mixed pro/consumer system can be easily resolved once you get past a few preconceived notions. The first is the expectation of achieving a textbook gain structure by pro audio standards when you’re starting with a low-output home audio front end. It isn’t going to happen, so forget it.

Next, a modern digital processor isn’t going to turn into a noisy 8-bit piece of junk if it doesn’t get a +22 dBu signal through it. It will work just fine with a consumer signal level with no noise penalty. Any background noise it may have is fixed and will not change with signal levels, so choose your accessories carefully.

Likewise, low signal levels will not reduce the processor’s dynamic range. That makes as much sense as saying your Ferrari no longer has a 500 HP engine if you never drive it above 50 MPH. A pro audio processor is designed to accommodate the extreme signal peaks and crest factor of music in a live sound-reproduction situation. If a home theater system doesn’t wring every last dB out of a processor’s peak signal capacity, it doesn’t mean you’ve reduced its dynamic range. It only means you don’t _need_ all of it.

Once you get past these things, you’ll find that gain structuring a mixed system is pretty easy. The only legitimate concern is whether or not the AVR has enough output to drive a pro amp. If it can, all that needs to be done is to set the amp’s gains so that it reaches clipping at the same time as the AVR’s pre amp.


*Let’s gain structure!*
The required tools for the gain structure procedure include:

* Test disc or other source with reference pink noise signal (available at the end of the article).

My experiments have shown that pink noise is a better signal for a gain structure process than the sine waves we used for our other experiments, for a number of reasons. For one, pink noise is a random signal (meaning it varies in intensity) which makes it a better representation of the real-life signals we get from our source media. Using Audacity, the overall level of our reference signal has been adjusted a bit below 0 dBFS, but peaks up to 0 dBFS. (Remember, a 0 dBFS signal is the strongest that your DVD player and AVR will ever see.)
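As a miniature sketch of what that Audacity adjustment does, here is peak normalization in code, using uniform random noise as a stand-in for the real pink noise signal (illustrative only; the actual reference file was prepared in Audacity):

```python
import math
import random

def peak_normalize(samples, target_dbfs=0.0):
    """Scale a signal so its largest absolute sample lands at
    target_dbfs (0 dBFS = digital full scale = 1.0)."""
    peak = max(abs(s) for s in samples)
    target = 10 ** (target_dbfs / 20)
    return [s * target / peak for s in samples]

random.seed(1)
# Uniform random noise as a stand-in for real pink noise
noise = [random.uniform(-0.4, 0.4) for _ in range(48000)]
normalized = peak_normalize(noise)

peak_dbfs = 20 * math.log10(max(abs(s) for s in normalized))
rms_dbfs = 20 * math.log10(
    math.sqrt(sum(s * s for s in normalized) / len(normalized)))
print(round(peak_dbfs, 1))   # peaks now reach exactly 0 dBFS -> 0.0
print(rms_dbfs < peak_dbfs)  # the average level sits below the peaks
```

The result mirrors the reference signal described above: the peaks just touch 0 dBFS while the average level stays a few dB below it, like real program material.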

In addition, pink noise is broadband, meaning it contains the full audio frequency spectrum from the highs on down to the lows. This makes it useful for both the main channels and subwoofer. It’s also the ideal test signal if you’ve employed equalization in your system, either from an outboard unit or what might come with the AVR (i.e. its tone controls or auto-EQ such as Audyssey or MCACC). The equalization hopefully resulted in reasonably-flat _in room response_ from the speakers, but in doing so it means _electrical_ response is no longer flat. So if you try to gain-structure an equalized system using frequency-specific sine waves, such as the 1 kHz or 60 Hz signals we used previously, it would throw everything off if equalization had boosted or depressed those specific frequencies.

After equalization, some frequencies will naturally be electrically “hotter” than others, and that will cause the electronics to clip at those “hotter” frequencies first. Obviously clipping is what we want to avoid at all costs, no matter where in the frequency spectrum it may occur. So we want to perform the gain structure procedure with all equalization on-line.
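The headroom arithmetic is one-for-one: every dB of EQ boost at a frequency costs a dB of clipping margin at that frequency, while a cut gives margin back. A tiny sketch with hypothetical numbers:

```python
def clip_margin_after_eq(headroom_db, eq_boosts_db):
    """Headroom remaining at each EQ'd frequency: every dB of boost
    costs a dB of clipping margin; a cut gives margin back."""
    return {freq: headroom_db - boost for freq, boost in eq_boosts_db.items()}

# Hypothetical: 10 dB of electrical headroom before EQ, then a curve
# that boosts 60 Hz by 6 dB and cuts 200 Hz by 3 dB
margins = clip_margin_after_eq(10.0, {"60 Hz": 6.0, "200 Hz": -3.0, "1 kHz": 0.0})
print(margins)  # 60 Hz clips first, with only 4 dB of margin left there
```

This is exactly why the gain structure has to be set with the equalization active: the clip point of the system is set by the "hottest" EQ'd frequency, not by the flat response.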

To that end you will want your receiver to run its auto EQ function _before_ you begin your gain structure process. You won’t need to worry about maximizing the amplifiers’ gain settings for this – just get them up enough to allow sufficient volume for the auto EQ feature to work. (Sure, the receiver will also adjust the speaker levels at the same time, but we will be manually re-setting them when we begin the gain structure process.)










*Example of System Electrical Response After Equalization*​

If you’re using consumer-grade amplifiers in your system, there’s no gain structure process to worry about. Since home audio benefits from a relatively uniform signal chain (as discussed in Part 2), it's “plug and play” all the way. However, in the event that your consumer amps do have gain controls, set them according to the manufacturer’s recommendations. Or, if they have clipping indicators, you can follow the steps outlined below.

That leaves us with systems that mix pro and consumer gear. If you have a pro audio equalizer or other processor, just connect it between the AVR and amp (either consumer or professional) and you’re done. The processor has no level-adjustment requirements (as thoroughly documented in Parts 4 and 5).

Now, on to the pro amplifiers. To begin, *make sure all speakers are disconnected from the amplifiers!* Also, make sure all speaker-level settings in the AVR’s menu are set to max, including the subwoofer output. Begin with the amplifiers’ gains all the way down (counterclockwise), and make sure the AVR is set for “Stereo” mode – i.e. a straight two-channel signal. Please note, some receivers or pre-pros may bypass all internal equalization when set for “Direct” or “Bypass” mode. Do not use those. As mentioned above, we want all equalization to be in place and active.

Start the pink noise signal and turn the AVR’s volume control to the setting you previously determined delivered the highest clean (undistorted) signal (see Part 7 if you’re unsure where that is). Then increase the amplifier’s gain control until the clipping indicator begins to blink, and keep increasing the gain until the flickering indicator lights steadily. Setting the amplifier’s gain to clipping with a just-below-clipping signal from the pre amp ensures that both will reach clipping at the same time in actual use. This is the goal of a successful gain structure.
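If you prefer numbers to blinking lights, the gain setting this procedure converges on can be estimated. A sketch with hypothetical figures (a 2 Vrms maximum clean pre-out and a 300 W into 8 ohm amplifier - substitute your own gear's specs):

```python
import math

def required_amp_gain_db(preamp_max_vrms, amp_rated_out_vrms):
    """Voltage gain that makes the amp hit its rated (clipping) output
    exactly when the pre amp is at its maximum clean output."""
    return 20 * math.log10(amp_rated_out_vrms / preamp_max_vrms)

# Hypothetical figures: AVR pre-out clips at 2 Vrms; a 300 W into
# 8 ohm amp reaches full output at sqrt(300 * 8) ~ 49 Vrms
gain = required_amp_gain_db(2.0, math.sqrt(300 * 8))
print(round(gain, 1))  # -> 27.8
```

With these numbers, if the amp’s maximum voltage gain were, say, 32 dB, its gain control would end up roughly 4 dB below maximum - the same setting the clip-indicator procedure arrives at by ear and eye.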

Repeat the process for all outboard amplifiers in the system. If you’re also using amplifiers for the rear channels, we can’t give them a dedicated signal, but you can connect them to the AVR’s main channel outputs for this gain-setting process, then move their connections back where they belong afterward. Center-channel amplifier gain-setting can be accomplished by switching the AVR to Dolby Pro Logic.

If you’re using an AVR, a good way to double-check your settings for the main-channel amplifiers is to re-do the level-matching exercise outlined in Part 8: Connect a speaker to the amplifier, play the pink noise, and adjust the AVR volume to something between 70 and 75 dB SPL. Then move the speaker wire from the amplifier to the AVR’s speaker connection (i.e. its built-in amplifiers) and re-measure the SPL. The two readings should either match, or be 1-2 dB louder for the outboard amp. If the SPL reading for the outboard amplifier is lower, it means the pre amp will be running at higher volume settings than before in normal use. And that means the pre amp will reach clipping before the outboard amplifiers will - not good.

Double-checking is a bit “iffy” if you’re using a pre-pro, since there are no built-in amplifiers for comparison. However, if you find you’re consistently using higher volume settings than with the consumer amps you were previously using, that may be an indicator of a less-than-optimal gain structure. Just keep an eye (ear?) out for audible distortion, especially during demanding action-movie scenes. If needed, re-perform the gain structure process with a lower pre-amp volume setting than before, or you could just bump the amplifier’s gains up a notch or two.

If you have a fully active system with a speaker processor and amplifiers for each driver, the levels between the various amplifiers will need to be adjusted in order to get the proper blend between the tweeters, midrange and woofer speakers. My suggestion would be to make the necessary adjustments by _reducing_ amplifier gains (or the processor’s signal to the amplifier), not increasing. We’ve already adjusted gains for highest setting before clipping, so moving any amplifier gains higher (or increasing the signal level to the amps via the processor) will push them into clipping.


*Additional info for analog equalizers*
Professional analog equalizers typically have a master gain control marked in ± dB increments. The primary purpose of the master gain is what is commonly called “make-up gain,” meaning to compensate for an overall post-EQ change in audible volume level. Typically it should be set straight up at 0 dB, but if you notice with a music source that the volume noticeably changes when the EQ is bypassed, the gain can be adjusted up or down so that the EQ’d and bypassed volumes match. However, for home theater this should only be an issue with subwoofer EQ. If main-channel equalization produces a significant difference in bypassed vs. EQ’d volume, I’d suggest that you’ve probably gone overboard with your equalizing. Regardless, if you find you need to adjust the equalizer’s master gain from its neutral 0 dB setting, you should recalibrate your amplifier gains as well.


*In the event that you really do need a signal booster...*
If for some reason your AVR can’t deliver enough signal to drive the amp to clipping, or if you feel that the high gain settings required by an AVR with marginal output strength are causing undue noise from the amplifier itself, then you’re a candidate for a level compensation device like the ART Clean Box or Aphex 124-A et al. However, you should use it judiciously. A signal booster should have an on-board gain control - if it doesn’t, get a different one. With your pre-amp set at its maximum usable (clean) setting, adjust the signal booster’s gain just enough to get the amplifier’s clip indicators to light, and no more.
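The amount of boost actually needed is usually small, and can be estimated from the voltage shortfall. A sketch with hypothetical numbers (your AVR's actual clean output and your amp's input sensitivity will differ):

```python
import math

def booster_gain_needed_db(avr_max_vrms, amp_input_needed_vrms):
    """Extra gain the booster must add so the AVR's maximum clean
    output just reaches the level the amp needs for full output."""
    return 20 * math.log10(amp_input_needed_vrms / avr_max_vrms)

# Hypothetical: AVR pre-out maxes out at 1 Vrms clean, but the amp
# needs 1.4 Vrms at its current gain setting to reach full output
print(round(booster_gain_needed_db(1.0, 1.4), 1))  # -> 2.9
```

In this example only about 3 dB of boost is needed - which is why the booster's gain control should be nudged up just until the clip indicators light, and no further.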

However, a better solution would be replacing your AVR with one that has adequate output to drive an amp.

As detailed in Part 4, going overboard with a signal booster can increase system noise, especially in the main channels. You don’t want to push a signal booster any higher than needed, and you most certainly _don’t_ want to insert it behind the noisiest component in the signal chain. In other words, if your “ear to the speaker” evaluation (Part 8) showed that your digital processor generated more noise than the AVR, then you should connect the level compensation device after the AVR, not the processor. You don’t want to be further boosting the noise from a noisy component.


*Say “goodbye” to gain structure confusion*
That’s it! You’re done! All that’s left to do at this point is adjust all the AVR's speaker and subwoofer levels using your usual method, so if your receiver or pre-pro has Audyssey or another auto-calibration feature you can run it again at this point.

Wasn’t that amazingly easy? No hand-wringing over pro-level +4 dBu signal required, just a simple matter of lining up the pre amp and amp’s points of clipping.

If for some reason you end up with distortion in actual use, it will most likely mean the AVR or pre amp is reaching its limits before the amplifier (i.e. you’ll hear the clipping before the amplifier’s clip lights activate). This could be the case if your front end bypasses all internal equalization when in "Direct" or "Stereo" mode. All that needs to be done in this case is re-calibrate the gain structure with a lower AVR volume-setting to get the desired results. 

In the end, what you don’t want to see is the amplifier’s clip indicators lighting up every time an explosion hits in an action movie. Some people seem to worry that they’re not getting all of the amp’s power output if they don’t see those clip lights blinking all the time. No, what you want is to _never_ see the amplifiers clip. If they do, it’s time to think about moving up to more power.

Please keep in mind that these are *general* guidelines. It’s impossible for me to conceive of or address every system configuration or scenario in existence. In fact, I intentionally kept things on the “general” side to cover the broad middle ground. However, by following these guidelines HT enthusiasts should now have the tools to determine for themselves what their system needs.


Attached below is a step-by-step guide for the gain structure process. You can print it off to have close at hand when performing the exercise.


Information culled from numerous resources including Wikipedia, ProSoundWeb, Tape Op Message Board, Gearslutz and Rane Corporation’s on-line reference library.

Thanks to brucek and Ricci of HTS, Bob Lee of QSC, liko81 of Church Media Community, and the late Brad Weber of Muse Audio/Video (and numerous professional audio boards) for their assistance with this article.


----------



## Wayne A. Pflughaupt

*Downloads for Gain Structure Reference Test Signals*
_*Please note that the links below are broken since the recent Forum upgrade. Anyone needing these test signals, PM me and I'll email them to you.*_

Here are the test signals needed for gain structure tests and evaluations. In most instances these will need to be downloaded to your computer and burned to a disc. If your browser does not automatically give a “Save” prompt when you click a link, right-click the link and choose “Save Target As.”


1 kHz Sine Wave 0 dBFS Reference Signal

60 Hz Sine Wave 0 dBFS Reference Signal

36 Hz Sine Wave 0 dBFS Reference Signal

Pink Noise Signal


----------



## Wayne A. Pflughaupt

Please refer any comments or questions about this article, as well as questions about achieving a gain structure for your system, to the Home Theater Gain Structure Discussion Thread.

Regards,
Wayne


----------

