HIFICRITIC Volume 8 No. 3
In recent months the audio media seems to have latched onto the term High Definition or HD Audio, which appears to mean whatever you're trying to sell. For digital audio it commonly refers to extended bandwidth recordings with a typical dynamic range capability of up to 120dB. That's quite a scary figure, as it represents a ratio of one million between the signal voltages of the quietest and the loudest sounds. Compare that to the days when we would happily listen to vinyl with a dynamic range of 60dB (a mere one thousand times). This leads to two questions: can we realistically design a playback system that good; and if we can, do we need to?

In technical terms the dynamic range is the difference between the maximum output level and the noise floor of the system. So if a system is capable of delivering 100W into 8 Ohms, its maximum output level will be about 28.3V. If the typical residual noise level is 0.283mV, the ratio between the two voltages will be 100,000 times, which is 100dB.
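For anyone who wants to check the arithmetic, here is a minimal Python sketch of my own; it uses nothing more than the standard decibel conversions quoted above:

```python
import math

def ratio_to_db(voltage_ratio):
    """Voltage ratio to decibels: dB = 20*log10(V1/V2)."""
    return 20 * math.log10(voltage_ratio)

def db_to_ratio(db):
    """Decibels back to a voltage ratio."""
    return 10 ** (db / 20)

v_max = math.sqrt(100 * 8)           # 100W into 8 Ohms: ~28.3V RMS
v_noise = 0.283e-3                   # 0.283mV residual noise
print(ratio_to_db(v_max / v_noise))  # ~100dB dynamic range
print(db_to_ratio(120))              # 120dB -> a ratio of 1,000,000
print(db_to_ratio(60))               # 60dB  -> a mere 1,000
print(db_to_ratio(-120) * 100)       # -120dB as a percentage: 0.0001%
```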
Delving further into this definition, noise is basically everything except our input signal, so that now includes all the distortion products (harmonic and intermodulation) plus all the noise sources, including mains hum and electromagnetic radiation. So for faithful HD Audio replay, all the distortion products need to be below -120dB (0.0001%).

I reckon most people set their listening level so that the ambient noise on a CD sits just a smidgen above the listening room's own noise level, typically around 50dB SPL. It's slightly reassuring to hear a hint of something before the music starts, rather than getting a heart-attack-inducing slug of music from a silent background. So with the lower end of our volume range set, it's simply a case of ensuring that all the loudest sounds are reproduced without clipping. I say simply, but in fact there's nothing simple about it.
Stick an oscilloscope across the loudspeaker terminals when playing music at a 'realistic' level, and it's easy to see the peaks being chopped off when the amplifier runs out of volts. Obviously this happens more frequently with a 40W amplifier than with a 1000W one, and in my experience, all other things being equal, the higher power amplifier does sound more convincing.

Actually this is fairly easy for an engineer to prove. Take a powerful amplifier and wire its output stage to a power supply which can be separately controlled. By winding the supply voltage up and down, the maximum output voltage swing (and hence the rated power output) can be controlled, and any difference heard, if there is a difference to be heard. The trouble is that to win a significant increase in dynamic range, the required amplifier powers tend to run away from you. Let's start with a typical amplifier of 50W output into 8 Ohms (put another way, 20V into 8 Ohms). To increase the dynamic range by 20dB, the maximum output voltage must rise tenfold, to 200V. That translates into 5,000W, for what sounds on paper like a relatively small improvement! It all too clearly illustrates how little difference the power rating makes. So if you swap your 50W amplifier for a 200W model, how much dynamic range do you gain? You have doubled the maximum output voltage, so you gain just 6dB.
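The runaway scaling is easy to verify with a back-of-envelope sketch (mine, and purely illustrative):

```python
import math

def power_for_db_gain(p_watts, db_gain):
    """Power needed to raise the clipping ceiling by db_gain dB.
    Voltage scales by 10**(dB/20), so power scales by 10**(dB/10)."""
    return p_watts * 10 ** (db_gain / 10)

def db_gain_from_power(p_new, p_old):
    """Dynamic range gained from a power increase, in dB."""
    return 10 * math.log10(p_new / p_old)

print(power_for_db_gain(50, 20))    # +20dB over 50W: 5,000W
print(db_gain_from_power(200, 50))  # 200W vs 50W: ~6dB
print(db_gain_from_power(350, 25))  # 350W vs 25W: ~11.5dB (see the
                                    # Phase Linear story below)
```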
Often the differences heard after such a swap have more to do with the increased power supply stiffness and higher output current capability of most big amplifiers. And a lot can depend upon the music recordings. Much mainstream music is now so heavily limited and compressed that the peaks have already gone; you simply set the volume to an insanely loud level and enjoy the 40dB dynamic range. With more naturally recorded music, though, there's a huge benefit in preserving the peaks.

I can still recall my first experience of the Phase Linear 700
amplifier in 1971. This first 'Super Amp' pumped out well over
350W/ch and became the backbone of many rock band PAs. In many ways it was an
awful amplifier that would self-destruct at the slightest provocation, but it
would swing an awful lot of volts at its output terminals. When substituted for the more typical 25W amplifiers of the era it could therefore increase dynamic range by a worthwhile 11dB or so, a difference you could really hear, even if it was hard to describe.

Some years earlier, when I had first discovered hi-fi, most good amplifiers used valves and had quite modest power outputs, designed on the assumption that they would regularly be driven into clipping. Part of the design brief was to ensure that the clipping was gradual and 'soft'. The audible effect was consequently more like compression and thickening than harshness, and it acted as a warning to back off the volume.

But the advent of transistor amplifiers brought higher average
power outputs, and designers ceased to worry about clipping. However, its effects were now much worse, because the higher levels of negative feedback around a transistor amplifier result in very 'hard' clipping. The application of feedback extends the bandwidth, lowers the output impedance, lowers the distortion, and lowers the gain. But when the output clips, the loudspeaker line sits at one of the power supply rails, so no signal flows back down the feedback loop. The gain of the amplifier then shoots up a hundredfold (or even a thousandfold), bringing a degree of uncontrolled chaos to the amplifier's operation. The end result is unpleasant, nasty clipping.
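The difference between the two behaviours is easy to illustrate numerically. The sketch below models no particular amplifier; a tanh curve simply stands in for valve-style gradual limiting, against the abrupt ceiling of a hard-clipped feedback amplifier:

```python
import math

RAIL = 1.0  # normalised supply rail voltage

def hard_clip(x):
    """Feedback amplifier at the rail: the output simply slams into it."""
    return max(-RAIL, min(RAIL, x))

def soft_clip(x):
    """Gradual 'valve-style' limiting: gain falls smoothly near the rail."""
    return RAIL * math.tanh(x / RAIL)

# Drive both well into overload and compare the waveform tops
for x in (0.5, 1.0, 1.5, 2.0):
    print(f"in {x:.1f} -> hard {hard_clip(x):.2f}, soft {soft_clip(x):.2f}")
# hard: 0.50 1.00 1.00 1.00  (flat top with an abrupt corner)
# soft: 0.46 0.76 0.91 0.96  (peaks gently rounded instead)
```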
Does the fact that an amplifier won't play the music really loud matter that much? Yes it does. A few years back an AES demonstration used a 250W amplifier and low-efficiency loudspeakers (84dB/W/m). With percussive music (piano etc.), clipping was occurring even when the average power output was only about 2W. In fact, to reproduce a piano at a realistic (109dB) level on that system, you'd need a 325W amplifier to avoid clipping. Score one for high-efficiency loudspeakers, because with 90dB/W/m models that power requirement drops to under 100W/ch.
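Those power figures follow directly from the sensitivity ratings. Here's the arithmetic as a sketch (room gain is ignored, so treat the numbers as indicative):

```python
def power_for_spl(target_db, sensitivity_db_per_w_m):
    """Amplifier watts needed to reach target_db SPL at 1m,
    given loudspeaker sensitivity in dB/W/m."""
    return 10 ** ((target_db - sensitivity_db_per_w_m) / 10)

print(power_for_spl(109, 84))  # ~316W for the 84dB/W/m demo speakers
                               # (the 325W quoted allows a little margin)
print(power_for_spl(109, 90))  # ~79W with 90dB/W/m models
```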
I remember addressing the problem when I was first at Cambridge Audio in the 1970s. We were starting to see lower-efficiency loudspeakers, and our 25W to 60W amplifiers were audibly clipping in some demonstrations. So when we introduced the innovative new Cambridge Classic One model, I underpinned its circa 80W/ch output with a novel compression loop using FETs. As the output came within 2V or 3V of the supply lines, the gain was reduced, so the amplifier never quite clipped and signal peaks were gently rounded. This avoided hard clipping, and the amplifier's operation stayed controlled at all times. Listening sessions comparing the sound with the circuit activated and de-activated clearly demonstrated its benefits, and we certainly seemed to be onto something worthwhile. Unfortunately the Classic One launch coincided with the closure of the St. Ives factory, the loss of the team operating there, and the move to Byfleet. Amid the consequent chaos this amplifier was shelved, and only the first 100 or so were ever built. (Yet another case of what might have been!)

While on the subject of lost causes, at the other extreme
I'll mention an amplifier technique I invented for Canon Audio in the early 1990s. This used a conventional 50W amplifier but added a second, high-voltage supply which allowed more than 500W of output to be sustained for short periods. Because this supply only had to maintain that power for a couple of seconds, it could be made very cheaply, and of course the heatsinks and so on only had to suit a 50W model. It worked extremely well, but as I recall it died in the belief that the hi-fi magazines might not understand the thinking and would see it as a confidence trick.
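To show why that second supply could be cheap, here's an illustrative calculation; none of these values come from the actual Canon design, and an 8 Ohm load is assumed throughout:

```python
import math

def peak_volts(p_watts, load_ohms=8.0):
    """Peak voltage swing for a sine wave at p_watts into load_ohms."""
    return math.sqrt(2 * p_watts * load_ohms)

print(peak_volts(50))   # ~28V rails suffice for the 50W amplifier
print(peak_volts(500))  # ~89V rails needed for the 500W bursts

# The high-voltage rail only sources energy during brief peaks:
burst_energy_j = 500 * 2          # a 2-second, 500W burst = 1,000J
recharge_s = burst_energy_j / 50  # a modest 50W-rated supply refills
print(recharge_s)                 # that in ~20s between crescendos
```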
Returning to the originally posed question, can we build a system with an 'HD' 120dB dynamic range? It's quite a challenge. 100dB is easy; 110dB can be achieved with a bit of care; squeezing out that last 10dB is going to be tough, but no doubt somebody will do it, if only for the kudos. And do we need to? Well, the effect of amplifier clipping is quite insidious, yet most people go through life unaware of what is happening; unaware that the 'better sounding' amplifier is simply clipping less often, or perhaps clipping in a softer manner.

For myself, I've always heard the benefits of doing the calculations and then buying the bigger amplifier. But I will admit that in one of my rooms I have a system with high-efficiency loudspeakers (better than 100dB/W/m) and a pitifully small 20W amplifier. And you know what? It just never clips. And that could just be why it is one of my favourite systems.