Stepping Over the Line: Clipping
Audio recording is not like real-life sound. Engineers are challenged to capture and process acoustic energy…sound waves…with an array of equipment such as microphones, preamplifiers, amplifiers, recorders and more. And unlike our ears, which can accept an incredible range of frequencies and amplitude levels, audio equipment must operate within a set of hard limitations. Stepping outside of those boundaries causes fidelity to suffer…and it happens all too frequently.
Audio recording quality has been steadily improving since those first tentative steps in the latter part of the 19th century. Before electrical amplification and vacuum tubes, acoustic recordings translated the physical movement of a diaphragm into oscillations cut into a piece of lacquer. The dynamic range was constrained by the acoustic horn used to “gather” the sound played by the musicians. Before amplification came along in the late 1920s, it was very hard to generate sounds that would “clip”. The size of the fidelity “box” was pretty small, but consumers loved the novelty of hearing prerecorded music and other programming.
When analog tape recording equipment was introduced in the 1950s, fidelity took a huge leap forward (not to mention re-recordability). Engineers were able to bring the signal from high-quality microphones through rudimentary consoles (using microphone preamplification) into the line-level inputs of early Ampex monophonic decks.
The standard recording level of the time was 185 nWb/m [nanowebers per meter…the measure of magnetic flux used in establishing levels for analog recording]. This number represented the “0” point on the VU meters of your recording machine. Recording levels certainly exceeded this value, but as the level increases beyond 0 VU, the ability of the equipment…and most importantly the tape…to capture that level without distortion decreases. The difference between the 0 level and the point at which 3% total harmonic distortion occurs is called “headroom”.
Any time the level of a recording exceeded the headroom available on the tape, the signal would be “clipped” and the amount of distortion deemed excessive. The challenge faced by audio engineers then and now is to maximize the dynamic range of a recording by keeping levels very close to the maximum headroom without going over. In the analog days, one of the things we all did prior to a take was request a test segment representing the loudest signal the band was going to produce. We would set the recording level so that the incoming signal would peak just below the 0 VU level.
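What clipping does to a signal is simple to model: any sample that exceeds the medium’s limit is flattened to that limit, which is what produces the harsh distortion described above. Here is a minimal sketch in Python (the `hard_clip` name and the normalized ±1.0 limit are my own illustrative choices, not anything from the analog gear discussed here):

```python
def hard_clip(samples, limit=1.0):
    """Clamp every sample to the range [-limit, +limit].

    Samples within the limit pass through unchanged; anything
    beyond it is flattened, adding harmonic distortion.
    """
    return [max(-limit, min(limit, s)) for s in samples]

# A peak at 1.4 exceeds the available headroom and gets flattened:
print(hard_clip([0.5, 1.4, -2.0]))  # [0.5, 1.0, -1.0]
```

Analog tape actually saturates gradually rather than clipping this abruptly, which is one reason mild tape overload can sound more forgiving than digital clipping.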
As analog tape formulations and recording machines improved, headroom increased as well. We’ve moved from figures of 3 dB to 10–12 dB to 15–18 dB and even higher with contemporary machines. The potential for better fidelity using analog tape is greater than it has ever been. Noise reduction systems such as Dolby Spectral Recording or “SR” have improved the available dynamic range even further. Modern analog decks can achieve signal-to-noise ratios that eclipse 90 dB…almost as good as a 16-bit PCM compact disc!
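The comparison to 16-bit PCM can be checked with the standard rule of thumb for quantization noise: each bit buys roughly 6 dB, and a full-scale sine wave adds about 1.76 dB on top. A quick sketch (the function name is mine):

```python
import math

def pcm_snr_db(bits):
    """Theoretical SNR of an ideal n-bit quantizer driven by a
    full-scale sine wave: 20*log10(2**n) + 1.76 ~= 6.02n + 1.76 dB."""
    return 20 * math.log10(2 ** bits) + 1.76

# 16-bit PCM works out to about 98 dB, so a 90+ dB analog deck
# with Dolby SR really is in the same neighborhood:
print(round(pcm_snr_db(16), 1))
```

This is the idealized figure; real converters and real tape machines both fall somewhat short of their theoretical ceilings.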
When I was just beginning my career as a second engineer, we would routinely push recording levels to “elevated” level, or +3 dB above 0 VU (250 nWb/m). It is not uncommon to see levels at +9 dB or more today.
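These elevated-level figures follow from the decibel ratio between the new reference flux and the 185 nWb/m standard. A quick check in Python (`level_db` is a name I’ve made up for illustration):

```python
import math

def level_db(flux_nwb_per_m, ref=185.0):
    """Level in dB of a given fluxivity relative to the
    185 nWb/m '0 VU' reference standard."""
    return 20 * math.log10(flux_nwb_per_m / ref)

# 250 nWb/m comes out to roughly +2.6 dB over 185 nWb/m,
# conventionally described as the "+3" elevated level:
print(round(level_db(250.0), 1))
```

The same formula shows why ever-hotter references demand tape with correspondingly more headroom before saturation sets in.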
Of course, analog tape isn’t the only stage in the signal path where clipping can occur. It is possible to exceed the capabilities of microphones, preamps, channel strips on a console, and output stages, but the recording machines have always been the bottleneck. This remains true even in this era of 24-bit PCM digital equipment.
More on clipping…tomorrow.