Mythical 20-bit CDs! Part I
There is a specification document that was produced by the developers of the compact disc back in 1982 called the Redbook (there have been lots of other colors added to the spectrum of colored specification documents but this was the first!). The Redbook provides detailed information on both the physical and electrical characteristics of CDs. If you wanted to get into the CD business as a replicator or a Digital Audio Workstation software house, you had to purchase a copy of the Redbook specification for $5000…and yes, I did get a copy (although without the huge cost!)
It made for very interesting reading. The physical parameters are laid out in a chapter so that people who manufacture stamping lines would know how much polycarbonate to inject into a CD mold and optical drive designers would know how big to make their trays. The pit size and density, mastering requirements, reflectivity of the aluminum layer and the procedures for making stampers were all contained in the Redbook specification. It is the Bible for compact discs, period.
My interest was in the data structure of the discs. The audio is divided into blocks of 1/75th of a second that are used to address individual locations on the disc for Tracks and Indexes. There are spaces at the start of every track, before the start of the first index, that are called “pre-gaps,” and lots of other boring details about the structure of the discs that matter to someone. For me, I wanted to be able to tuck computer data in with the music data without a standard CD player finding the data and trying to play it as sound. This was a major issue back in the late 90s and was called the “track 1” problem. I’ll write a whole post about that adventure at a later time…but I was the first to make a disc that solved the issue.
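If you like to see the arithmetic, here’s a quick sketch (in Python, with function names I’ve made up for illustration) of how those 1/75-second blocks turn into the familiar minutes:seconds:frames addresses. The only number that comes from the spec is 75 blocks per second.

```python
FRAMES_PER_SECOND = 75  # the Redbook divides each second of audio into 75 blocks

def msf_to_frames(minutes: int, seconds: int, frames: int) -> int:
    """Convert a minutes:seconds:frames address into an absolute block count."""
    return (minutes * 60 + seconds) * FRAMES_PER_SECOND + frames

# A track that starts at 02:30:45 begins 11,295 blocks into the program area.
print(msf_to_frames(2, 30, 45))  # 11295
```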
The electrical characteristics were also spelled out in the Redbook. The coding scheme for the data words and the sample rate were established after some wrangling between the Japanese engineers and other electrical engineering professionals. They talked about using only 14-bit words but thankfully settled on 16 bits. The sample rate of 44.1 kHz was somewhat new to audio at the time. It’s a derivative of the video-based rate of 44.056 kHz, which was used on the first commercially available digital audio recording system…the heralded Sony F1 processors. They took care of the audio conversion (AD and DA) and sent video-bandwidth signals to a conventional Beta or VHS video machine. I have lots of these tapes in my archives and I even have a SONY 601 (the AC-powered component version of the F1). Now all I have to do is find a functioning Sony Beta machine…that’s the real challenge.
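For the numerically inclined, the usual explanation for those odd-looking rates is that the video-based PCM adapters stored three samples per video line. The line counts below are the commonly cited figures rather than anything pulled from the Redbook itself, so treat this as a back-of-the-envelope sketch:

```python
# Back-of-the-envelope arithmetic for the video-derived sample rates.
# (Line counts are the commonly cited values, not taken from the Redbook.)
SAMPLES_PER_LINE = 3

ntsc_rate = SAMPLES_PER_LINE * 245 * (60 / 1.001)  # 245 usable lines/field, 59.94 fields/s
pal_rate = SAMPLES_PER_LINE * 294 * 50             # 294 usable lines/field, 50 fields/s

print(round(ntsc_rate))  # ~44056 Hz
print(pal_rate)          # 44100 Hz
```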
Anyway, the world was introduced to an audio format in the fall of 1982 with the arrival of the first Sony CD player…the CDP-101. The machine AND all subsequent machines have been built to the Redbook specification. This means that the best they can muster in terms of playback fidelity is 16 bits at a 44.1 kHz sample rate. There is nothing else they can do!
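Those two numbers pin down the CD’s data rate exactly. A couple of lines of Python make the point (the 74-minute figure is just the nominal playing time of an early disc, not anything from the spec chapter I quoted above):

```python
SAMPLE_RATE = 44_100   # samples per second, per channel
BITS_PER_SAMPLE = 16
CHANNELS = 2

bit_rate = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS  # 1,411,200 bits per second
byte_rate = bit_rate // 8                            # 176,400 bytes per second

# A nominal 74-minute disc holds roughly 783 MB of raw audio data.
print(bit_rate, byte_rate * 74 * 60)
```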
HDCD did create converters and machines that would encode and decode a couple of extra bits’ worth of data in the least significant bits of the 16-bit linear digital words…and it worked. But it required special equipment…outside of the Redbook specification.
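Just to illustrate the general idea of tucking side-channel information into the least significant bits, here’s a toy example. This is emphatically not the actual HDCD code…it only shows how a hidden bit can ride along in ordinary 16-bit PCM samples.

```python
# Toy example only -- not the real HDCD scheme. It just shows how a hidden
# bit can ride in the least significant bit of a 16-bit PCM sample.
def hide_bit(sample: int, hidden: int) -> int:
    """Replace the LSB of a 16-bit sample with one bit of side-channel data."""
    return (sample & 0xFFFE) | (hidden & 1)

def recover_bit(sample: int) -> int:
    """Read the hidden bit back out; a normal player just plays it as audio."""
    return sample & 1

coded = hide_bit(0x1234, 1)
print(hex(coded), recover_bit(coded))  # 0x1235 1
```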
I looked into HDCD a while back and decided that its process is analogous to Dolby B or C NR for tape…but vastly more refined and with completely different technology. In essence, it seems to involve dynamically compressing low-level information prior to disc production, then reversing it using the HDCD playback circuit in the consumer’s player. Maybe after reading Part II, I will know if I was on the right track. 🙂
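To make the Dolby analogy concrete, here’s the compress-then-expand idea in its most generic form. I’ve used a textbook μ-law compander as a stand-in…which is certainly not what HDCD actually does…but it shows how boosting low-level detail before the disc is cut, and exactly undoing it in the player, can round-trip cleanly:

```python
import math

MU = 255.0  # standard mu-law constant; any compander constant would do here

def compress(x: float) -> float:
    """Boost low-level signals before mastering (like a tape NR encoder)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y: float) -> float:
    """Exact inverse, applied in the player, restoring the original level."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

quiet_sample = 0.01
print(expand(compress(quiet_sample)))  # ~0.01 -- the round trip is transparent
```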