Meanings, Messages & Signals

STEREOGRAPHICS

Technicians studied media formats with a view toward raising the excellence of the recording arts. It was quickly recognized that sounds were actually “reprocessed” whenever recorded and played back. Groups of microphones, all connected in series into a single channel, did not record sounds exactly as the human ear hears them. Nor did playback through a single loudspeaker produce room sound the way live performers do.

Distinctions between “monaural” and “stereo” playback were perceived as the new frontier. Human aural capacity provided the model for technical design. Separate left and right channels would bring the medium to a new degree of refinement, possibly raising the art to that “live performance feel”. But stereo recordings, however improved through the decades to come, continued sounding “like recordings”.

The very last improvement in this vein was a dismal failure. Though stereo channels were multiplied and listeners were surrounded with loudspeakers, one always knew the difference between “Quadraphonic” reproductions and live performance. There was never a contest in this discernment. What was wrong with the technology?

NOISE

Because all analog recording media and playback transducers engage in direct physical contact (the stylus dragging in its groove, the tape sliding across its heads), a medium-characteristic “noise” persists in playbacks. Technical analysts of the time therefore focused their engineering attention on eradicating the inherent “noises” of analog. Technicians actually believed that, once these noises were removed, the reproduction of recorded sound would become “life-like”. Eliminate the noise, it was thought, and the “live performance feel” would spontaneously appear.

Engineers therefore developed new noise-eradicating filtration systems. But even DOLBY noise reduction and DBX compression circuits failed to breach the “record” barrier toward life-like musical reproduction. An ancillary development came with the complete revision of loudspeaker technology. Planar speaker technology, whether electrostatic or magnetic, was hailed as the only means for achieving “true graphic reproduction of sound”. But even when playbacks were “spread out” across large theatrical spaces through enormous planar loudspeakers, listeners sensed a mysterious and defined “absence”. Like all the other costly improvements, noise-reduced playback through electrostatic loudspeakers could instantly be discerned from the “live performance feel”.
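
How such companding circuits worked in principle can be sketched briefly. The following toy model (in Python; the actual DOLBY and DBX circuits were proprietary analog designs operating on signal envelopes, so every number and function here is illustrative, not a description of either product) compresses a quiet signal before it meets the tape’s noise floor, then expands it again on playback:

```python
import numpy as np

def compress(x, ratio=2.0):
    # Record-side stage: shrink dynamic range so quiet passages
    # sit further above the tape's noise floor.
    return np.sign(x) * np.abs(x) ** (1.0 / ratio)

def expand(x, ratio=2.0):
    # Playback-side stage: the mirror-image expansion restores the
    # original dynamics and pushes residual tape hiss downward.
    return np.sign(x) * np.abs(x) ** ratio

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
quiet_tone = 0.05 * np.sin(2 * np.pi * 440 * t)  # a quiet passage
tape_hiss = 0.01 * rng.standard_normal(t.size)   # the medium's own noise

plain = quiet_tone + tape_hiss                        # recorded without companding
companded = expand(compress(quiet_tone) + tape_hiss)  # compress, hit tape, expand

def snr_db(clean, noisy):
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

print(f"SNR without companding: {snr_db(quiet_tone, plain):.1f} dB")
print(f"SNR with companding:    {snr_db(quiet_tone, companded):.1f} dB")
```

The expansion stage pushes the hiss down by several decibels in this toy run. Yet, as the text observes, no amount of such noise suppression produced the “live performance feel”.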

Hoping to close the obvious gap between recordings and human experience, undaunted audio engineers continued searching for that missing “audio component”. Their hope was that, once the “missing” audio component could be isolated, the live feel would be restored to recorded material. Failure upon failure.

DIGITAL

Technicians imagined that the inability to reproduce the “live feel” lay in the very MODE of recording itself. “Analog”, they now claimed, was the “real” problem. The digital recording process was an outgrowth of post-War computer technology. It was again believed that digitally recorded sounds would reproduce the “live performance feel”.

Sound vibrations, in the digital mode, are not directly recorded. Digital recording technique “chops” incoming sounds several thousand times per second. This rapid “chop” rate is technically referred to as the “sampling rate”. Faster “chopping” means more accurate harmonic duplication. During each of these momentary “chops”, incoming sound is not directly recorded as vibration; it is converted into a number code, the stream of codes standing in for the continuous sound.
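
A minimal sketch of this “chopping” may clarify the process. In Python, with an illustrative chop rate and code size (not those of any particular commercial format), a fraction of a second of tone becomes a bare list of integers:

```python
import math

SAMPLE_RATE = 8000  # "chops" per second (an illustrative figure)
BIT_DEPTH = 16      # each chop becomes one 16-bit number code

def sample_and_encode(freq_hz, duration_s):
    # Chop a continuous tone into discrete number codes:
    # one signed integer per sampling instant.
    max_code = 2 ** (BIT_DEPTH - 1) - 1  # 32767
    n_samples = int(SAMPLE_RATE * duration_s)
    codes = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE              # the instant of this "chop"
        amplitude = math.sin(2 * math.pi * freq_hz * t)
        codes.append(round(amplitude * max_code))
    return codes

# One millisecond of a 440 Hz tone: eight integer codes, nothing more.
print(sample_and_encode(freq_hz=440, duration_s=0.001))
```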

Continuous digital sampling produces huge chains of number codes, requiring enormous code-memory. DAT tape stores the codes as a continuous series of magnetic impulses. In CD storage, the codes are burned by a fine laser beam into a thin aluminum layer as tiny pits. When the CD is spun, a laser “reads” the tiny pits. Their interrupted codes become sounds when an on-board “tone generator” produces a wavering tone signal. The more accurately the source was “chopped” and encoded, the more accurately the on-board generator reproduces its wavering tone. This wavering signal is perceived by the listener as a reproduction of the original sound source.
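
The storage-and-readback chain can be sketched in the same hedged spirit. Here the codes are simply packed into a continuous run of bytes, standing in for DAT’s magnetic impulses or the CD’s pits; real disc formats add channel coding and error correction, all omitted from this illustration:

```python
import struct

def store(codes):
    # Pack each 16-bit code into the storage medium as raw bytes.
    return b"".join(struct.pack("<h", c) for c in codes)

def read_back(blob):
    # The player's side: recover the integer codes from storage,
    # ready to be handed to the "tone generator" (the DAC).
    return [struct.unpack_from("<h", blob, i)[0]
            for i in range(0, len(blob), 2)]

# A short chain of codes, as produced by the sampling sketch above.
codes = [0, 11100, 20800, 27300, 30900, 31200, 28300, 22400]
blob = store(codes)
assert read_back(blob) == codes  # the round trip is lossless
print(f"{len(codes)} codes -> {len(blob)} bytes of storage")
```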

Because there are no frictional contacts anywhere along the digital recording path, there is no “noise”. The Digital Industry promoted this “noiseless” feature. Noise was presented as a contaminant, for the germ-phobics to dread and hate. But here there was no noise. Clean. Sterile. Pure. Accurate, clean, noiseless, graphic, detailed, sharp, crisp… it was apparent that “audio perfection” had been achieved. Perfection. The summit.

ULTRA-HISS

When digital-eager consumers complained that early CD playbacks sounded “brittle”, engineers raised the sampling rate and established different “CD grades”. It was again believed that both higher sampling rates and higher “graphic detail” would “satisfy the ear”.

Despite this adjustment, numerous highly qualified analysts reported that CD playbacks continued to sound “cold and harsh”. Some stated that their ears actually “hurt” after hearing digital recordings. In addition, a strange “ultra-hiss” became the newly discerned noise of digital ware.

This strange manifestation was thought to be caused by the spaces between the digital pits. The on-board tone generator voided each time-fraction where code was missing from the coded chain; no signal was produced there. This made each coded tone emerge as a sharp peak. The peaks were distinctly perceived by the ear as a shrill hiss, appearing at the sampling rate (some 15,000 times a second). In addition, the coded sequence produced “interrupted” peaks. An abnormality. In the real world, sound is continuous. Real sound is not a series of tight peaks with sharp interruptions. The CD format was not yet perfect!

The strange “ultra-hiss” was a white noise, the result of those interrupted, spiked pitch peaks. The overall abrasive “sonic envelope” produced the “hurt” which many felt in their ears. Designers dutifully modified the tone generators, artificially producing smooth harmonic continuity across the interruptions between pitch peaks. This second “system adjustment” was announced by the Industry as a necessary step in the “new and experimental development” of commercial digital.
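
The account given here (isolated peaks carrying energy up near the sampling rate, softened by filling the gaps between them) can be modeled numerically. In this hedged sketch the chop rate is an illustrative 8,000 per second, a fine time grid stands in for continuous time, and the designers’ fix is reduced to simple interpolation between the peaks:

```python
import numpy as np

FS = 8000     # sampling ("chop") rate, illustrative
FINE = 16     # fine grid points per sampling interval
F_TONE = 440  # the recorded tone

# One second of tone, chopped at FS
n = np.arange(FS)
samples = np.sin(2 * np.pi * F_TONE * n / FS)

# "Interrupted peaks": each code fires once, then silence until the next
spiky = np.zeros(FS * FINE)
spiky[::FINE] = samples

# The designers' fix: smooth continuity between the pitch peaks
fine_t = np.arange(FS * FINE) / FINE  # time measured in sample units
smooth = np.interp(fine_t, n, samples)

def energy_near(signal, freq, rate, width=500.0):
    # Spectral energy within +/- width Hz of freq.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / rate)
    band = (freqs > freq - width) & (freqs < freq + width)
    return spectrum[band].sum()

rate = FS * FINE
print("energy near the sampling rate (the 'ultra-hiss' region):")
print(f"  interrupted peaks: {energy_near(spiky, FS, rate):.0f}")
print(f"  smoothed output:   {energy_near(smooth, FS, rate):.0f}")
```

The interrupted version carries strong spectral images around the chop rate; the interpolated version sharply attenuates them, which is precisely the softening the second “system adjustment” delivered.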

The adjustment managed to soften the CD sound somewhat. But the sound was still “different” to many listeners, who tenaciously clung to their analog technology after making the comparison. Analog still “had something” which digital clearly “lacked”. Listeners could discern several “missing” features which no official lexicon expressed, or was permitted to express.

LITTLE STATUES

A defined and persistent “cold hardness” continued to pervade the overall tone of musical performances from CDs, a problem for which no mere electronic solution existed. Engineers pointed out that digital sounds were now graphically detailed, crystal clear, harmonically accurate, soft on the ears, and noise-free. They emphatically stated that there could be NO criticism. Digital sound was “perfect”. SONY staff writers spread their banter through the public media, implying that anti-digital commentary was pseudo-scientific, tantamount to an admission of stupidity. The sounds, the signals, they declared, were “perfect”. But CD reproductions were nothing like the “live performance feel” which the hype had originally promised.