The periodical Nature reports on research by a social psychologist which shows that judgements about the quality of a musical performance are influenced more by what is seen than by what is heard. The remit of that somewhat superficial research was the impact of body language, which means it did not consider a little-known but far more important link between the eyes and sound. It has long been a puzzle why the high-order harmonics produced by fine instruments such as Stradivarius violins, extending beyond the upper limit of human hearing, make the music sound better. Similarly, there has been no explanation as to why extending the frequency response of an audio system beyond the upper limit of human hearing improves the sound quality. But recent medical research has shown that our eyes are sound as well as vision transducers, and that the eyes play an important role in passing ultrasound to the brain. While the upper limit of human hearing ranges from 15 to 18 kHz depending on age, the frequency response of the eye extends beyond 50 kHz - see graph above. In these ultrasonic regions the eye is not producing conventional sounds but is feeding sensory information to the brain which becomes a key part of the cognitive process.
The role of the eye as a sound transducer is medically proven; here is a link to research published in The International Tinnitus Journal. These findings open up many paths which this post can only hint at. Although top-end audio systems have frequency responses that extend beyond 20 kHz, they come nowhere near matching the almost flat response to 50 kHz reported in the referenced article. Which may explain why even the best audio systems never quite seem to replicate the experience of live music. When it comes to the ubiquitous compressed audio file formats such as MP3, the frequency response is further curtailed. Which may explain why classical music fails to connect with the MP3 generation. And that is before we factor in that headphones have become the default way of listening to music, and headphones remove the eyes completely from the listening process. Returning to live music: ultrasound is highly directional, which may explain why watching a performer closely seems to enhance the music.
Another fascinating possibility hinted at by this research is that John Cage's 4' 33'' is in fact an 'ultrasound symphony', with the absence of conventional musical sounds allowing the brain to focus on ambient ultrasound. And the concept of the eye as a multi-media transducer cross-references to a recent post on how cats can switch from one channel (hearing) to another processing track (sight). This crossing of channels is scientifically described as synaesthesia: the amalgamation of different sensory channels which usually function quite separately. Paths converge here, as in the most common form of human synaesthesia sounds are perceived as images. So does the discovery that the eye is an audio transducer explain why synaesthesia is common among musicians?
But most importantly - and I do think this is important - the Nyquist theorem, which is used to determine the sampling rate for digital audio formats, states that the maximum frequency that can be represented at any given sampling rate is half the sampling rate. Which is why CDs use a 44.1 kHz sampling rate: it gives a frequency response extending to 22.05 kHz, just beyond the limits of conventional hearing. But research now shows that the brain responds to ultrasound beyond 50 kHz; so the data cut-off at 22.05 kHz may explain the perceived shortcomings of digital audio. And the absence of a 22.05 kHz cut-off in analogue LPs may explain why vinyl is making a comeback.
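The Nyquist arithmetic above can be sketched in a few lines of Python; the function name is my own, but the relationship - maximum representable frequency equals half the sampling rate - is the standard one:

```python
# A minimal sketch of the Nyquist limit discussed above.
def nyquist_limit(sample_rate_hz: float) -> float:
    """Highest frequency (Hz) representable at a given sampling rate."""
    return sample_rate_hz / 2.0

print(nyquist_limit(44_100))  # CD audio: 22050.0 Hz, just beyond conventional hearing
print(nyquist_limit(96_000))  # a 96 kHz rate reaches towards the 50 kHz region
```

Put the other way round, capturing content out to 50 kHz would require a sampling rate of at least 100 kHz.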
This extract from the conclusions to Martin Lenhardt's paper for The International Tinnitus Journal opens up a wealth of possibilities:
In regard to music recording and reproduction, more than doubling the sampling rate (95 kHz/24 bits) will extend the audible frequency range that can be coded in the eighth nerve and will result in a gain in linearity and reduction in quantizing errors, factors that will improve music quality.

I came across Martin Lenhardt's research paper while exploring the link between audio file format and sound quality. Inevitably my summary is simplistic, but further research on the role of ultrasound in music listening may help us understand why classical music is all too often lost in transmission.
Personal headphones could be supplemented or replaced with bone conduction transducers, with frequency responses extending to at least 50 kHz. Such transducers are already in use for medical treatment of tinnitus and can be readily modified for personal musical use (see Fig. 4).
Musical harmonic information is coded by place on the basilar membrane and temporally in neural firing. Ultrasound might contribute to the musical harmonic structure and provide more high-frequency treble emphasis in instruments, such as the cymbals, triangles, trumpets, violins, and oboes.
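Lenhardt's conclusion above mentions a "reduction in quantizing errors" at 24 bits. That part can be sketched with the textbook rule of thumb for an ideal quantizer driven by a full-scale sine wave (a simplification - real converters fall short of these theoretical figures, and the function name is my own):

```python
# Rule of thumb: ideal N-bit quantizer SNR ≈ 6.02 * N + 1.76 dB (full-scale sine).
def quantization_snr_db(bits: int) -> float:
    """Theoretical signal-to-noise ratio in dB for an ideal N-bit quantizer."""
    return 6.02 * bits + 1.76

print(round(quantization_snr_db(16), 1))  # 16-bit CD audio: 98.1 dB
print(round(quantization_snr_db(24), 1))  # 24-bit recording: 146.2 dB
```

Each extra bit lowers the quantization noise floor by roughly 6 dB, which is the "reduction in quantizing errors" the paper refers to.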
* Part two of this post, How classical music was covertly dumbed down, is now available.