Many researchers believe that music and language are deeply connected in the brain. However, whether music and language processing rely on exactly the same neural resources is an important and fascinating area of inquiry in the neuroscience of music.
Language and music have a lot in common. One similarity lies in their structure: both speech and music are complex sound signals built from discrete elements combined according to a set of rules. Given this parallel, it would make sense for their processing to be closely integrated in the brain. Such integration could also explain why musical training appears to enhance language skills and why music therapy shows promise in treating language disorders such as dyslexia. The connections between music and language seem so extensive that some authors have gone so far as to speculate that speech may actually have evolved from music.
However, there is also evidence of dissimilarities between music and speech in the brain. Most importantly, studies of patients with brain damage have found that some patients have great difficulty with language processing but relatively intact music perception, while other patients show the opposite pattern. In neuropsychology such a pattern of symptoms is known as a double dissociation, and it is considered strong evidence that the two skills (here, music and language processing) rely on distinct neural mechanisms.
One mystery that has puzzled neuroscientists is that despite the convincing evidence for music and language selectivity obtained from patients, very few functional magnetic resonance imaging (fMRI) studies in healthy subjects have found music- and language-specific responses in the auditory processing areas of the brain. Until now.
A study by researchers Norman-Haignere, Kanwisher, and McDermott published in the prestigious journal Neuron suggests that speech and music are processed in distinct regions of the auditory cortex, the area of the outer layer of the brain responsible for sound processing. In this paper, the authors suggest that one reason why music and speech selectivity in the auditory cortex has so far remained elusive in fMRI studies is that each voxel (one of the tiny three-dimensional pixels that make up an MRI image) contains a large number of brain cells. If speech- and music-specific cells reside in such close vicinity that they co-occupy the same voxel, the standard way of analyzing fMRI data cannot tell these neural populations apart.
In the study, the authors presented their subjects with a diverse set of natural sounds, including music and speech, and used a novel, hypothesis-free voxel decomposition method to identify the distinct responses of the different groups of brain cells occupying each voxel. In other words, this analysis approach enabled the researchers to look for groups of neurons that preferentially respond to either music or speech, even when those groups share the same voxels.
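The intuition behind such a decomposition can be illustrated with a toy simulation. This is not the authors' actual analysis pipeline; it is a minimal sketch in which each voxel's response is modeled as a nonnegative mixture of a few underlying component response profiles, recovered here with plain nonnegative matrix factorization (Lee–Seung multiplicative updates). All names and numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two neural populations ("speech"- and "music"-selective),
# 100 voxels, 12 stimuli (6 speech clips, then 6 music clips).
speech_profile = np.array([1.0] * 6 + [0.0] * 6)  # responds only to speech stimuli
music_profile = np.array([0.0] * 6 + [1.0] * 6)   # responds only to music stimuli
H_true = np.vstack([speech_profile, music_profile])       # components x stimuli

# Each voxel mixes the two populations with random nonnegative weights,
# so no single voxel is purely speech- or music-selective.
W_true = rng.uniform(0.0, 1.0, size=(100, 2))             # voxels x components
V = W_true @ H_true + 0.01 * rng.uniform(size=(100, 12))  # measured voxel responses

# Nonnegative matrix factorization via multiplicative updates: V ~ W @ H
k = 2
W = rng.uniform(0.1, 1.0, size=(100, k))
H = rng.uniform(0.1, 1.0, size=(k, 12))
for _ in range(1000):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

# Each recovered component profile should closely match one of the true
# selectivity profiles, even though every voxel contained a mixture.
for h in H:
    r = max(np.corrcoef(h, speech_profile)[0, 1],
            np.corrcoef(h, music_profile)[0, 1])
    print(f"best match to a true selectivity profile: r = {r:.2f}")
```

The key point the sketch captures is that selectivity invisible at the voxel level can still be recovered statistically, because each stimulus excites the hidden populations in different proportions across many voxels.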
Indeed, one set of neurons was tuned to respond to English and foreign speech and, to a lesser extent, to music with lyrics. Another set, in turn, was highly responsive to music but less responsive to speech. Importantly, the music- and speech-selective neurons were located in different brain areas. Additional analyses indicated that the selectivity of these neuron groups did not simply reflect obvious acoustic differences between the music and speech sounds, for example whether or not the sounds contained rapid changes in pitch. This suggests that there are regions in the auditory areas of the brain devoted to the processing of distinct sound categories such as music and speech.
The results of this study might explain why music processing is sometimes selectively impaired after brain damage. The study also demonstrates that the limitations of fMRI – the workhorse of modern brain imaging – can be circumvented with clever use of novel analysis methods. As one of the fathers of cognitive neuroscience, Michael Gazzaniga, says in his recent memoir, much of the low-hanging fruit of cognitive neuroscience may have already been picked. Applying novel methods is crucial for furthering our understanding of how music and speech are processed in the brain, and more generally of how the mind emerges from the functioning of brain matter.
Written by Ketki Karanam
Gazzaniga, M. S. (2015). Tales from Both Sides of the Brain: A Life in Neuroscience. New York: Ecco/HarperCollins.
Mithen, S. (2005). The Singing Neanderthals: The Origins of Music, Language, Mind and Body. London: Weidenfeld & Nicolson.
Norman-Haignere, S., Kanwisher, N. G., & McDermott, J. H. (2015). Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition. Neuron, 88(6), 1281–1296. doi:10.1016/j.neuron.2015.11.035
Patel, A. D. (2003). Language, music, syntax and the brain. Nat Neurosci, 6(7), 674–681. doi:10.1038/nn1082
Peretz, I., & Coltheart, M. (2003). Modularity of music processing. Nat Neurosci, 6(7), 688–691. doi:10.1038/nn1083