IMAGINE sitting in a noisy restaurant, across the table from a friend, having a conversation as you eat your meal. To communicate effectively in this situation, you have to extract the relevant information from the noise in the background, as well as from other voices. To do so, your brain somehow “tags” the predictable, repeating elements of the target signal, such as the pitch of your friend’s voice, and segregates them from other signals in the surroundings, which fluctuate randomly.
The ability to focus on your friend’s voice while excluding other noises is commonly referred to as the cocktail party effect. Although the effect was first described more than 50 years ago, the brain mechanisms involved remain unknown. But a new study by researchers at Northwestern University now shows that activity in regions of the brainstem is modulated by specific characteristics of the speaker’s voice, and that this modulation is impaired in children with dyslexia.
Animal experiments have shown that auditory regions of the brainstem, such as the inferior colliculus, are involved in processing sound signals within noisy environments. These structures receive inputs from the cerebral cortex, which are thought to amplify relevant information in the sound signal while inhibiting irrelevant information, thus increasing the signal-to-noise ratio. The activity of brainstem neurons is known to be dynamic and modulated by experience. The new work shows that this modulation occurs online, during speech perception itself, rather than over a longer timescale.
Bharath Chandrasekaran and his colleagues at Northwestern’s Auditory Neuroscience Laboratory developed a non-invasive method for recording the electrical activity of the brainstem, in order to investigate whether responses to auditory stimuli are modulated by the context of speech. In the first experiment, 21 children without neurological abnormalities or learning disabilities were played a synthesized speech syllable (“da”) while they watched a video of their choice. The syllable was presented in either a repetitive and predictable manner, or in a highly variable and unpredictable one.
The response of the auditory brainstem was found to depend upon the context in which the speech sound was played, such that the neural representation of the sound became fine-tuned to the repetitive syllable, but not the variable one. Repetition of the syllable induced plasticity in the brainstem, so that the response was automatically sharpened to elements of the signal related to voice pitch. This modulation is crucial for the ability to perceive speech in a noisy environment, because pitch is one of the characteristics used to distinguish between different voices. The adaptation of the brainstem response underlies the listener’s ability to tag the speaker’s voice and to segregate it from background noise.
The same experiment was then repeated, but this time the children were divided into two groups of 15 according to their reading ability, as defined by a standardized test of word reading efficiency. The “good readers” group consisted of children from the first experiment, all of whom had scored 115 or more on the reading test. The “bad readers” group consisted of 15 others, who had obtained scores below 85, had previously been diagnosed by a physician as having a learning impairment, and attended a private school for the learning disabled. In the “good readers” group, the response of the auditory brainstem was again found to be modulated by the repetitive sound. However, no adaptation of brainstem activity was observed in the group of poor readers.
Earlier behavioural studies suggested that a core deficit of developmental dyslexia is the inability to exclude background noise from the incoming stream of auditory information. The new work confirms this and shows that the deficit arises because neurons in the auditory brainstem do not fine-tune their responses to speech cues. As a result, dyslexic children apparently cannot filter out background noise, and so have difficulty paying attention in the noisy classroom environment. The new findings suggest that such children would benefit from sitting at the front of the room or wearing noise-reducing headphones to help them concentrate, and may provide a new way of diagnosing the condition. The researchers are also investigating the possibility that musical training might improve speech-in-noise perception.
Chandrasekaran, B., et al. (2009). Context-Dependent Encoding in the Human Auditory Brainstem Relates to Hearing Speech in Noise: Implications for Developmental Dyslexia. Neuron 64: 311-319. DOI: 10.1016/j.neuron.2009.10.006.