Evaluating the Neural Correlates of Auditory Attention During Speech Perception

Description

The cocktail party effect describes the brain’s natural ability to attend to a specific voice or audio source in a crowded room. Researchers have recently attempted to recreate this ability in hearing aid design using brain signals from invasive electrocorticography electrodes. The present study aims to identify neural signatures of auditory attention that could achieve the same goal with noninvasive electroencephalographic (EEG) methods. Five human participants completed an auditory attention task in which they listened to a series of four syllables followed by a fifth syllable (the probe syllable) and indicated whether the probe syllable was among the four syllables played immediately before it. Trials were divided into two conditions, with the syllables played either in silence (Signal) or in background noise (Signal With Noise), and both behavioral and EEG data were recorded. EEG signals were analyzed with event-related potential and time-frequency analysis methods. The behavioral data indicated that participants performed better during the Signal condition, consistent with the listening challenges demonstrated by the cocktail party effect. The EEG analysis showed that inter-trial coherence in the alpha band (9-13 Hz) could potentially index characteristics of the attended speech signal. These preliminary results suggest that EEG time-frequency analysis has the potential to reveal the neural signatures of auditory attention, which may allow for the design of a noninvasive, EEG-based hearing aid.
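The inter-trial coherence (ITC) measure mentioned above quantifies how consistently the phase of a frequency band (here, alpha, 9-13 Hz) aligns across trials at each time point: an ITC near 1 indicates strong phase locking to the stimulus, while an ITC near 0 indicates random phase. The following is a minimal sketch of this computation, not the study's actual analysis pipeline; it assumes a band-pass filter plus Hilbert transform for phase extraction, and the function name, filter order, and synthetic data are illustrative only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_coherence(trials, fs, band=(9.0, 13.0)):
    """Compute ITC for epoched EEG data.

    trials : array of shape (n_trials, n_samples), one channel's epochs
    fs     : sampling rate in Hz
    band   : frequency band of interest (low, high) in Hz

    Returns an array of shape (n_samples,): the magnitude of the mean
    unit-length phase vector across trials at each time point.
    """
    # Band-pass filter each trial (4th-order Butterworth, zero-phase)
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)

    # Instantaneous phase via the analytic signal
    phase = np.angle(hilbert(filtered, axis=1))

    # Average unit phase vectors across trials; take the magnitude
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Synthetic demo: 20 phase-locked 10 Hz trials with a little noise
fs = 250
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
locked = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal((20, t.size))
itc = inter_trial_coherence(locked, fs)
```

Because the demo trials share the same 10 Hz phase, the resulting ITC stays close to 1 across the epoch; shuffling the phase of each trial would drive it toward 0.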

Date Created
2023-05
Agent

Effects of Error-Detection Training on Speech Motor Learning

Description

When we produce speech movements, we expect a specific auditory consequence; an error occurs when the predicted outcome does not match the actual speech outcome. The brain notes these discrepancies, learns from the errors, and works to reduce them. Previous studies have shown a relationship between speech motor learning and auditory targets: subjects with smaller auditory targets were more sensitive to errors, estimated larger perturbations, and generated larger responses, although these responses were often ineffective and the resulting changes were usually minimal. The current study examined whether subjects’ auditory targets can be manipulated in an experimental setting. We recruited 10 healthy young adults to complete a perceptual vowel categorization task, using a novel procedure in which subjects heard different auditory stimuli and reported each stimulus by locating it relative to adjacent vowels. We found that subjects were less accurate when stimuli were closer to a vowel boundary. Importantly, when visual feedback was provided, subjects were able to improve their accuracy in locating the stimuli. These results indicate that subjects’ auditory targets might be improved, which may in turn improve their speech motor learning ability.

Date Created
2022-05
Agent