Music-Remixing Preferences of Prelingual and Postlingual Cochlear Implant Users

Description

The poor spectral and temporal resolution of cochlear implants (CIs) limits their users’ music enjoyment. Remixing music by boosting vocals while attenuating spectrally complex instruments has been shown to benefit the music enjoyment of postlingually deaf CI users. However, the effectiveness of music remixing in prelingually deaf CI users is still unknown. This study compared the music-remixing preferences of nine postlingually deaf, late-implanted CI users and seven prelingually deaf, early-implanted CI users, as well as their ratings of song familiarity and vocal pleasantness. Twelve songs were selected from the most-streamed tracks on Spotify for testing. There were six remixed versions of each song: Original, Music-6 (6-dB attenuation of all instruments), Music-12 (12-dB attenuation of all instruments), Music-3-3-12 (3-dB attenuation of bass and drums and 12-dB attenuation of other instruments), Vocals-6 (6-dB attenuation of vocals), and Vocals-12 (12-dB attenuation of vocals). It was found that the prelingual group preferred the Music-6 and Original versions over the other versions, while the postlingual group preferred the Vocals-12 version over the Music-12 version. The prelingual group was more familiar with the songs than the postlingual group. However, song familiarity ratings did not significantly affect the patterns of preference ratings in either group. The prelingual group also had higher vocal pleasantness ratings than the postlingual group. For the prelingual group, higher vocal pleasantness led to higher preference ratings for the Music-12 version. For the postlingual group, the overall preference for the Vocals-12 version was driven by preference ratings for songs with very unpleasant vocals. These results suggest that the patient factor of auditory experience and the stimulus factor of vocal pleasantness may affect the music-remixing preferences of CI users.
As such, the music-remixing strategy needs to be customized for individual patients and songs.
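The six remixed versions described above amount to applying a fixed per-stem attenuation before summing the stems. A minimal Python sketch, assuming the music has already been separated into stems represented as lists of samples (the stem names and the `remix`/`db_to_gain` helpers are illustrative, not from the study):

```python
# Per-stem attenuations (dB) for the six versions described in the abstract.
# Stem names are illustrative placeholders for separated audio tracks.
VERSIONS = {
    "Original":     {},
    "Music-6":      {"bass": 6, "drums": 6, "other": 6},
    "Music-12":     {"bass": 12, "drums": 12, "other": 12},
    "Music-3-3-12": {"bass": 3, "drums": 3, "other": 12},
    "Vocals-6":     {"vocals": 6},
    "Vocals-12":    {"vocals": 12},
}

def db_to_gain(att_db):
    # An attenuation of D dB scales amplitude by 10 ** (-D / 20)
    return 10 ** (-att_db / 20)

def remix(stems, version):
    """Mix separated stems (name -> list of samples) for one version."""
    atten = VERSIONS[version]
    n = len(next(iter(stems.values())))
    mix = [0.0] * n
    for name, samples in stems.items():
        g = db_to_gain(atten.get(name, 0))  # unlisted stems pass at 0 dB
        for i, s in enumerate(samples):
            mix[i] += g * s
    return mix
```

For example, `remix(stems, "Vocals-12")` scales only the vocal stem by about 0.25 (12-dB attenuation) while leaving the instrument stems untouched.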
Date Created
2024

The Mechanisms of Auditory Training with Cochlear Implant Simulations

Description

Cochlear implants (CIs) restore hearing to nearly one million individuals with severe-to-profound hearing loss. However, with limited spectral and temporal resolution, CI users may rely heavily on top-down processing using cognitive resources for speech recognition in noise, and may change the weighting of different acoustic cues for pitch-related listening tasks such as Mandarin tone recognition. While auditory training is known to improve CI users’ performance in these tasks as measured by percent-correct scores, the effects of training on cue weighting, listening effort, and untrained tasks need to be better understood in order to maximize the training benefits. This dissertation addressed these questions by training normal-hearing (NH) listeners with CI simulations. Study 1 examined whether Mandarin tone recognition training with enhanced amplitude envelope cues may improve tone recognition scores and increase the weighting of amplitude envelope cues over fundamental frequency (F0) contours. Compared to no training or natural-amplitude-envelope training, enhanced-amplitude-envelope training increased the benefits of amplitude envelope enhancement for tone recognition but did not increase the weighting of amplitude or F0 cues. Listeners attending more to amplitude envelope cues in the pre-test improved more in tone recognition after enhanced-amplitude-envelope training. Study 2 extended Study 1 to compare the generalization effects of tone recognition training alone, vowel recognition training alone, and combined tone and vowel recognition training. The results showed that tone recognition training did not improve vowel recognition or vice versa, although tones and vowels are always produced together in Mandarin. Only combined tone and vowel recognition training improved sentence recognition, showing that both suprasegmental (i.e., tone) and segmental (i.e., vowel) cues are essential for sentence recognition in Mandarin.
Study 3 investigated the impact of phoneme recognition training on the listening effort of sentence recognition in noise, as measured by a dual-task paradigm, pupillometry, and subjective ratings. It was found that phoneme recognition training improved sentence recognition in noise. The dual-task paradigm and pupillometry indicated that, from pre-test to post-test, listening effort decreased in the control group without training but remained unchanged in the training group. This suggests that training may have motivated listeners to stay focused on the challenging task of sentence recognition in noise. Overall, non-clinical measures such as cue weighting and listening effort can enrich our understanding of training-induced perceptual and cognitive effects, and allow us to better predict and assess training outcomes.
Date Created
2024

Examining Corrective Responses and Adaptive Responses to Formant Perturbations

Description

The ability to detect and correct errors during and after speech production is essential for maintaining accuracy and avoiding disruption in communication. Thus, it is crucial to understand the basic mechanisms underlying how the speech-motor system evaluates different errors and correspondingly corrects them. This study aims to explore the impact of three different features of errors, introduced by formant perturbations, on corrective and adaptive responses: (1) magnitude of errors, (2) direction of errors, and (3) extent of exposure to errors. Participants were asked to produce the vowel /ε/ in the context of consonant-vowel-consonant words. Participant-specific formant perturbations were applied for three magnitudes of 0.5, 1, 1.5 along the /ε-æ/ line in two directions of simultaneous F1-F2 shift (i.e., shift in the ε-æ direction) and shift to outside the vowel space. Perturbations were applied randomly in a compensation paradigm, so each perturbed trial was preceded and succeeded by several unperturbed trials. It was observed that (1) corrective and adaptive responses were larger for larger magnitude errors, (2) corrective and adaptive responses were larger for errors in the /ε-æ/ direction, (3) corrective and adaptive responses were generally in the /ε-ɪ/ direction regardless of perturbation direction and magnitude, (4) corrective responses were larger for perturbations in the earlier trials of the experiment.
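The participant-specific perturbations described above can be sketched as a vector shift along the /ε-æ/ line in (F1, F2) space, scaled by the magnitude factor. The formant values below are hypothetical placeholders, not the study's data:

```python
# Hypothetical participant-specific mean formants (Hz) for /ε/ and /æ/;
# in the study these would be estimated from each speaker's own productions.
EH = (580.0, 1800.0)   # (F1, F2) of /ε/
AE = (730.0, 1660.0)   # (F1, F2) of /æ/

def perturb_toward_ae(formants, magnitude):
    """Shift (F1, F2) along the /ε-æ/ line by a multiple of the /ε/-/æ/ distance.

    magnitude: 0.5, 1, or 1.5, as in the experiment described above.
    """
    dF1 = AE[0] - EH[0]
    dF2 = AE[1] - EH[1]
    f1, f2 = formants
    return (f1 + magnitude * dF1, f2 + magnitude * dF2)

# A magnitude of 1 moves a canonical /ε/ exactly onto /æ/
print(perturb_toward_ae(EH, 1.0))   # (730.0, 1660.0)
```

A magnitude of 1.5 overshoots /æ/, and the second perturbation direction (outside the vowel space) would use a different shift vector.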
Date Created
2024

A Mixed Reality Platform for Systematic Investigation of the Neural Mechanisms of Multisensory Integration During Motor Planning

Description

Multisensory integration is the process by which information from different sensory modalities is integrated by the nervous system. This process is important not only from a basic science perspective but also for translational reasons, e.g., for the development of closed-loop neural prosthetic systems. A mixed reality platform was developed to study the neural mechanisms of multisensory integration for the upper limb during motor planning. The platform allows for selection of different arms and manipulation of the locations of physical and virtual target cues in the environment. The system was tested with two non-human primates (NHPs) trained to reach to multiple virtual targets. Arm kinematic data as well as neural spiking data from primary motor cortex (M1) and dorsal premotor cortex (PMd) were collected. The task involved manipulating visual information about initial arm position by rendering the virtual avatar arm either in its actual position (the veridical (V) condition) or in a shifted position (the perturbed (P) condition), with small and large shifts tested, prior to movement. Tactile feedback was modulated in blocks by placing or removing the physical start cue on the table (the tactile (T) and no-tactile (NT) conditions, respectively). Behaviorally, errors in initial movement direction were larger when the physical start cue was absent. Slightly larger directional errors were found in the P condition than in the V condition for some movement directions. Both effects were consistent with the idea that erroneous or reduced information about initial hand location led to movement-direction-dependent reach planning errors. Neural correlates of these behavioral effects were probed using population decoding techniques. For small shifts in the visual position of the arm, no differences in decoding accuracy between the T and NT conditions were observed in either M1 or PMd. However, for larger visual shifts, decoding accuracy decreased in the NT condition, but only in PMd. Thus, activity in PMd, but not M1, may reflect the uncertainty in reach planning that results when sensory cues regarding initial hand position are erroneous or absent.
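The abstract does not specify the decoding method; one simple form population decoding can take is classifying each trial's spike-count vector by its nearest class mean. A minimal pure-Python sketch under that assumption (the toy data are illustrative, not recorded data):

```python
import math

def train_means(trials):
    """trials: list of (spike_counts, label); returns label -> mean vector."""
    sums, counts = {}, {}
    for x, y in trials:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def decode(means, x):
    """Assign x to the label whose mean is nearest in Euclidean distance."""
    return min(means, key=lambda y: math.dist(means[y], x))

def accuracy(means, trials):
    hits = sum(decode(means, x) == y for x, y in trials)
    return hits / len(trials)

# Toy demo: two reach directions, two "neurons" (spike counts per trial)
trials = [([10, 0], "left"), ([12, 1], "left"),
          ([0, 9], "right"), ([1, 11], "right")]
means = train_means(trials)
print(decode(means, [11, 0]))   # left
```

Decoding accuracy computed this way, compared across the T/NT and V/P conditions, is the kind of quantity the analysis above contrasts between M1 and PMd.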
Date Created
2023

Diffusion Tensor Imaging of Parkinson’s Disease Patients and Their Cognitive Assessments

Description

Diffusion Tensor Imaging may be used to understand brain differences within Parkinson’s Disease (PD). Over the last couple of decades there has been an explosion of learning and development in neuroimaging techniques. Today, it is possible to monitor, with little delay, where the brain needs blood during a specific task using functional Magnetic Resonance Imaging (fMRI), and to track and visualize where, and at which orientation, water molecules in the brain are moving using Diffusion Tensor Imaging (DTI). Data on diseases such as PD have grown considerably, and it is now known that people with PD can be assessed with cognitive tests in combination with neuroimaging to determine whether they have cognitive decline in addition to any decline in motor ability. The Montreal Cognitive Assessment (MoCA), Modified Semantic Fluency Test (MSF), and Mini-Mental State Exam (MMSE) are the primary tools, and they are often combined with fMRI or DTI to diagnose whether people with PD also have a mild cognitive impairment (MCI). The current thesis explored a cohort of PD patients classified based on their MoCA, MSF, and Lexical Fluency (LF) scores. The results indicate specific brain differences between PD patients who scored low and those who scored high on LF and MoCA. The current study’s findings add to the existing literature suggesting that DTI may be more sensitive in detecting differences based on clinical scores.
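The abstract does not name the DTI metrics used; a standard scalar derived from the diffusion tensor's eigenvalues is fractional anisotropy (FA), which quantifies how directionally constrained water diffusion is. A small sketch of the textbook formula, offered only as background:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues of a diffusion tensor.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    ranging from 0 (isotropic diffusion) to 1 (fully anisotropic).
    """
    m = (l1 + l2 + l3) / 3
    num = (l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den)

print(fractional_anisotropy(1.0, 1.0, 1.0))  # 0.0: isotropic diffusion
```

Diffusion restricted to a single direction, e.g. eigenvalues (1, 0, 0), gives FA = 1, the kind of value expected along coherent white-matter tracts.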
Date Created
2022

Response Accuracy and Response Time in Multisensory Localization

Description

Spatial awareness (i.e., the sense of the space that we are in) involves the integration of auditory, visual, vestibular, and proprioceptive sensory information about environmental events. Hearing impairment has negative effects on spatial awareness and can result in deficits in communication and in the overall aesthetic experience of life, especially in noisy or reverberant environments. This deficit occurs because hearing impairment reduces the signal strength needed for auditory spatial processing and changes how auditory information is combined with other sensory inputs (e.g., vision). The influence of multisensory processing on spatial awareness in listeners with normal and impaired hearing is not assessed in clinical evaluations, and patients’ everyday sensory experiences are currently not directly measurable. This dissertation investigated the role of vision in auditory localization in listeners with normal and impaired hearing in a naturalistic stimulus setting, using natural gaze-orienting responses. Experiments examined two behavioral outcomes, response accuracy and response time, based on eye movements in response to simultaneously presented auditory and visual stimuli. The first set of experiments examined the effects of stimulus spatial saliency on response accuracy and response time, and the extent of visual dominance in both metrics, in auditory localization. The results indicate that vision can significantly influence both the speed and accuracy of auditory localization, especially when auditory stimuli are more ambiguous. The influence of vision is shown for both normal-hearing and hearing-impaired listeners. The second set of experiments examined the effect of frontal visual stimulation on localizing an auditory target presented from in front of or behind a listener. The results show domain-specific effects of visual capture on both response time and response accuracy. These results support previous findings that auditory-visual interactions are not limited by the spatial rule of proximity. They further suggest a strong influence of vision on both the processing and the decision-making stages of sound source localization for listeners with both normal and impaired hearing.
Date Created
2021

Marmoset Calls Labeling

Description

Callithrix jacchus, also known as the common marmoset, is native to the New World. These marmosets possess a wide vocal repertoire that is interesting to observe for the purpose of understanding their group communication and their fight-or-flight responses to the environment around them. In this project, I am continuing the work of a previous student, Jasmin, to collect more data for her study. For the most part, my project entailed recording the marmosets’ calls and labeling them by call type.
Date Created
2021-05

Binaural Beats and Their Interaction Within the Superior Olivary Complex

Description

This study focuses on the properties of binaural beats (BBs) compared to monaural beats (MBs) and their steady-state response at the level of the Superior Olivary Complex (SOC). An auditory nerve stimulator was used to simulate the response of the SOC. The simulator was fed either BB or MB stimuli to compare the SOC responses. This was done for difference frequencies of twenty, forty, and sixty hertz for comparison of the SOC response envelopes. A correlation was computed between the SOC response envelopes for both types of beats and the waveform resulting from adding the two tones together. The highest correlation for BBs was found at forty hertz, and for MBs at sixty hertz. A Fast Fourier Transform (FFT) was also computed on the stimulus envelopes and the SOC response envelopes. The FFT showed that, for the BB presentation, the envelopes of the original stimuli contained no difference frequency; however, the difference frequency was present in the binaural SOC response envelope. For the MBs, the difference frequency was present in both the stimulus and the monaural SOC response envelope.
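The key stimulus property above can be illustrated numerically: a monaural-beat stimulus (two tones in the same ear) carries the difference frequency in its envelope, while each ear of a binaural-beat stimulus receives a single pure tone with a flat envelope. A small sketch (parameters are illustrative, not the stimulator used in the study); squaring the signal is a crude stand-in for envelope extraction, and a Goertzel-style correlation measures the 40-Hz component:

```python
import math, cmath

FS = 8000              # sample rate (Hz)
F1, F2 = 400.0, 440.0  # carrier tones; difference frequency = 40 Hz

def tone(f):
    # One second of a pure cosine at frequency f
    return [math.cos(2 * math.pi * f * n / FS) for n in range(FS)]

def component_at(x, f):
    # Magnitude of the f-Hz component of x (normalized DFT bin)
    acc = sum(s * cmath.exp(-2j * math.pi * f * i / FS) for i, s in enumerate(x))
    return abs(acc) / len(x)

# Monaural beats: both tones mixed into the same ear
mb = [a + b for a, b in zip(tone(F1), tone(F2))]
# Binaural beats: each ear receives only one pure tone
bb_left = tone(F1)

# Squaring approximates rectification-based envelope extraction
mb_env_40 = component_at([s * s for s in mb], F2 - F1)
bb_env_40 = component_at([s * s for s in bb_left], F2 - F1)

print(mb_env_40)   # ~0.5: the 40-Hz difference frequency is in the MB envelope
print(bb_env_40)   # ~0: a single pure tone has no 40-Hz envelope component
```

For the BB case the difference frequency is absent from each ear's stimulus, which is why its appearance in the binaural SOC response envelope reflects neural interaction between the two ears rather than a property of the input.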
Date Created
2021